2**63 incorrectly inserted into bigint

Bug #316221 reported by Stewart Smith
Affects    Status         Importance    Assigned to    Milestone
Drizzle    Fix Released   Medium        Brian Aker
7.0        Fix Released   Medium        Brian Aker

Bug Description

drizzle> CREATE TABLE t1 (col1 bigint);
Query OK, 0 rows affected (0.02 sec)

drizzle> insert into t1 values (9223372036854775808);
Query OK, 1 row affected (0.00 sec)

drizzle> select * from t1;
+----------------------+
| col1                 |
+----------------------+
| -9223372036854775808 |
+----------------------+
1 row in set (0.00 sec)

drizzle>

Revision history for this message
Stewart Smith (stewart) wrote :

See the commented-out items in strict.test.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Verified as described on latest trunk. This is a serious data problem, so setting to Critical importance. Whoever fixes this should investigate the MySQL source code for the NO_UNSIGNED_SUBTRACTION sql_mode...

Changed in drizzle:
importance: Undecided → Critical
status: New → Confirmed
Changed in drizzle:
assignee: nobody → jaypipes
milestone: none → cirrus
Changed in drizzle:
milestone: cirrus → aloha
Revision history for this message
Jay Pipes (jaypipes) wrote :

So, this turns out to be an interesting dilemma...

9223372036854775808 is converted to -9223372036854775808 during parse, because int64_t, not uint64_t, is the base type used in all of the various Field::store() methods. Unfortunately, by the time the number makes its way to Field_int64_t::store(), it is an int64_t with value -9223372036854775808 and is indistinguishable from its original unsigned form.

I'm unsure how to solve this problem without making some fairly radical changes to the way numbers are parsed...

Suggestions welcome; I'm a little stumped for an easy fix to this one...
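To illustrate the wrap Jay describes, here is a minimal standalone C++ sketch (illustrative only, not Drizzle's actual code): once the parsed digits are narrowed to int64_t, 2**63 becomes bit-identical to -2**63, so a store() method has no way to recover the sign.

#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
    // The lexer can read the digits of 2**63 without trouble as unsigned.
    uint64_t parsed = strtoull("9223372036854775808", NULL, 10);

    // Carrying the value onward as a signed int64_t (as the Item/Field
    // layer does) reinterprets the high bit as a sign bit. (Formally this
    // narrowing is implementation-defined before C++20; on two's-complement
    // platforms it wraps.)
    int64_t narrowed = (int64_t)parsed;

    printf("as uint64_t: %llu\n", (unsigned long long)parsed);  /* 9223372036854775808 */
    printf("as int64_t:  %lld\n", (long long)narrowed);         /* -9223372036854775808 */

    // By the time Field_int64_t::store() runs, `narrowed` is bit-identical
    // to a user-supplied -9223372036854775808, so the overflow is invisible.
    return 0;
}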

Revision history for this message
Swany (greenlion) wrote :

Could the int64_t base type be changed to uint64_t, with the storage engine casting to signed on storage and retrieval of values?

Revision history for this message
Swany (greenlion) wrote :

I think I misunderstood the problem. Both of the following should have the same error behavior, but the first insert succeeds where it should not, because the parser wraps values before the storage engine sees them.

drizzle> insert into t1 values (18446744073709551615);
Query OK, 1 row affected (0 sec)
drizzle> select * from t1;
+------+
| c1   |
+------+
|   -1 |
+------+
1 row in set (0 sec)

drizzle> insert into t1 values (pow(2,64));
ERROR 1264 (22003): Out of range value for column 'c1' at row 1
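A sketch of why the two statements diverge (the function below is illustrative, not Drizzle's actual store path): pow(2,64) arrives as a double whose magnitude is intact, so a range check is still possible, while the bare literal has already wrapped during parsing.

#include <cmath>
#include <cstdint>
#include <cstdio>

// Illustrative double-to-BIGINT store path: the true magnitude survives
// in the double, so an out-of-range check can still fire here.
static bool store_from_double(double v, int64_t *out) {
    if (v >= 9223372036854775808.0 || v < -9223372036854775808.0)
        return false;  /* would raise ERROR 1264: out of range */
    *out = (int64_t)v;
    return true;
}

int main() {
    int64_t slot;

    // pow(2,64) keeps its magnitude and is rejected by the check above.
    printf("pow(2,64) stored ok? %d\n", store_from_double(pow(2, 64), &slot));

    // The bare literal path has already wrapped before any check can run.
    int64_t wrapped = (int64_t)18446744073709551615ULL;
    printf("literal arrives as %lld\n", (long long)wrapped);  /* -1 */
    return 0;
}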

Revision history for this message
Swany (greenlion) wrote :

This causes another problem. The following CTAS should create a DECIMAL column, but instead it creates a BIGINT with a wrong value:

drizzle> create table t3 as select 18446744073709551615;
Query OK, 1 row affected (0.13 sec)
Records: 1 Duplicates: 0 Warnings: 0

drizzle> select * from t3;
+----------------------+
| 18446744073709551615 |
+----------------------+
|                   -1 |
+----------------------+
1 row in set (0 sec)

drizzle> show create table t3\G
*************************** 1. row ***************************
       Table: t3
Create Table: CREATE TABLE `t3` (
  `18446744073709551615` bigint NOT NULL
) ENGINE=InnoDB
1 row in set (0 sec)

drizzle> create table t4( c1 decimal(22,0)) engine=innodb;
Query OK, 0 rows affected (0.01 sec)

drizzle> insert into t4 values (18446744073709551615);
Query OK, 1 row affected (0 sec)

drizzle> select * from t4;
+----------------------+
| c1 |
+----------------------+
| 18446744073709551615 |
+----------------------+
1 row in set (0 sec)
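Given the DECIMAL workaround above, one plausible direction for the CTAS case is to classify integer literals by magnitude at parse time, promoting anything above INT64_MAX to DECIMAL instead of narrowing it. A hedged sketch (none of these names exist in Drizzle):

#include <cerrno>
#include <cstdint>
#include <cstdlib>

enum literal_type { LITERAL_INT64, LITERAL_DECIMAL };

// Hypothetical classifier: choose the literal's type from its magnitude
// so that values above INT64_MAX surface as DECIMAL rather than wrapping.
static enum literal_type classify_integer_literal(const char *digits) {
    errno = 0;
    unsigned long long v = strtoull(digits, NULL, 10);
    if (errno == ERANGE)
        return LITERAL_DECIMAL;   /* exceeds even uint64_t */
    if (v > 9223372036854775807ULL)
        return LITERAL_DECIMAL;   /* fits uint64_t, not int64_t */
    return LITERAL_INT64;
}

Under a rule like this, create table t3 as select 18446744073709551615 would derive a DECIMAL column, matching the behavior of the explicit t4 workaround.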

Revision history for this message
Jay Pipes (jaypipes) wrote :

Yeah, so you've hit the nail on the head, and that's the reason why I quit trying to solve this bug in the Field_int64_t class: the parser wraps the numbers in an Item_int class incorrectly. In addition, all integer math is done using int64_t, not uint64_t, and the changes required to fix this are major.

Changed in drizzle:
assignee: Jay Pipes (jaypipes) → Brian Aker (brianaker)
milestone: aloha → bell
Changed in drizzle:
milestone: bell → cherry
Changed in drizzle:
milestone: 2010-04-26 → none
importance: Critical → Medium
Revision history for this message
Patrick Crews (patrick-crews) wrote :

This bug is pretty interesting:

drizzle> create table t1 (a bigint);
drizzle> insert into t1 values (12955000000000000000);
Query OK, 1 row affected (0.05 sec)

drizzle> select * from t1;
+----------------------+
| a                    |
+----------------------+
| -5491744073709551616 |
+----------------------+
1 row in set (0 sec)

drizzle> select * from t1 where a = 12955000000000000000;
+----------------------+
| a                    |
+----------------------+
| -5491744073709551616 |
+----------------------+
1 row in set (0 sec)
drizzle> select * from t1 where a = -5491744073709551616;
+----------------------+
| a                    |
+----------------------+
| -5491744073709551616 |
+----------------------+

Maximum value accepted into a BIGINT (it should be rejected as out of range): 18446744073709551615
drizzle> insert into t1 values (18446744073709551615);
Query OK, 1 row affected (0.05 sec)

drizzle> select * from t1;
+------+
| a    |
+------+
|   -1 |
+------+
1 row in set (0 sec)

This is a PITA bug: it's pretty seriously broken and bad news for anyone who wants to use big numbers. However, INSERTs up to the proper max, 9223372036854775807, work as expected.
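The equality lookup above is consistent with the parse-time wrap: the WHERE constant is narrowed through the same int64_t path as the INSERT literal, so the two wrong values compare equal. A tiny sketch (illustrative, not Drizzle code):

#include <cassert>
#include <cstdint>

int main() {
    // The INSERT literal and the WHERE literal are narrowed identically...
    int64_t stored = (int64_t)12955000000000000000ULL;  /* -5491744073709551616 */
    int64_t probe  = (int64_t)12955000000000000000ULL;

    // ...so the lookup "works" by accident on the corrupted value.
    assert(stored == probe);
    return 0;
}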

Revision history for this message
Patrick Crews (patrick-crews) wrote :

The bad data is also making it into the transaction log.

Revision history for this message
Lee Bieber (kalebral-deactivatedaccount) wrote :

This has been fixed; I am re-enabling this in strict.test.
