
We are a development team that has run into a strange bug in Jira. To clean up the mess from the bug, we want to update the dates of our sprints in the Jira database.

We are running Windows Server with PostgreSQL installed on it.

I have found the relevant table and when I write

select *
from "AO_60DB71_SPRINT"

I get this:


CLOSED; COMPLETE_DATE; END_DATE; ID; NAME; RAPID_VIEW_ID; SEQUENCE; STARTED; START_DATE

t;1433318043661;1433226900000;1;"Sprint 1";1;;t;1432190100102
t;1433924067416;-61680144720000;2;"Sprint 2";1;;t;-61681095120000
t;1434528978422;-61679885580000;3;"Sprint 3";1;;t;-61680144780000
t;1435130684508;-61678935480000;4;"Sprint 4";1;;t;-61679540276038
t;1435735227248;-61678337460000;5;"Sprint 5";1;;t;-61679115060000
t;1436340875991;-61677749880000;6;"Sprint 6";1;;t;-61678354663584
t;1436944702756;-61677125820000;7;"Sprint 7";1;;t;-61677730634396
t;1437549239766;-61676517000000;8;"Sprint 8";1;;t;-61677121774120
t;1438154558709;-61675915920000;9;"Sprint 9";1;;t;-61676520745914
t;1438764063437;-61675313460000;10;"Sprint 10";1;;t;-61675918235812
t;1439366509383;-61674701940000;11;"Sprint 11";1;;t;-61675306752010
t;1439970303684;-61674080220000;12;"Sprint 12";1;;t;-61674703008615
f;;1440602460000;13;"Sprint 13";1;;t;1439979707567


The interesting fields here are the date values stored as bigints. A few of these values are positive and the others are negative.

When I look at what the dates represent by writing

select TO_TIMESTAMP("START_DATE" / 1000)
from "AO_60DB71_SPRINT"

"2015-05-21 08:35:00+02"
"0015-05-28 11:28:00+01"
"0015-06-08 11:27:00+01"
"0015-06-15 11:22:04+01"
"0015-06-20 09:29:00+01"
"0015-06-29 04:42:17+01"
"0015-07-06 10:02:46+01"
"0015-07-13 11:10:26+01"
"0015-07-20 10:07:35+01"
"0015-07-27 09:29:25+01"
"0015-08-03 11:20:48+01"
"0015-08-10 11:03:12+01"
"2015-08-19 12:21:47+02"

What I want to achieve is an update to the above column where every date in the year 0015 is changed to the bigint corresponding to the same date in the year 2015.
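
For reference, the affected rows can be listed directly (a quick check, assuming, as the data above suggests, that the broken rows are exactly the ones with a negative START_DATE):

select "ID", "NAME", TO_TIMESTAMP("START_DATE" / 1000)
from "AO_60DB71_SPRINT"
where "START_DATE" < 0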

My plan was to do something like this:

Select
   "START_DATE",
   EXTRACT(EPOCH FROM INTERVAL '2000 years')*1000 + "START_DATE"
from "AO_60DB71_SPRINT"

But the resulting datatype of the second column is double precision.
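
The type can be confirmed with pg_typeof (just a sanity check; on our Postgres version EXTRACT(EPOCH FROM ...) returns double precision, though newer Postgres versions return numeric instead):

select pg_typeof(EXTRACT(EPOCH FROM INTERVAL '2000 years')*1000 + "START_DATE")
from "AO_60DB71_SPRINT"
limit 1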

My questions, in the end, are:

  1. Is it safe to make an update where I insert the double into the bigint column?
  2. If not, what is the missing step in my conversion?
  3. Being a total novice to Postgres, how do I perform the update?
  4. Do I need to issue a COMMIT afterwards?

Thanks in advance

2 Comments
  • I only have one question: why would anyone store a date or timestamp in an integer column? Commented Aug 19, 2015 at 17:13
  • I ask myself the same thing! I assume it is a Jira/Atlassian thing, but I'm not sure. We installed the products, found this oddity, and fixed our problem. Commented Oct 15, 2015 at 14:10

1 Answer


The field you are updating seems harmless, so there is little risk in performing an update on this column.

You can reference the current value of Start_Date in an UPDATE query, and you can also use a WHERE clause to narrow down the target rows.

Conversions are done using ::type notation.

A query that can do what you want could look like:

UPDATE AO_60DB71_SPRINT
    SET Start_Date = Start_Date + (EXTRACT(EPOCH FROM INTERVAL '2000 years')*1000)::bigint
    WHERE Start_Date < 0;

On success it should return UPDATE <count> and does not require a COMMIT.
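
If you want to inspect the result before making it permanent, you can also wrap the UPDATE in an explicit transaction (a sketch using the quoted, upper-case identifiers from the question; inside an explicit BEGIN the COMMIT is required):

    BEGIN;

    UPDATE "AO_60DB71_SPRINT"
        SET "START_DATE" = "START_DATE" + (EXTRACT(EPOCH FROM INTERVAL '2000 years')*1000)::bigint
        WHERE "START_DATE" < 0;

    -- check that the converted dates look right before making the change permanent
    SELECT "NAME", TO_TIMESTAMP("START_DATE" / 1000)
    FROM "AO_60DB71_SPRINT"
    ORDER BY "START_DATE";

    COMMIT; -- or ROLLBACK; if something looks wrong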


1 Comment

I only had to make some minor modifications due to case sensitivity. Many thanks!
