
I have the following table:

create table table1  
( 
    id serial, 
    workdate date,
    tanknum1 integer,
    tanknum2 integer,
    tanknum3 integer,
    tank1startingvalue float,
    tank2startingvalue float,
    tank3startingvalue float,
    tank1endvalue float,
    tank2endvalue float,
    tank3endvalue float
); 

And I have inserted the following data:

insert into table1(id, workdate) values (DEFAULT, '01/12/2023'); 

Then I updated it, targeting the first NULL column of the sequence. For example, I ran the following update:

update table1 set tanknum1 = 8 where id = 1;

Now that I have updated it once, I want a query that looks for the first NULL column and updates only that one. For example: tanknum1 is already non-NULL, so when I run the update for date 01/12/2023 I want it to look at tanknum2 and tanknum3. Is tanknum2 NULL? Then set it. Is tanknum2 non-NULL? Then check tanknum3: is it NULL? And so on. The edge case: if I get to tanknum3 and it is already non-NULL, I don't want to update anything.

I'm doing this to control a gas station inventory, and I'm not able to upgrade the Postgres version. The information arrives via a .txt file containing only the tank number, date, starting value and ending value.

How can I do that in PostgreSQL? (Using Postgres 9.6)

I tried to use COALESCE, but I get the exact reverse of the result I'm looking for. I'm out of ideas now, except using a lot of CASE WHEN in my code, which I'm trying to avoid; I want something more elegant. The kind of CASE WHEN cascade I'm trying to avoid is shown below.
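Here is a sketch of that cascade (with 9 standing in for the value coming from the .txt file); it works because every expression in an UPDATE sees the pre-update column values, so only the first NULL column changes:

-- verbose CASE WHEN version: fill the first NULL column only
update table1
set tanknum1 = case when tanknum1 is null
                    then 9 else tanknum1 end,
    tanknum2 = case when tanknum1 is not null and tanknum2 is null
                    then 9 else tanknum2 end,
    tanknum3 = case when tanknum1 is not null and tanknum2 is not null and tanknum3 is null
                    then 9 else tanknum3 end
where workdate = '01/12/2023';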

  • Coalesce() seems to be usable for what you described: update table1 set col1 = coalesce(col1,'12') where id = 1; if col1 is not null, then it remains unchanged. If it is, it's set to '12'. Problem is, if you target all columns this way, they'll all get updated independently, not just the first one. Can you provide some additional context on what it is you're trying to achieve with this? What's the actual problem you're trying to solve with this? Commented Jan 24, 2024 at 17:48
  • There isn't a function to do what you've asked nor should there be. Column order has no semantic significance because a row (tuple in relational theory parlance) is an unordered set of attributes. From your post, it appears that col1, col2, etc., form a repeating group, which should not exist in a normalized database; however, there could be valid reasons to violate this guideline. If col2 is non-NULL, must col1 also be non-NULL? If so, then constraints should be in place to enforce that rule (a sketch of such constraints follows these comments). Please update your post with the use case and data integrity rules. Commented Jan 24, 2024 at 18:28
  • "Using Postgres 9.6"? You must be joking. That version was released back in 2016 and has been EOL since 2021. Do yourself a favor and upgrade to a recent (supported) version. For identity columns, use IDENTITY: postgresql.org/docs/current/sql-createtable.html Commented Jan 24, 2024 at 18:47
  • What should happen if col1, col2, and col3 are already all non-NULL? It's very important to consider edge cases and possible failure modes and to fully specify expected behavior. COALESCE is essentially syntactic sugar for a CASE expression to find the first non-NULL value, so it's only more elegant by virtue of brevity. Commented Jan 24, 2024 at 19:03
  • A minimal reproducible example would make this much clearer. Commented Jan 24, 2024 at 20:44
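To illustrate the constraints suggested in the second comment (assuming the intended rule really is that a later tank column may only be filled once the earlier ones are), something like this could enforce it:

-- hypothetical check constraints: a later column may only be non-NULL if the earlier one is
alter table table1
    add constraint tank2_requires_tank1 check (tanknum2 is null or tanknum1 is not null),
    add constraint tank3_requires_tank2 check (tanknum3 is null or tanknum2 is not null);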

1 Answer


"return the first null value of a row"

Turn the row into an array; array_position(row_array, null) returns the position of the first NULL in that row. Use that together with array slicing and the sub-SELECT-based UPDATE syntax, and you can avoid the extensive CASE tree.

Construct an array out of the row, slice it up to just before the first NULL, concatenate your desired element there with ||, then append the remaining slice with another ||. Demo at db<>fiddle:

update table1 set (tanknum1, tanknum2, tanknum3) = (
    select arr[1], arr[2], arr[3]
    from (select coalesce(arr[:array_position(arr,null)-1], arr)  -- everything before the first NULL (whole array if there is none)
                 || array[9]                                       -- the new value
                 || arr[array_position(arr,null)+1:] as "arr"      -- everything after the first NULL
          from (select array[tanknum1, tanknum2, tanknum3] arr) i1
    ) i2)
where workdate = '01/12/2023'
returning *;
 id | workdate   | tanknum1 | tanknum2 | tanknum3
----+------------+----------+----------+----------
  1 | 2023-12-01 |        8 |        9 |     null

You can achieve a similar effect with jsonb and related functions. That's just to demonstrate it's doable, not that it's a good idea. It's best to state what your actual problem is and how you see this kind of mechanism helping you arrive at a solution.
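For illustration only, a sketch of such a jsonb-based variant (not part of the demo above, and again hard-coding 9 as the incoming value) might look like this:

update table1 set (tanknum1, tanknum2, tanknum3) = (
    select (j->>0)::integer, (j->>1)::integer, (j->>2)::integer
    from (
        select case when pos is null then arr                                 -- no NULL column left: keep the row as is
                    else jsonb_set(arr, array[(pos - 1)::text], to_jsonb(9))  -- patch the first NULL slot (jsonb paths are 0-based)
               end as j
        from (select jsonb_build_array(tanknum1, tanknum2, tanknum3) as arr) a
        cross join lateral (
            select min(ord) as pos
            from jsonb_array_elements(arr) with ordinality as e(val, ord)
            where jsonb_typeof(val) = 'null'
        ) p
    ) s
)
where workdate = '01/12/2023'
returning *;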


2 Comments

This works fine in 9.6. 9.5 didn't allow omitting slice bounds, but it can be made to work there too. 9.4 didn't yet implement this sort of subselect-based update syntax. Still, please consider an upgrade.
Worth noting that this sort of thing relies on all these columns having the same type. In a jsonb you can have a multi-typed array, but since you need casting on input and output of values in a jsonb, you can just as well cast to text on the way into a regular, native array[] and back out again.
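For a group of columns with mixed types, the text round-trip described in the last comment might look roughly like this (hypothetical sketch; the tanknum columns here all happen to be integer, so the casts are purely illustrative):

update table1 set (tanknum1, tanknum2, tanknum3) = (
    select arr[1]::integer, arr[2]::integer, arr[3]::integer  -- cast back on the way out
    from (select coalesce(arr[:array_position(arr,null)-1], arr)
                 || array['9'::text]                          -- the new value, as text
                 || arr[array_position(arr,null)+1:] as arr
          from (select array[tanknum1::text,                  -- cast to text on the way in
                             tanknum2::text,
                             tanknum3::text] arr) i1
    ) i2
)
where workdate = '01/12/2023'
returning *;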
