I have a server with 150 GB of disk space. Today I uploaded a 30 GB dataset. I cancelled the import when my internet connection died, and then noticed that 29 GB of space was missing in the database (meaning the CSV data was written, but not cleaned up when I broke the operation). When I uploaded the data again, it broke again and I lost another ~25 GB. Now there isn't enough free space left to upload the data.
This is hosted on AWS RDS, Postgres 10.6.
Is there a way to fix this? I read about VACUUM, but will it delete records? I'm currently hosting ~70 GB of data and don't want to lose any of it. What's the best way to go about this?
VACUUM FULL is what you need, and no, it will not delete any rows. It rewrites each table compactly and returns the reclaimed space to the operating system. Two caveats: it takes an exclusive lock on the table while it runs, and it needs enough free disk space to hold the rewritten copy of the table. A plain VACUUM (without FULL) only marks the dead space as reusable within Postgres and does not shrink the files on disk.
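A minimal sketch of what this might look like in psql. The table name `my_table` is a placeholder for whichever table the import targeted; the first query uses the standard `pg_stat_user_tables` view to see where the dead rows from the aborted imports ended up:

```sql
-- See which tables hold the most dead rows from the aborted imports
SELECT relname,
       n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Reclaim the space and return it to the OS. Note: this takes an
-- exclusive lock and needs free disk space for a temporary copy
-- of the table ("my_table" is a placeholder).
VACUUM FULL VERBOSE my_table;
```

Vacuuming the single affected table rather than the whole database keeps the lock and the temporary-copy space requirement limited to that one table, which matters here since free disk space is the constraint.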