I have a large CSV file with nearly 100 columns of varying data types that I would like to load into a SQLite database using SQLAlchemy. This will be an ongoing process: I will periodically load new data as a new table in the database. This seems like it should be trivial, but I cannot get anything to work.
All the solutions I've found so far define the columns explicitly when creating the tables.
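For context, the explicit approach looks something like this, which is unmanageable at ~100 columns (the column names here are illustrative, not my actual CSV headers):

from sqlalchemy import Column, Integer, String, MetaData, Table

metadata = MetaData()
players = Table(
    'players', metadata,
    Column('ID', Integer, primary_key=True),
    Column('Name', String),
    Column('Age', Integer),
    # ...and so on, one Column per CSV field
)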
Here is a minimal example (with far fewer columns) of what I have at the moment.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, insert
import pandas as pd

url = r"https://raw.githubusercontent.com/amanthedorkknight/fifa18-all-player-statistics/master/2019/data.csv"
df = pd.read_csv(url, sep=",")
# a list with one dict per row is the shape execute() expects for a multi-row insert
values_list = df.to_dict(orient="records")

metadata = MetaData()
engine = create_engine("sqlite:///" + r"C:\Users\...\example.db")
connection = engine.connect()

# I would like to define just the primary key column and have the others created automatically...
t1 = Table('t1', metadata, Column('ID', Integer, primary_key=True))
metadata.create_all(engine)

stmt = insert(t1)
results = connection.execute(stmt, values_list)
connection.close()
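To clarify what I'm after: pandas' DataFrame.to_sql will create the table and infer the column types for me, but as far as I can tell it offers no way to declare 'ID' as the primary key at creation time (untested sketch):

import pandas as pd
from sqlalchemy import create_engine

url = r"https://raw.githubusercontent.com/amanthedorkknight/fifa18-all-player-statistics/master/2019/data.csv"
df = pd.read_csv(url)

engine = create_engine("sqlite:///" + r"C:\Users\...\example.db")

# pandas infers a type for each column and creates the table automatically,
# but there is no parameter here for marking 'ID' as the primary key
df.to_sql('t1', engine, if_exists='replace', index=False)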