
Background

I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stores, grep/awk and friends, etc.)

Proposal

In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.

I realize this is a little ill-formed, and solutions are welcome.
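To make the idea concrete, here is a minimal sketch of a persistent per-field index, assuming tab-delimited lines. A sorted list of (value, file, byte offset) entries plus binary search stands in for the on-disk binary tree; all names here are illustrative, not from any existing library:

```python
import bisect
import pickle

def build_index(paths, field_no, index_path):
    """Scan tab-delimited files and record (value, path, offset) per line."""
    entries = []
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            for line in f:
                fields = line.rstrip(b"\n").split(b"\t")
                entries.append((fields[field_no], path, offset))
                offset += len(line)
    entries.sort()  # sorted once; the files are read-only, so this never changes
    with open(index_path, "wb") as f:
        pickle.dump(entries, f)

def lookup(index_path, value):
    """Binary-search the persisted index for all lines with this field value."""
    with open(index_path, "rb") as f:
        entries = pickle.load(f)
    hits = []
    lo = bisect.bisect_left(entries, (value,))
    for val, path, offset in entries[lo:]:
        if val != value:
            break
        with open(path, "rb") as f:
            f.seek(offset)  # jump straight to the matching line
            hits.append(f.readline().rstrip(b"\n"))
    return hits
```

One index file per queried field gives the "indexing on different fields at once" behavior; only the offsets are duplicated, not the data.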

Additional Details

  • files long, not wide
    • millions of lines per hour, spread over 100 files per hour
    • tab separated, not many columns (~10)
    • fields are short (say < 50 chars per field)
  • queries are on fields, combinations of fields, and can be historical

Drawbacks to various solutions:

(All of these are based on my observations and tests, but I'm open to correction)

BDB

  • has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
  • single writer (if it's possible to get around this, I want to see code!)
  • hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
  • since it only stores strings, there is a serialize / deserialize step
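The last point can be shown with the stdlib alone: a BDB-style store that only holds byte strings forces a pack/unpack step like this on every record (the three-field layout is just an example):

```python
import struct

# Example record: a row reduced to (user_id, timestamp, score).
RECORD = struct.Struct("<IQd")  # uint32, uint64, float64

def serialize(user_id, timestamp, score):
    """Turn typed fields into the flat byte string a BDB-style store accepts."""
    return RECORD.pack(user_id, timestamp, score)

def deserialize(blob):
    """Recover the typed fields from the stored byte string."""
    return RECORD.unpack(blob)
```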

RDBMSes

Wins:

  • flat table model is excellent for querying, indexing

Losses:

  • In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the RDBMSes I know (SQLite, Postgres) support either batch loading (after which building the indexes is slow) or row-by-row loading (which is slow). Maybe I need more performance tuning.
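For reference, the "batch load, then index at the end" pattern looks like this with the stdlib sqlite3 module (table and column names are made up); the contention is that the CREATE INDEX step on millions of rows is where the time goes:

```python
import sqlite3

def bulk_load(db_path, rows):
    """Load rows in one transaction, then build indexes afterwards."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (host TEXT, ts TEXT, msg TEXT)")
    with conn:  # one transaction: far faster than committing row by row
        conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    # Index after loading; on large tables this step dominates.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_host ON events(host)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_ts ON events(ts)")
    return conn
```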
    This is too open-ended. I think some more details about how you want to query the data, what size is the data, etc. would be helpful to determine what methods would make sense here. Commented Jan 21, 2010 at 16:30

5 Answers


Why reinvent the wheel? By all means, index the files, but use Whoosh, or Lucene, etc.

Edit: you hadn't stated your performance requirements at the time I posted this answer. You're not going to be able to index "millions of rows per hour" with off-the-shelf software.


6 Comments

I did not see any indication that the OP was looking for full-text search.
My answer refers the OP to two index frameworks that are excellent for field-oriented and faceted search, having no particular bias toward unstructured text.
I'm not at all looking for full-text search (and for that I tend to prefer Sphinx :)
Which is why my answer points to two frameworks that are excellently suitable for your purposes.
Whoosh doesn't advertise itself as tabular data indexer, but it does seem to work that way. I wonder what advantages it would have over something like MongoDB. I'll do some testing!

The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.

To reduce the time spent waiting for I/O, your best bet is compression.

Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I/O time, however, will dominate your processing, so reduce I/O time by zipping everything.
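A sketch of this suggestion with the stdlib zipfile module: one archive handle, many members streamed through it, trading CPU (decompression) for I/O (file names and the predicate are illustrative):

```python
import io
import zipfile

def scan_archive(zip_path, predicate):
    """Stream every line of every member through one open archive."""
    hits = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # Decompress as a stream; only one physical file is ever opened.
            with zf.open(name) as raw:
                for line in io.TextIOWrapper(raw, encoding="utf-8"):
                    if predicate(line):
                        hits.append(line.rstrip("\n"))
    return hits
```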

7 Comments

I disagree. We don't know the dataset size, we don't know the memory size, we don't know the access patterns, etc.
@Bandi-T: Been there, done that. Compression yields dramatic improvements for all use cases except the "all we do is insert and never read the file again" scenario. For the historical archive (write-once-read-never) use case, there's no performance optimization necessary.
@S.Lott: I still disagree. Compression might reduce your datasize (your I/O load) 1:10 - proper indexing might reduce it 1:1000.
@Bandi-T: That depends on the selectivity of the index. An index that isn't very selective ('Male' vs. 'Female') may only reduce the volume of rows by about 1/2. Index cleverness can be undone by query dumbness. Compression will often do more for text files and cannot be undone by a dumb query.
@S.Lott: exactly. do you know whether the OP has text files? do you know, how selective his column values are? which leads us back to the first question (to OP) of "What do you have and what do you want to do?"

If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.

Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.

That is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme.

1 Comment

It does indeed sound like tabular data that is well served by a database. In my experience, RDBMSes have been too slow to load the data. I will investigate more, though.

There are a number of file-based DBMSes designed for the sort of data you describe. One of the most widely used is Berkeley DB. It's open source, now owned by Oracle, and comes in C and Java flavors (with Python and other language wrappers for the C version).

Since your data is read-only, though, there are other options. One that comes to mind is CDB. It likewise has Python interfaces, as well as a pure Python reimplementation.

Or, if you really like implementing things yourself, you could always implement SSTables from Google's Bigtable paper. ;)

2 Comments

CDB is news to me! After investigating, it looks like a neat tool that won't really help me here, since the Python bindings seem to require loading the whole thing.
Only the pure-python version does that. The C wrapper uses the regular interface, which works direct from disk.

sqlite3 is fast, small, part of python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system.
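A minimal example of what that buys you (the table is invented): no server to install, a single file (or :memory:) as storage, and an indexed column turns a field lookup into an index search, which EXPLAIN QUERY PLAN can confirm:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # or a file path; nothing to install
conn.execute("CREATE TABLE logs (host TEXT, level TEXT, msg TEXT)")
conn.execute("CREATE INDEX idx_level ON logs(level)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [("web1", "ERROR", "boom"), ("web2", "INFO", "ok")],
)

# The planner should use idx_level rather than scanning the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM logs WHERE level = 'ERROR'"
).fetchall()
rows = conn.execute("SELECT host FROM logs WHERE level = 'ERROR'").fetchall()
```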
