Best database for inserting records [closed]?

If your main goal is to insert a lot of data in a little time, perhaps the filesystem is all you need. Why not write the data to a file, optionally in a DB-friendly format (CSV, XML, ...)? That way you can probably achieve 10 times your performance goal without too much trouble.
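
If you go the file route, here is a minimal sketch of the idea (Python, with an invented record layout); the resulting CSV can later be bulk-loaded with MySQL's LOAD DATA INFILE, Postgres's COPY, or whatever bulk tool your eventual database offers:

    import csv
    import os

    # Minimal sketch: append each incoming record to a CSV file and fsync
    # every so often, so a crash loses at most one small batch. The record
    # layout is made up for illustration.
    def append_records(path, records, fsync_every=1000):
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for i, record in enumerate(records, 1):
                writer.writerow(record)
                if i % fsync_every == 0:
                    f.flush()
                    os.fsync(f.fileno())  # force the batch onto disk
            f.flush()
            os.fsync(f.fileno())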

And most OSs are robust enough nowadays to prevent data loss on application failure. Edit: As noted in the comments below, journaling file systems are pretty much designed so that data is not lost in the event of software failures (or even hardware failures, in the case of RAID arrays). ZFS has a good reputation here.

Ok, I could use a file and then do a bulk insert. Are you sure that I can recover a file in case of a crash on Linux? – Geek Jun 18 '09 at 11:42

ZFS or any of the journaled filesystems handle system crashes pretty gracefully. – Sam Saffron Jun 18 '09 at 11:47

Postgres provides a WAL (Write-Ahead Log), which essentially buffers inserts in RAM until the buffer is full or the system has time to breathe. Combine a large WAL cache with a UPS (for safety) and you get very efficient insert performance.
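
As a rough sketch of what that looks like from the client side (assuming Python with psycopg2; the events table, its columns, and the connection string are made up for illustration, while synchronous_commit is a standard Postgres setting):

    import psycopg2
    from psycopg2.extras import execute_values

    # Rough sketch, assuming psycopg2 and a hypothetical "events" table;
    # the connection string is a placeholder.
    conn = psycopg2.connect("dbname=test")
    with conn, conn.cursor() as cur:
        # Optional: let COMMIT return before the WAL is flushed to disk.
        # This trades a small durability window for throughput, which is
        # reasonable if the box is on a UPS.
        cur.execute("SET synchronous_commit TO OFF")
        rows = [(i, "payload %d" % i) for i in range(5000)]
        # One batched INSERT instead of 5000 round trips.
        execute_values(cur, "INSERT INTO events (id, payload) VALUES %s", rows)
    conn.close()

Turning synchronous_commit off means a crash can lose the last few hundred milliseconds of commits, which is roughly the trade-off the UPS is there to cover.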

If you can't do SQLite, I'd take a look at Firebird SQL if I were you.

@Geek - My experience is that Firebird will outperform MySQL on a properly designed network setup, as well as generating less network traffic. If it's a local setup then you may be better off with MySQL. Good luck anyway. – Galwegian Jun 18 '09 at 11:13

Thanks Gal. I would definitely consider Firebird. I am currently working with SQLite but it is way too slow for reading once the DB reaches a few million rows. Reading the DB, however, is a rare operation. Please explain "properly designed network setup". – Geek Jun 18 '09 at 11:16

@Geek is your SQLite table indexed properly? – Sam Saffron Jun 18 '09 at 11:22

To get high throughput you will need to batch inserts into a big transaction. I really doubt you could find any DB that lets you round-trip 5000 times a second from your client. SQLite can handle tons of inserts (25K per second inside a single transaction) provided the workload is not too multithreaded and the inserts are batched.

Also, if structured correctly, I see no reason why MySQL or Postgres would not sustain 5000 rows per second (provided the rows are not too fat). Both MySQL and Postgres are also a lot more forgiving of a larger number of concurrent transactions.
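
Here is a minimal sketch of the batching idea using Python's built-in sqlite3 module (table name and columns are invented for illustration):

    import sqlite3

    # Minimal sketch of batched inserts with the standard-library sqlite3
    # module. The table name and columns are invented for illustration.
    conn = sqlite3.connect("samples.db")
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS samples (id INTEGER, value REAL)")

    rows = [(i, i * 0.5) for i in range(25000)]

    # One transaction around the whole batch: a single fsync at COMMIT,
    # instead of one per row.
    with conn:
        conn.executemany("INSERT INTO samples (id, value) VALUES (?, ?)", rows)

    conn.close()

Committing after every row instead forces a disk sync per insert, which is the per-row-commit problem raised in the comments below.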

I can't use the in-memory mode of SQLite. – Geek Jun 18 '09 at 11:02

You don't have to ... those benchmarks are for data written to disk. – Sam Saffron Jun 18 '09 at 11:06

I have used SQLite but information retrieval in SQLite is way too slow once the DB hits 10 million rows. Do you think MySQL would be better? – Geek Jun 18 '09 at 11:12

Yes, MySQL scales better and handles multithreading better; just make sure you read up about InnoDB before starting any MySQL work. – Sam Saffron Jun 18 '09 at 11:16

-1. OP states "I need to commit the row as soon as I receive it." That's a transaction per row. SQLite performance will be terrible. – finnw Jun 18 '09 at 11:24

The performance you want is really not that hard to achieve, even on a "traditional" relational DBMS. If you look at the results for non-clustered TPC-C (TPC-C is the de facto standard benchmark for transaction processing), many systems can deliver 10 times your requirement. If you are going for cheap and solid, you might want to check out DB2 Express-C. It is limited to two cores and two gigabytes of memory, but that should be more than enough to satisfy your needs.

