
Re: [Catacomb] RE: storing large files in blob...

Knight, Lloyd wrote:

I have posted the original message to the Catacomb mailing list
but haven't gotten any response yet. I will post this response
there. Maybe one of the principal developers there could address
how Catacomb stores the data?
The current main trunk of Catacomb stores resource data in a BLOB column type.
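For reference, a minimal sketch of what such a table might look like. The table and column names here are hypothetical illustrations, not Catacomb's actual schema:

```sql
-- Hypothetical schema sketch: one resource per row, data in a BLOB column.
-- In MySQL, a plain BLOB holds at most 65,535 bytes; MEDIUMBLOB up to
-- about 16 MB; LONGBLOB up to 4 GB.
CREATE TABLE resource (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    uri       VARCHAR(255) NOT NULL,
    contents  LONGBLOB,
    PRIMARY KEY (id)
);
```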

So, if I follow all this, the simple answer to my question:
can I store files larger than 16M in a long blob?
is yes and no.
This depends on the MySQL implementation of BLOBs (it is version-, configuration-, and possibly OS-dependent), so the Catacomb group can't answer it definitively. See the appropriate MySQL documentation for the exact BLOB size limits.
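As a rough guide (these are facts about stock MySQL, not about Catacomb): TINYBLOB tops out at 255 bytes, BLOB at 64 KB, MEDIUMBLOB at roughly 16 MB, and LONGBLOB at 4 GB. In practice the server's max_allowed_packet setting also caps how large a value a single statement can send, so a LONGBLOB column alone does not guarantee you can write a 16 MB+ file in one INSERT. You can check the ceiling on your server:

```sql
-- Show the server's packet-size ceiling; a single INSERT cannot
-- send a BLOB value larger than this (minus protocol overhead).
SHOW VARIABLES LIKE 'max_allowed_packet';
```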

Yes, if the application "chains" the data together in the row=inode
method described below.
No, if the application "jams all the data into 1 row using hugeblob"
(I assume that means LONGBLOB).
No, Catacomb does not do this. Unless you have a particular need to store document data in the database (for example, your Catacomb server runs on a separate machine and you want the file data to live alongside the RDBMS), you'll likely get much better performance going straight to a file in the filesystem. If you do have such a need and can avoid MySQL's size limits by striping the data across multiple rows, I'd encourage you to take a crack at the source code and send patches to the Catacomb mailing list.
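The "chains the data together" idea mentioned above can be sketched in a few lines. This is an illustrative Python fragment, not Catacomb code; the chunk size and the idea of keying each row by a sequence number are assumptions:

```python
# Sketch of striping a file's bytes across multiple rows, so no single
# BLOB value exceeds a chosen per-row limit (16 MB minus some headroom
# here, a hypothetical choice).
CHUNK_SIZE = 16 * 1024 * 1024 - 1024

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Return (sequence_number, chunk) pairs, one pair per database row."""
    return [(seq, data[off:off + chunk_size])
            for seq, off in enumerate(range(0, len(data), chunk_size))]

def reassemble(rows):
    """Concatenate chunks back into the original byte string.

    Rows are sorted by sequence number first, just as a real query
    would ORDER BY that column.
    """
    return b"".join(chunk for _, chunk in sorted(rows))
```

Each (resource id, sequence number, chunk) triple would become one row; reading the resource back is then a SELECT over those rows ordered by sequence number.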