- Feb 05, 2014
Jared Hancock authored
Jared Hancock authored

- Feb 04, 2014
Jared Hancock authored
Jared Hancock authored
Jared Hancock authored
Also allow the plugin to support remotely-generated hashes for migration safety verification

- Jan 29, 2014
Jared Hancock authored
Jared Hancock authored
If a file is attached via email and sent into the system, and a file with the same signature (hash) and size is already on record, the system does not save the new file; instead, the key of the existing file should be found and used. This patch fixes a bug in AttachmentFile::save(): a key was generated for the new file, but if the file was determined to be a duplicate, the key of the existing file was not returned. Instead, the freshly generated key, which was never saved to the database, was returned. As a result, the wrong key was placed in the body of the message as cid:<key> for inline images, even though that key did not exist in the database. This patch correctly returns the existing key from the ::save() method for de-duplicated files.
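osTicket itself is written in PHP; the following Python sketch only illustrates the corrected de-duplication logic described above. The class name, hash choices, and in-memory store are all hypothetical stand-ins for the real `%file` table and AttachmentFile::save().

```python
import hashlib

class FileStore:
    """Illustrative in-memory stand-in for the %file table."""

    def __init__(self):
        # (signature, size) -> key of the stored file
        self._by_signature = {}

    def save(self, data: bytes) -> str:
        signature = hashlib.sha1(data).hexdigest()
        size = len(data)
        existing = self._by_signature.get((signature, size))
        if existing is not None:
            # The fix: return the key of the existing duplicate file,
            # not a freshly generated key that was never saved.
            return existing
        key = hashlib.md5(("%s%d" % (signature, size)).encode()).hexdigest()
        self._by_signature[(signature, size)] = key
        return key
```

With the fix, saving the same content twice yields the same key, so a `cid:<key>` reference in a message body always points at a row that exists.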

- Jan 20, 2014
Jared Hancock authored
Allows for backend listing, file listing, single file dump and file migration.
Jared Hancock authored

- Jan 18, 2014
Jared Hancock authored
Fixes the rewriting of the `key` field in the ticket thread body. The storage de-duplication system may replace the `key` value with an existing one, so the ticket thread system now uses either the `key` value assigned when the file is committed to the database or the existing key of the duplicate file. Also fixes installation issues with the attachment storage plugin architecture.
Jared Hancock authored
Jared Hancock authored
* Include a `bk` column to store the storage backend
* Include a `signature` column which represents a repeatable hash of the file contents
* Rename `hash` to `key`, since it isn't a real hash
Jared Hancock authored
Jared Hancock authored
Jared Hancock authored

- Oct 29, 2013
Jared Hancock authored
Move the cacheable code to the Http class and allow the client configuration to be cached in the browser

- Oct 25, 2013
Peter Rotich authored

- Oct 17, 2013
Jared Hancock authored
The previous query's EXPLAIN showed a nested sub-select, which exponentially increased the number of rows to be examined. This patch eliminates one layer of nesting in the sub-select and dramatically improves the performance of finding orphaned files. Fixes #773. References: http://www.osticket.com/forums/forum/osticket-1-7-latest-release/troubleshooting-and-problems-aa/9446-version-1-7-much-slower-than-1-6

- Oct 14, 2013
Jared Hancock authored
This includes adding support in the TCL uninitialized-variable reader to ignore class-static variable access, as well as to detect inline functions and their closure arguments.

- Oct 09, 2013
Jared Hancock authored
Process inline attachments in thread entries and support inline images in piped emails. Support inline images across the system, with draft support. Migrate to a single attachment table, so we don't need a new table for everything we need to attach an inline image to (like a signature, for instance). Add rich-text support for internal notes. Implement images on site pages.
* Image paste in Redactor
* Make non-local images optional
* Placeholder for non-local images
* Fix local image download hover
* Don't re-attach inline images

- Oct 06, 2013
Jared Hancock authored
When scanning the file_chunk table for orphaned file chunks that can be deleted, MySQL apparently reads (at least part of) the blob data from disk. For databases with lots of large attachments, this can take considerable time, and because the scan is triggered from the autocron and runs every time the cron runs, the database spends considerable time scanning for rows to be cleaned.

This patch changes the orphan cleanup into two phases. The first searches just for the primary keys of the file chunks to be deleted. If any are found, the chunks are deleted by file_id and chunk_id, which together form the primary key of the table. The SELECT query seems to run at least 20 times faster than the DELETE statement, and DELETEing against the primary key of the blob table should be the fastest possible operation. Somehow, both queries required a full table scan; however, because the SELECT statement is explicitly interested in only two fields, it is clearer to the query optimizer that the blob data need not be scanned. References: http://stackoverflow.com/q/9511476
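The two-phase cleanup can be sketched in Python against SQLite (the real code is PHP against MySQL; table and column names follow the commit, the LEFT JOIN orphan test is an assumed definition of "orphaned"):

```python
import sqlite3

def cleanup_orphans(conn):
    """Phase 1: SELECT only the primary-key columns, so the blob
    column is never read. Phase 2: DELETE strictly by primary key."""
    cur = conn.cursor()
    cur.execute(
        "SELECT c.file_id, c.chunk_id FROM file_chunk c "
        "LEFT JOIN file f ON f.id = c.file_id "
        "WHERE f.id IS NULL")
    orphans = cur.fetchall()
    cur.executemany(
        "DELETE FROM file_chunk WHERE file_id = ? AND chunk_id = ?",
        orphans)
    conn.commit()
    return len(orphans)
```

Splitting SELECT from DELETE is what lets the optimizer see that only `(file_id, chunk_id)` are needed, avoiding the blob reads described above.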

- Sep 05, 2013
Jared Hancock authored
Previously, filenames saved in the database had their spaces changed to underscores; however, other characters (such as commas and non-ASCII characters) presented issues for user agents downloading the attachments. This patch handles the filename encoding for two special cases, Internet Explorer and Safari, and uses the semi-standard RFC 5987 method of encoding the filename for the remaining browsers.

Attachments are no longer forced to be downloaded; it is up to the browser to decide whether the attachment should be shown in the browser or downloaded.

This patch also fixes a slight bug in the caching mechanism for downloads concerning the last-modified time: the date sent to the browser was not properly converted to GMT, although the server claimed that it was.
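A minimal Python sketch of the per-browser Content-Disposition strategy described above. The user-agent sniffing here is deliberately crude and the exact header forms osTicket emits may differ; only the three-way split (IE, Safari, RFC 5987 for everyone else) follows the commit message.

```python
from urllib.parse import quote

def content_disposition(filename: str, user_agent: str) -> str:
    """Build an inline Content-Disposition header for an attachment.
    Illustrative only; real browser detection is more involved."""
    if 'MSIE' in user_agent:
        # Older Internet Explorer understands a percent-encoded
        # value in the plain filename= parameter.
        return 'inline; filename=%s' % quote(filename)
    elif 'Safari' in user_agent and 'Chrome' not in user_agent:
        # Safari historically wanted the raw UTF-8 bytes, unencoded.
        return 'inline; filename=%s' % filename
    else:
        # RFC 5987 extended notation for the remaining browsers.
        return "inline; filename*=UTF-8''%s" % quote(filename)
```

Note the `inline` disposition: the browser, not the server, decides whether to display or download.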

- Jul 25, 2013
Jared Hancock authored
The "ft" field does not exist when the attachment migration takes place, so the migration fails because the records cannot be inserted into the database without it.

- Jul 17, 2013
Jared Hancock authored
Administrators are allowed to upload one or more logos and then select one of the uploaded logos for the client site. Logos can also be deleted on Settings -> Pages submission.

- Jun 26, 2013
Jared Hancock authored
Looks like a long-standing, yet-to-be-fixed PHP bug, where zlib.output_compression can result in the output buffer not being completely flushed. This is especially critical for downloads, where the tail of the file might be lost. https://bugs.php.net/bug.php?id=19436

- Mar 04, 2013
Peter Rotich authored

- Feb 19, 2013
Peter Rotich authored

- Dec 06, 2012
Peter Rotich authored

- Oct 15, 2012
Peter Rotich authored

- Sep 18, 2012
Jared Hancock authored
Jared Hancock authored

- Sep 14, 2012
Jared Hancock authored
This removes the upper limit on BLOB sizes imposed by MySQL's max_allowed_packet setting completely. It adds a new table, %file_chunk, which contains the file data in smaller (256kB) chunks, and a new class, AttachmentChunkedData, which handles reading and writing the data, abstracting away the chunks.

This is done by migrating data from the %file table to the %file_chunk table. Beware that the migration must plug safely into both live osTicket development installs and users doing a full upgrade from osTicket-1.6*. For the full upgrade, the AttachmentFile::save() method was patched to use the new AttachmentChunkedData class to write the attachment data to the database in chunks; that is, the migrater uses the new code on the major upgrade and bypasses the filedata column of the %file table altogether. Therefore, the patch associated with this commit will not migrate any data for the major upgrade. For developers doing incremental upgrades, the patch included in this commit transfers the data from the %file table to the new %file_chunk table by chunking it.

As written, only the first 16MB of each attachment is migrated. This could easily be adjusted, but it seems like a reasonable limit for now.
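The chunking abstraction can be sketched as follows. This is an illustrative Python stand-in for AttachmentChunkedData, using an in-memory dict in place of the %file_chunk table; only the 256kB chunk size and the (file_id, chunk_id) keying come from the commit.

```python
CHUNK_SIZE = 256 * 1024  # 256kB, as in the commit

class ChunkedData:
    """Stores a file's contents as fixed-size chunks keyed by
    (file_id, chunk_id), hiding the chunking from callers."""

    def __init__(self):
        self.chunks = {}  # (file_id, chunk_id) -> bytes

    def write(self, file_id, data: bytes):
        # Split the data into sequential 256kB chunks.
        for i in range(0, len(data), CHUNK_SIZE):
            self.chunks[(file_id, i // CHUNK_SIZE)] = data[i:i + CHUNK_SIZE]

    def read(self, file_id) -> bytes:
        # Reassemble chunks in order until one is missing.
        out, chunk_id = [], 0
        while (file_id, chunk_id) in self.chunks:
            out.append(self.chunks[(file_id, chunk_id)])
            chunk_id += 1
        return b"".join(out)
```

Because each INSERT carries at most one chunk, no single statement approaches the max_allowed_packet ceiling.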

- Sep 07, 2012
Jared Hancock authored
MySQL is kind enough to quietly truncate the filedata field when attempting to CONCAT data beyond the size of max_allowed_packet. The simplest fix is to automatically adjust max_allowed_packet to the size of the file being uploaded, plus some extra. See MySQL bugs #22853, #34782, and #63919 for more discussion of the issue.

The max_allowed_packet variable defaults to 1M but is expandable to 1G. Therefore, the fixed limit on attachments for osTicket will be 1G, since it would be impossible for MySQL to append data past that mark. *Sigh*

- Sep 05, 2012
Jared Hancock authored
MySQL has a limit on the maximum amount of data that can be transferred in one statement: the max_allowed_packet setting. The value of this setting is the approximate upper limit on attachment sizes that can be handled by the database given the current access model for osTicket. The issue came up for attachment uploads and was corrected so that uploads are chunk-inserted into the database. Downloads, however, were forgotten, and strangely, it took quite a bit of debugging to track down the problem.

This patch corrects attachment downloads by fetching 256kB chunks of the attachment at a time and sending them directly to the client. This also overcomes PHP's memory limit, which would be the second-level blocker on attachment sizes. Lastly, the AttachmentFile::getData() method is simulated using output buffering. This provides the same access as the previous getData() method; however, it is still subject to PHP's memory limits.
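The streamed download and the buffered getData() simulation can be sketched in Python. `fetch_chunk` is a hypothetical callable standing in for a per-chunk database query; only the 256kB chunk size and the stream-then-buffer relationship come from the commit.

```python
CHUNK = 256 * 1024  # 256kB per database round trip, per the commit

def iter_chunks(fetch_chunk, file_id):
    """Yield a file's data one chunk at a time, so neither
    max_allowed_packet nor the runtime memory limit is hit at once.
    fetch_chunk(file_id, chunk_id) returns bytes, or None at the end."""
    chunk_id = 0
    while True:
        data = fetch_chunk(file_id, chunk_id)
        if not data:
            return
        yield data
        chunk_id += 1

def get_data(fetch_chunk, file_id) -> bytes:
    """Simulates the old getData() by buffering the streamed chunks;
    unlike streaming, this is still bounded by available memory."""
    return b"".join(iter_chunks(fetch_chunk, file_id))
```

A download handler would iterate `iter_chunks` and write each piece to the client immediately, while legacy callers keep using `get_data`.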

- Aug 13, 2012
Jared Hancock authored

- Jul 25, 2012
Peter Rotich authored

- Jun 28, 2012
Jared Hancock authored
This overcomes the eventual limit of any database to support queries of a finite length. We now split the file contents into 100kB chunks and append them to the database one chunk at a time.

- Mar 23, 2012
Peter Rotich authored

- Mar 21, 2012
Jared Hancock authored
Also add a cron job to perform the task on demand. Change the implementation of Ticket::deleteAttachments to call this method after simply removing all entries from the %ticket_attachment table.