Jamit Software Forum
Author Topic: Optimization techniques  (Read 4536 times)
fujipadam
Jammers
Sr. Member
*
Posts: 62


« on: June 19, 2009, 02:38:46 pm »

Fellow Users,

Let's start a thread on optimizing the script. The script seems to be very database-heavy and can easily overwhelm a server. What optimizations do you use to reduce server load?

1) Which columns do you index?
2) Are you disabling any part of the script to reduce resource utilization?
3) Are you making any part static, and how are you accomplishing this?

Thanks,
Fuji
Logged
Adam
Administrator
Hero Member
*****
Posts: 112


« Reply #1 on: June 20, 2009, 01:33:31 am »

Can you post a link to your site, please?

Things you can do:
- Turn on cache in Admin->Main Config
- Go to Admin->Database Tools and update your indexes

Other things to consider:
- The more search fields on your forms, the more resources are needed.
- The more columns on your list, the more resources are needed.
- Skill matrix: about 4 rows max would be optimal.

You can also submit a copy of your database to support so that we can analyze your setup.
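To illustrate the "update your indexes" point, here is a minimal sketch of how an index on the filter/sort columns changes a job-list query. The table and column names are made up for illustration (this is not the actual Jamit schema), and SQLite stands in for MySQL:

```python
import sqlite3

# Hypothetical jobs table -- names are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        post_id INTEGER PRIMARY KEY,
        category INTEGER,
        location TEXT,
        posted_date TEXT
    )
""")

# Index the columns that appear in the WHERE/ORDER BY of the list query;
# a composite index can serve the filter and the sort together.
conn.execute("CREATE INDEX idx_jobs_cat_date ON jobs (category, posted_date)")

# EXPLAIN QUERY PLAN shows whether the index is actually used.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT post_id FROM jobs WHERE category = ? ORDER BY posted_date DESC",
    (3,),
).fetchall()
print(plan)
```

On MySQL the equivalent check is `EXPLAIN SELECT ...`, which should show the index in the `key` column.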
Logged
Adam
Administrator
Hero Member
*****
Posts: 112


« Reply #2 on: June 22, 2009, 06:51:08 am »

Hi, the next version, 3.6.0, will revamp the current cache system and introduce the possibility of using Memcached. This will significantly reduce database hits and also improve scalability. Let me know your thoughts, thanks!
Logged
fujipadam
Jammers
Sr. Member
*
Posts: 62


« Reply #3 on: June 22, 2009, 04:37:32 pm »

Hi Adam,

Thanks for your response. Memcached would be awesome, since database load is the main reason my site runs slow. I have thrown a lot of resources at it and it's acceptable now.

I have also changed the table engine to InnoDB to prevent table-level locking. This has improved the lock wait ratio dramatically. The downside is that MySQL full-text search currently works only on MyISAM (there are some open-source alternatives that I might consider for search down the line).

Some questions:

1) Queries using TEXT and BLOB fields don't use indexes and don't use the table cache. I got around this in other scripts by splitting the query in two: the first query selects the primary key based on the condition, and the second query does a SELECT * from the table only for the primary keys returned by the first query. I was wondering how this is handled in Jamit? For example, do you do a SELECT postid WHERE blah blah blah, and then a SELECT * WHERE postid = postid from the previous query?

2) "The more columns on your list, the more resources are needed" -- you are referring to the display of the jobs list, right?

3) "Submit a copy of your database to support so that we can analyze your setup" -- what files would you need for that?
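For anyone else reading, the two-query split in (1) can be sketched like this (hypothetical table and column names, SQLite standing in for MySQL):

```python
import sqlite3

# Hypothetical posts table with a wide TEXT body column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (post_id INTEGER PRIMARY KEY, category INTEGER, body TEXT)"
)
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [(1, 3, "first body"), (2, 3, "second body"), (3, 5, "other body")],
)

# Query 1: narrow and index-friendly -- fetch only the primary keys.
ids = [row[0] for row in conn.execute(
    "SELECT post_id FROM posts WHERE category = ?", (3,))]

# Query 2: fetch the wide rows (with the TEXT/BLOB columns) by primary key.
placeholders = ",".join("?" * len(ids))
rows = conn.execute(
    f"SELECT * FROM posts WHERE post_id IN ({placeholders})", ids
).fetchall()
print(rows)
```

The first query can be satisfied entirely from an index, so the large columns are only read for the rows that actually match.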


Thanks,
Fuji
Logged
Adam
Administrator
Hero Member
*****
Posts: 112


« Reply #4 on: June 23, 2009, 03:50:54 am »

MyISAM is generally faster than InnoDB.
InnoDB is more reliable than MyISAM.
InnoDB does not support full-text indexes.
Jamit does not really rely on table locking, so it would be better to use MyISAM.
Logged
screenmates
Jammers
Sr. Member
*
Posts: 81


« Reply #5 on: August 22, 2009, 06:02:30 pm »

We cannot expect to get full search capabilities, faceting, speed, etc. without using a search server. Once a search server is implemented, we don't have to worry about the size of the job table, resume table, etc. We can use InnoDB for reliability, and we can search any way we want (stemming, etc.). All we have to do is configure the search server and maintain just enough current data inside it: add/update/delete every record into/from the search server as it is added/updated/deleted in the tables, and set it to optimize itself occasionally. It can be set up (configured through XML files) to handle a single table (the default) or multiple tables (using prefixes).
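The keep-in-sync idea above can be sketched as follows. `SearchIndex` here is an in-memory stand-in for a real Solr client's add/delete calls (not an actual library API); the point is that every write to the table is mirrored into the index:

```python
# In-memory stand-in for a search server; a real setup would issue
# add/delete requests to Solr over HTTP instead.
class SearchIndex:
    def __init__(self):
        self.docs = {}

    def add(self, doc):
        # Solr's "add" also overwrites an existing doc with the same unique key.
        self.docs[doc["id"]] = doc

    def delete(self, doc_id):
        self.docs.pop(doc_id, None)

    def search(self, term):
        return [d for d in self.docs.values() if term in d["title"].lower()]

index = SearchIndex()

def save_job(db, job):
    db[job["id"]] = job    # write to the database table...
    index.add(job)         # ...and mirror the record into the search index

def delete_job(db, job_id):
    db.pop(job_id, None)
    index.delete(job_id)   # keep the index in step with the table

db = {}
save_job(db, {"id": 1, "title": "PHP Developer"})
save_job(db, {"id": 2, "title": "DBA"})
delete_job(db, 2)
print(index.search("php"))
```

Because the index only ever holds current data, the size of the underlying tables stops mattering for search performance.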

I suggest implementing an option to use the Solr search server (based on Lucene), which is easy to set up and configure, and a great product with a lot of support built around it. It offers XML/HTTP and JSON APIs with hit highlighting, faceted search, caching, replication, a web administration interface, and many more features. It runs in a Java servlet container such as Tomcat, and can be tested locally using Jetty. Here is a brief tutorial:

http://lucene.apache.org/solr/tutorial.html

To make things even easier, there is a Solr PHP Client written by Donovan:

http://code.google.com/p/solr-php-client/

Here is a simple example of Solr PHP Client:

http://code.google.com/p/solr-php-client/wiki/ExampleUsage

Solr's caching, cache warming behind the scenes (it does its own caching, freeing us from that headache), load balancing (yes, distributed Solr search), asynchronous processing, multiple cores (!), etc. all run at blazing speeds. It is being used by several large sites, including CNET. On the other hand, I heard the Lucene search in the Zend Framework is not ready for large sites -- on large sites it is over 2000 times slower than Solr :(

Out and out, full-text MyISAM table search is a poor man's search compared to a search server's capabilities and speed.
Logged
screenmates
Jammers
Sr. Member
*
Posts: 81


« Reply #6 on: August 22, 2009, 06:05:43 pm »

Wiki for "all things Solr":

http://wiki.apache.org/solr/FrontPage
Logged