Tuning ViaWorks resource consumption

02 Aug 2016

ViaWorks is a resource-hungry monster: it will chew up any CPU you throw at it and use it to crawl through your data repositories as fast as possible. Generally this is a good thing during a full crawl, where you want your data added to the search index as quickly as possible. Once the crawl is done and you are in "daily operation", however, you might want to curb ViaWorks's resource appetite.

Fortunately, this is rather easy to do by tweaking a few values in the config.via_works_server_setting table in the database.

Before you start, note the CPU usage in Task Manager as well as how many Postgres processes are currently running (both should drop once the new settings take effect).
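One quick way to see how many connections ViaWorks is holding open is to ask Postgres itself; pg_stat_activity is a standard system view, so this should work as-is in pgAdmin's query window:

```sql
-- Count current connections per database; re-run after applying the settings
-- to confirm the number has dropped
SELECT datname, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;
```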

Open up pgAdmin III (found in Program Files\VirtualWorks\ViaWorks\PostgreSQL\bin), then add these rows to the config.via_works_server_setting table:

| via_works_server_setting_id | server_id | setting_name | setting_value | data_type | description | encrypted | creation_date_utc | last_modified_date |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | FetchIdlePollDelayMilliseconds | 60000 | int | FetchIdlePollDelayMilliseconds | FALSE | 2016-01-01 | 2016-01-01 |
| 2 | 1 | MaxCheckActiveFetchRequestsDelay | 10000 | int | MaxCheckActiveFetchRequestsDelay | FALSE | 2016-01-01 | 2016-01-01 |
| 3 | 1 | MinCheckActiveFetchRequestsDelay | 9000 | int | MinCheckActiveFetchRequestsDelay | FALSE | 2016-01-01 | 2016-01-01 |
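If you prefer SQL over editing the grid in pgAdmin, the equivalent INSERT would look roughly like this. This is a sketch based on the column layout above: the id values assume the table is empty, and I'm assuming setting_value is stored as text, so adjust ids, dates, and quoting to match your actual schema:

```sql
-- Assumes the table is empty (ids 1-3 free) and setting_value is a text column
INSERT INTO config.via_works_server_setting
    (via_works_server_setting_id, server_id, setting_name, setting_value,
     data_type, description, encrypted, creation_date_utc, last_modified_date)
VALUES
    (1, 1, 'FetchIdlePollDelayMilliseconds',   '60000', 'int', 'FetchIdlePollDelayMilliseconds',   FALSE, '2016-08-02', '2016-08-02'),
    (2, 1, 'MaxCheckActiveFetchRequestsDelay', '10000', 'int', 'MaxCheckActiveFetchRequestsDelay', FALSE, '2016-08-02', '2016-08-02'),
    (3, 1, 'MinCheckActiveFetchRequestsDelay', '9000',  'int', 'MinCheckActiveFetchRequestsDelay', FALSE, '2016-08-02', '2016-08-02');
```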

Make sure that the IDs of the rows are unique, and set server_id to match your server (the default is 1 if you are running just one server).
If you want to check what the server ID is, look it up in the config.via_works_server table.
Note that the date stamps should, of course, be changed to the current date.
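Looking up the server ID is a one-liner; I'm using SELECT * here rather than naming columns, since the exact column names may vary by version:

```sql
-- Shows all registered ViaWorks servers and their ids
SELECT * FROM config.via_works_server;
```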

If you have more than one ViaWorks server, simply repeat the process with the server_id of each additional server.

Your settings should now look like this (assuming only one server):

[Screenshot: the resulting rows in config.via_works_server_setting]

Finally, to make ViaWorks pick up the new settings, restart all the ViaWorks fetch services and then return to your Task Manager window. You should now see a significant reduction in CPU usage; in my testing, average CPU usage dropped from 80-90% to 10-20%.
