SharePoint is very memory intensive. Always plan for fast network cards
and plenty of RAM! It is very important that the server achieve the
fastest response possible from the I/O subsystem. More and faster disks
or arrays provide sufficient I/O operations per second (IOPS) while
maintaining low latency and queuing on all disks.
In this post we will look at the following:
- Infrastructure Optimization
- Database Optimization
- Caching
- Server Optimization
- Page Optimization
- Other Considerations
Infrastructure Optimization:
Search in SharePoint is very memory intensive. It is often the first source of performance headaches.
The better optimized the links to the database are, the better the overall performance will be.
If all the servers (front-end servers and application servers) are
behind the same switch, the application servers that run search will be
going out through the switch each time an incremental crawl is kicked
off.
Below is an example diagram of an undesirable topology:
Try to separate front-end user traffic from back-end SQL traffic.
Front-end servers should talk to the SQL server(s) through one network,
and application servers & AD should talk to the SQL server(s) through a
different network.
Below is a diagram of the desired topology:
Even better, set aside an index server in the farm and let it crawl
itself without having to take traffic across the front-end servers, as
shown below:
In addition to search, publishing sites require a lot of memory, since read operations will outnumber write operations by 100 to 1000 times. ECM sites require more application servers because there is a lot of back-and-forth traffic between them.
Try to distribute your service applications across multiple application servers whenever possible.
Database Optimization:
Database Operations:
Different things in SharePoint have different effects on the databases.
Items ordered by their impact (1 is the biggest killer, 10 has the least impact):
1. PerformancePoint Services
2. Search
3. Content Query
4. Security Trimming (SPSecurityTrimmedControl delegate)
5. Collaboration
6. Workflow
7. Publishing
8. Social
9. Client Access
10. Browsing
What is the impact of custom code? It depends on the quality of the code.
Database Size:
Even though Microsoft says that each content database can hold up to 4
TB, the recommended practical limit is 200 GB for easily manageable
backups and restores.
Analytics databases grow very quickly to very large sizes. Try to
isolate Analytics databases. Analytics reports can have significant
impact on CPU load.
Search uses multiple databases for its operations. It uses separate databases for crawl, properties and administration. Crawl databases can be extremely large. Crawl databases also have heavy transactional volumes. Try to isolate temp and crawl databases if possible.
Database Management:
Performing the steps below at the database level can result in better performance:
- Manually configure auto-growth settings. The default auto-growth setting in SQL Server is 1 MB. Set it to 100 MB or 200 MB depending on your environment. This allows the database to grow in larger chunks, which is more efficient since these databases tend to grow rapidly.
- Defragment database indexes regularly.
- Limit content DB size per site collection.
- Isolate transaction logs by writing them off onto separate disks.
- Enforce site collection quotas in Central Administration.
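As a rough sketch, two of the items above (watching content database size and enforcing quotas) can be scripted in the SharePoint 2010 Management Shell. The URL and the "TeamQuota" template name below are placeholders for your own values:

```powershell
# Report content database sizes so you can spot databases nearing the
# 200 GB practical limit (DiskSizeRequired is reported in bytes)
Get-SPContentDatabase |
    Select-Object Name, @{n = "SizeGB"; e = { [math]::Round($_.DiskSizeRequired / 1GB, 2) }}

# Assign an existing quota template to a site collection
Set-SPSite -Identity "http://mywebapp/sites/team" -QuotaTemplate "TeamQuota"
```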
Caching:
Make sure that all the custom controls / web parts use caching.
I have built and contributed a framework for custom web parts on CodePlex.
It is available here: http://asyncwebpartframewrk.codeplex.com/.
Web parts built using this framework make use of caching and load their data asynchronously after the page is loaded.
SharePoint supports the following types of cache:
- BLOB Cache
- Output Cache
- Object Cache
- Branch Cache
BLOB Cache:
BLOB Cache/Disk-based caching controls caching for binary large objects
(BLOBs) such as image, sound, video, and some static content files like
CSS. Disk-based caching is fast. It eliminates the need for database
round trips. BLOBs are retrieved from the database once and stored on
the Web client. Further requests are served from the cache and trimmed
based on security.
BLOB cache needs to be enabled in Web.Config. Make sure that there is enough space on the drive where the BLOB cache is stored. It’s important to understand that the BLOB cache is per-machine, so make sure that the BLOB cache settings are consistent across the whole farm. You don’t want one server with 1 GB of BLOB cache and another server with 4 GB. If you don’t configure the BLOB cache consistently, you might see strange and inconsistent performance.
By default, the disk-based BLOB cache is off and must be enabled on the
front-end Web server. In order to enable BLOB cache, locate the
Web.Config for the web application and edit it. The recommended approach
for making such changes in Web.Config file is through a feature
receiver or PowerShell by making use of SharePoint’s
SPWebConfigModification class.
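As a hedged sketch of that approach (run from the SharePoint 2010 Management Shell on a farm server; the web application URL and the "BlobCacheTuning" owner string are placeholders), SPWebConfigModification can flip the enabled attribute and propagate the change to every front-end's Web.Config:

```powershell
# Flip the BlobCache "enabled" attribute via SPWebConfigModification
$wa  = Get-SPWebApplication "http://mywebapp"
$mod = New-Object Microsoft.SharePoint.Administration.SPWebConfigModification
$mod.Path     = "configuration/SharePoint/BlobCache"
$mod.Name     = "enabled"
$mod.Value    = "true"
$mod.Sequence = 0
$mod.Owner    = "BlobCacheTuning"   # arbitrary tag so the change can be removed later
$mod.Type     = [Microsoft.SharePoint.Administration.SPWebConfigModification+SPWebConfigModificationType]::EnsureAttribute
$wa.WebConfigModifications.Add($mod)
$wa.Update()
# Push the modification out to every web front-end in the farm
$wa.WebService.ApplyWebConfigModifications()
```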
In the Web.Config file, find the following line:
<BlobCache
location=""
path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$"
maxSize="10" enabled="false" />
In this line, change the location attribute to specify a directory that has enough space to accommodate the cache size.
We strongly recommend that you specify a directory that is not on the
same drive as the server operating system swap files or the server log
files.
To add or remove file types from the list of file types to be cached,
for the path attribute, modify the regular expression to include or
remove the appropriate file extension. If you add file extensions, make
sure to separate each file type with a pipe (|), as shown in this line
of code.
To change the size of the cache, type a new number for maxSize. The size
is expressed in gigabytes (GB), and 10 GB is the default. It is
recommended that you not set the cache size smaller than 10 GB. When you
set the cache size, make sure to specify a number large enough to
provide a buffer at least 20 percent bigger than the estimated size of
the content that will be stored in the cache.
To enable the BLOB cache, change the enabled attribute, from "false" to "true".
You can use an STSADM command to flush all BLOB caches associated with a
specified Web application on different Web front-end computers on the
farm:
stsadm -o setproperty -propertyname blobcacheflushcount -propertyvalue 11 -url http://mywebapp:port
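On SharePoint 2010, the same flush can be done from PowerShell via the publishing API. This is a sketch; the web application URL is a placeholder:

```powershell
# Flush the BLOB cache on every front-end serving this web application
$wa = Get-SPWebApplication "http://mywebapp"
[Microsoft.SharePoint.Publishing.PublishingCache]::FlushBlobCache($wa)
```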
Output Cache:
Output cache requires publishing infrastructure. It is specifically
geared more towards publishing portals. HTML pages are written into
memory and served from memory as opposed to serving them from databases.
Watch out for the authoring experience! If output cache is enabled in an
authoring environment, authors may not see their changes until the cache
expires. Similarly, make sure not to output cache search results.
Object Cache:
Object Cache is used in custom code. It is especially useful when the
content being served does not change often. When using the object
cache, it is extremely important to cache and serve data appropriately
based on users’ permissions. You don’t want a user with insufficient
privileges to be able to access data he/she is not supposed to see just
because the custom code cached the data incorrectly without giving
enough consideration to permission levels.
Branch Cache:
Branch Cache is a feature within Windows 2008 R2 / Windows 7 Enterprise
& Ultimate that allows you to pull documents from the network and
cache them locally. Branch Cache improves performance of applications
that use HTTP, HTTPS, as well as SMB (the protocol used for shared
folders) and WebDAV (an extension of HTTP). Because SharePoint uses
these protocols, it can take advantage of Branch Cache. Just
remember that the clients must run Windows 7 and the servers Windows
Server 2008 R2 for it to work.
In order to enable caching for a site collection, navigate to the site
collection settings page and scroll down to the “Site Collection
Administration” section.
As highlighted in the above screenshot, you can configure Object & Output cache from SharePoint UI.
Enable Output Cache, Object Cache & Cache Profiles on each
SharePoint site collection. Enabling BLOB & output cache can improve
site performance by 50-60%.
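If you prefer scripting the per-site-collection cache settings instead of using the UI, the publishing API exposes a SiteCacheSettingsWriter. The sketch below assumes the publishing infrastructure feature is active and uses a placeholder URL:

```powershell
# Enable the publishing cache for a single site collection
$site   = Get-SPSite "http://mywebapp/sites/portal"
$writer = New-Object Microsoft.SharePoint.Publishing.SiteCacheSettingsWriter($site)
$writer.EnableCache = $true
$writer.Update()
$site.Dispose()
```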
Server Optimization
IIS Compression:
IIS Compression is turned on by default in Windows Server 2008. An
important thing to note is that it is merely enabled, not configured.
IIS Compression takes the objects in the site, compresses them, and
delivers them as smaller packages to the clients. It can be set to a
level between 0 and 9. By default, it is set to 0 when it is turned on.
A level of 9 puts a lot of pressure on CPU utilization. We recommend
setting it between 6 and 9 depending upon your hardware.
IIS Compression won’t affect dynamic content produced by web parts such
as the Content Query Web Part. Just like BLOB & output cache, it
only compresses static content. Depending on the compression level
set, page size will be reduced by 30% - 40%.
To verify that compression is enabled, open IIS Manager and choose a
site. Click the button “Compression” as shown in the screenshot below:
IIS Compression needs to be configured through command prompt using the following scripts. (The below script sets the levels to 9)
%windir%\system32\inetsrv\appcmd.exe set config /section:httpCompression -[name='gzip'].dynamicCompressionLevel:9
%windir%\system32\inetsrv\appcmd.exe set config /section:httpCompression -[name='gzip'].staticCompressionLevel:9
Resource Throttling & Locks:
By default, list view throttling is set to 5,000 items (it can be raised
up to 20,000). SQL Server escalates to a table lock when it executes a
query that returns more than 5,000 records from a record set. You can
change the 5,000-item limit in Central Admin depending on your
environment and requirements, but this is not recommended.
Separately, consider enabling bit rate throttling. Bit rate throttling
controls the download speed of large objects such as Flash, Silverlight,
and videos by limiting the amount of bandwidth that can be used for
streaming.
To enable bit rate throttling:
Navigate to Central Admin > Application Management > Manage Web Applications
Choose the desired web application and click on the button “General
Settings” in the ribbon. In the drop down click on “Resource Throttling”
Enabling object model override allows custom code to be able to retrieve more than 5000 items at a time.
HTTP Request Throttling, which is on by default, monitors front-end server performance and in the event of HTTP request overload, rejects low priority requests when the threshold is reached. This is particularly useful for public facing web sites where there are more chances for DDOS kind of attacks.
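For reference, the same throttling knobs can be inspected and, where appropriate, changed from PowerShell. This is a sketch; the URL is a placeholder, and raising limits is generally discouraged:

```powershell
# Inspect and adjust throttling on a web application
$wa = Get-SPWebApplication "http://mywebapp"
$wa.MaxItemsPerThrottledOperation           # the 5,000-item default
$wa.AllowOMCodeOverrideThrottleSettings = $true   # let custom code override the limit
$wa.Update()
```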
Page Optimization:
Optimize Pages:
SharePoint pages contain lot of resources; these can include but are not limited to:
- JavaScript Files
- CSS Files
- Navigation Controls
- Menus
- Web Parts
- Custom Controls
- Ribbon Control
- Delegates
- SPSecurityTrimmedControls
- Publishing Fields
- Search Controls
- Hidden Controls
Customized pages (called unghosted pages in earlier versions) may be
easy to develop, but they perform poorly. The advantage of customized
pages is that they can be created and modified using SharePoint
Designer. When a page is customized in SharePoint and saved, it is no
longer served from the file system. Instead, a copy of the page is
written into the database, and from then on, whenever the page is
requested, it is retrieved dynamically from the database.
Un-customized pages (ghosted pages) are always loaded from the file
system, so they take advantage of the BLOB cache and output cache; they
load faster and perform better. There is a noticeable 10% - 30%
performance difference between customized and un-customized pages. In
order to create un-customized page layouts or master pages, you will
have to develop solutions using Visual Studio 2010 that deploy them to
the /_layouts/ folder in the 14 hive.
In addition to having all their content pulled from the database, customized pages must run through the safe mode parser. The safe mode parser inspects each customized page and watches for inline scripts; anything that comes out of the database has to run through it.
ASP.NET parses a page on first render and compiles it into an assembly. The safe mode parser does NOT compile pages; it is designed to interpretively parse a page and create the page's object structure. If inline server-side code is detected, the safe mode parser will not allow the page to render. Additionally, the only objects within the page (i.e., controls marked as runat="server") that can be instantiated are those found in the SafeControls list in Web.Config.
Optimize Branding:
I advise you to start building master pages from minimal.master. This is
a much cleaner starting point that removes much of the unnecessary
markup in the standard master pages. Consolidate all the CSS &
JavaScript files and try to minify the JavaScript files so that the
browser does not have to make multiple requests for multiple CSS/JS
files. Additionally, use image stitching (CSS sprites) on pages with
lots of small images to reduce the number of requests.
It is also recommended that all resources such as style sheets, master pages, page layouts, images, JavaScript files, etc. are stored on the file system (i.e., the /_layouts/ folder in the 14 hive), not in the virtual file system within your site (Style Library, Publishing Images, etc.). To achieve this, you will have to develop solutions using Visual Studio 2010 that deploy them to the /_layouts/ folder in the 14 hive.
Also consider referencing files like jQuery.js from Content Delivery Networks; many of them allow you to link to their copies of the files. For example, you can use the Google AJAX Libraries content delivery network to serve jQuery to your users directly from Google’s network of data centers. Doing so has several advantages over hosting jQuery on your own server(s): decreased latency, increased parallelism, and better caching.
Just because a SharePoint list can hold millions of items does not mean it should. All user content in all lists throughout the entire site collection is stored in a single table in the content database. Scary! The more items there are, the slower the site will be. Consider partitioning the data into multiple site collections.
Even though list view web parts are improved in SharePoint 2010 and are XSLT based, they still perform badly with large data sets. XSLT is not particularly fast when there are large datasets, due to the large amount of looping it needs to do over the XML that is returned. So consider developing custom controls with good caching mechanisms if you need to render large datasets to your users.
Wake up SharePoint 2010 Every Day:
Avoid that COLD, SLOW first request that baffles users. Use a simple application called SPWakeUp.exe, found here, that touches each site and site collection on your SharePoint server to rebuild the IIS cache. Use Windows Task Scheduler to run this application once a day, usually at around 4:00 AM.
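If you cannot use SPWakeUp.exe, a minimal home-grown warm-up script can do a similar job. This is a sketch, assuming it runs as a scheduled task on a farm server under an account with read access to every site collection:

```powershell
# Request the root of every site collection so IIS/ASP.NET caches are warm
# (PowerShell 2.0-friendly; uses WebClient instead of Invoke-WebRequest)
$client = New-Object System.Net.WebClient
$client.UseDefaultCredentials = $true
Get-SPSite -Limit All | ForEach-Object {
    try   { $client.DownloadString($_.Url) | Out-Null }
    catch { Write-Warning "Failed to warm $($_.Url): $_" }
    $_.Dispose()
}
```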
Asynchronously Load Web Parts / Controls:
If you have XSLT List View Web Parts / Search Results Web Parts for
displaying list data or aggregated data, SharePoint 2010 allows you to
load the web part data asynchronously after the page is loaded. To
enable asynchronous loading, edit the web part and in the editor part,
navigate to the AJAX Options section and check the option “Enable
Asynchronous Load”.
You can enable asynchronous loading for the Content Query Web Part as
well. Unfortunately, it is not so straightforward; I have explained how
to do that in this post.
Other Considerations:
List Definitions:
Plan list schemas and list and library limits well in advance. If a list
definition contains more columns of a particular type than the
recommended number, the result is row wrapping.
Column limits:
It is widely known that SharePoint Server 2010 data is stored in SQL
Server tables. To allow for the maximum number of possible columns in a
SharePoint list, SharePoint Server will create several rows in the
database when data will not fit on a single row. This is called row
wrapping.
Each time that a row is wrapped in SQL Server, an additional query load is put on the server when that item is queried because a SQL join must be included in the query. To prevent too much load, by default a maximum of six SQL Server rows are allowed for a SharePoint item. This limit leads to a particular limitation on the number of columns of each type that can be included in a SharePoint list. The article at this URL describes all the column limits: http://technet.microsoft.com/en-us/library/cc262787.aspx#Column
The row wrapping parameter can be increased beyond six, but this may result in too much load on the server. Performance testing is recommended before exceeding this limit. Be careful: SQL row wrapping can degrade performance by 35%.
Developer Dashboard:
Developer dashboard provides metrics on object execution for individual
pages. Turning on developer dashboard can be done in code or with
PowerShell.
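For example, the following PowerShell (run on a farm server) sets the dashboard to OnDemand, which adds an icon to each page that toggles the metrics on and off:

```powershell
# Enable the developer dashboard in OnDemand mode
$svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$dds = $svc.DeveloperDashboardSettings
$dds.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]::OnDemand
$dds.Update()
```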
Wictor Wilen created a solution that can be used for configuring developer dashboard through UI. http://www.wictorwilen.se/Post/SharePoint-2010-Developer-Dashboard-configuration-feature.aspx
Timer Job Separation:
You can take some timer jobs and pin them to a content database that is
running in a different farm or a different environment, so that the
load is taken off the main servers.
Tags: SharePoint Performance tuning | SharePoint Caching | SharePoint Page Performance optimization | SharePoint database optimization | SharePoint resource throttling