Citrix XenApp


Thursday, 8 December 2011

Function of the Local Host Cache

Each XenApp server stores a subset of the data store in the Local Host Cache (LHC). The LHC performs two primary functions:

• Permits a server to function in the absence of a connection to the data store.
• Improves performance by caching information used by ICA Clients for enumeration and application resolution.
The LHC is an Access database, Imalhc.mdb, stored, by default, in the <ProgramFiles>\Citrix\Independent Management Architecture folder.
The following information is contained in the local host cache:
• All servers in the farm, and their basic information.
• All applications published within the farm and their properties.
• All Windows network domain trust relationships within the farm.
• All information specific to the server itself (product code, SNMP settings, licensing information).
On the first startup of the member server, the LHC is populated with a subset of information from the data store. From then on, the IMA service is responsible for keeping the LHC synchronized with the data store. The IMA service performs this task through change notifications and periodic polling of the data store.
If the data store is unreachable, the LHC contains enough information about the farm to allow normal operations for an indefinite period of time, if necessary. However, no new static information can be published, or added to the farm, until the farm data store is reachable and operational again.

Note: Prior to Presentation Server 3.0, the LHC had a grace period of only 96 hours; this was due to farm licensing information being kept on the data store. Once the 96 hour grace period was up, the licensing subsystem would fail to verify licensing, and the server would stop accepting incoming connections.
Because the LHC holds a copy of the published applications and Windows domain trust relationships, ICA Client application enumeration requests can be resolved locally by the LHC. This provides a faster response to the ICA Client for application enumerations because the local server does not have to contact other member servers or the zone data collector. The member server must still contact the zone data collector for load management resolutions.
In some instances it can be necessary to either refresh or recreate the local host cache. The sections below describe these situations.

Refreshing the Local Host Cache

If the IMA service is currently running but published applications do not appear correctly in ICA Client application browsing, force a manual refresh of the local host cache by executing dsmaint refreshlhc from a command prompt on the affected server. This action forces the local host cache to read all changes immediately from the data store.
A discrepancy in the local host cache occurs only if the IMA service on a server misses a change event and is not synchronized correctly with the data store.
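For example, run the following on the affected server from an elevated command prompt (a minimal sketch of the refresh described above):

  rem Confirm the IMA service is running before refreshing the cache
  sc query imaservice
  rem Force the local host cache to re-read all changes from the data store
  dsmaint refreshlhc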

Recreating the Local Host Cache
Recreate the local host cache in any of the following situations:
• The IMA service does not start; a corrupt LHC is one possible cause.
• You have made extensive changes to the farm data store, such as publishing various applications, adding or removing servers from the farm, or creating new policies.
• You have cleaned the farm data store using the DSCHECK utility; once the data store has been cleaned, rebuild the LHC on each of the servers in your farm.

Steps to recreate the Local Host Cache

IMPORTANT: The data store server must be available for dsmaint recreatelhc to work. If the data store is not available, the IMA service cannot start.
1. Stop the IMA service on the XenApp server, if it is started. This can be done with the command net stop imaservice, or from the Services console.
2. Run dsmaint recreatelhc, which renames the existing LHC database, creates a new database, and sets the registry value HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime\PSRequired to 1. Setting PSRequired to 1 forces the server to establish communication with the data store in order to populate the Local Host Cache database. When the IMA service is restarted, the LHC is recreated with the current data from the data store.
3. Restart the IMA service. This can be done from the command line with net start imaservice, or from the Services console.
Note: For XenApp 6 or later, the registry value is HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Citrix\IMA\Runtime\PSRequired.
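As a convenience, the three steps above can be wrapped in a small batch script (a minimal sketch based only on the commands in this article; run it from an elevated command prompt on the affected XenApp server):

  rem Recreate the local host cache - sketch of the steps described above
  rem 1. Stop the IMA service
  net stop imaservice
  rem 2. Rename the old LHC, create a new one, and set PSRequired to 1
  dsmaint recreatelhc
  rem 3. Restart the IMA service; the LHC is repopulated from the data store
  net start imaservice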
There is also a built-in utility to check the Local Host Cache, LHCTestACLsUtil.exe, located in C:\Program Files (x86)\Citrix\System32 on the XenApp server. To run this utility, you must have local administrator privileges.

Wednesday, 7 December 2011

How to Hide the Messages Button in Web Interface 5.4

This article describes how to hide the Messages button on the Web Interface toolbar.
Note: Use this procedure carefully. Be aware that the Messages button provides useful information to the end users.
Requirements
Web Interface 5.4
Procedure
Follow the steps below to hide the Messages button from Web Interface 5.4 toolbar:

  1. On the Web Interface server, go to the C:\Inetpub\wwwroot\Citrix\<site name>\app_data\include\ folder and look for a file called header.inc.

  2. Go to around line #70 and comment out the following line by wrapping it in <!-- and -->:
Before: [screenshot of the original line in header.inc; not reproduced here]
After: [screenshot of the same line wrapped in <!-- and --> comment tags; not reproduced here]

  3. Save the file.

  4. Run IISRESET on the Web Interface server.

  5. Clean up your browser history on the client workstation and test.
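If you make this change on several Web Interface servers, the backup and IIS restart can be scripted. The sketch below is only a convenience wrapper around the steps above; the <site name> placeholder and paths must be adjusted to your own site, and the edit to header.inc itself is still done by hand:

  rem Back up the original file before commenting out the Messages button markup
  copy "C:\Inetpub\wwwroot\Citrix\<site name>\app_data\include\header.inc" "C:\Inetpub\wwwroot\Citrix\<site name>\app_data\include\header.inc.bak"
  rem After editing and saving header.inc, restart IIS so the change takes effect
  iisreset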

Tuesday, 6 December 2011

Force Terminal Services Clients to Disconnect when Idle

When you administer servers running Windows Server 2000 or 2003, one of the most frustrating experiences is when sessions are cut off but the server still thinks they are active. You will get this error message, which you are sure to encounter at some point:

The terminal server has exceeded the maximum number of allowed connections.

You can help prevent this from happening by setting a policy on the server to automatically disconnect when idle. To change this setting, go to Administrative Tools \ Terminal Services Configuration.


Click on Connections in the left-hand pane, then right-click RDP-Tcp and select Properties. In the resulting window select the Sessions tab.

Check the boxes for “Override user settings” and change the idle session limit to something reasonable, like an hour. You can set it lower if you’d like.
Change the radio button to “Disconnect from session” when the session limit is reached. This automatically marks sessions as disconnected on the server. The session is saved exactly as it was, but the server marks it as disconnected so that you can log back into the session again.
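If you prefer to script the idle limit instead of using the GUI, the RDP-Tcp listener settings live under the Terminal Server WinStations key in the registry. The sketch below is an assumption-based equivalent of the GUI change, not the procedure from this article: MaxIdleTime is in milliseconds (3600000 = one hour), the backup file name is hypothetical, and the Terminal Services Configuration tool remains the supported way to make the change. The new value may not apply until new sessions are established.

  rem Assumption-based sketch: set the idle session limit for the RDP-Tcp listener to one hour
  rem Export the key first so the change can be rolled back (backup file name is arbitrary)
  reg export "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" rdp-tcp-backup.reg
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MaxIdleTime /t REG_DWORD /d 3600000 /f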
Cannot Suppress the Idle Timer Expired Dialog Box in an ICA Session  
When an idle session limit is set in Terminal Services, users whose sessions have been idle for the specified amount of time are presented with the following dialog box stating that the session will be disconnected in two minutes.



Idle timeout message:
"Session has been idle over its time limit. It will be disconnected in 2 minutes. Press any key now to continue session."
The idle session timeout logic is part of Microsoft's Terminal Services architecture and is hard-coded to two (2) minutes. Currently there is no way to change this value or disable the notification.

Monday, 5 December 2011

Data Collector and its features

Data Collector:


Every zone in a MetaFrame farm has exactly one data collector. If a new MetaFrame server joins the zone, or the current data collector becomes unavailable, an election is triggered to determine a new data collector. When a zone elects a new data collector, it uses a preference ranking of the servers in the zone. Each zone has four levels of preference for the election of data collectors, and every server is assigned a preference level. The preference levels, in order from highest to lowest preference, are:
       Most Preferred
       Preferred
       Default Preference
       Not Preferred
By default, a MetaFrame server is set to Default Preference. This is the case for all servers except the first server added to the zone, which is set to Most Preferred and becomes the zone's initial data collector.

To designate a specific server as the data collector, set its election preference to Most Preferred and set all other servers to something lower.
If you do not want a server to become a data collector, set its preference to Not Preferred.
If you create a new zone, the first server that you move into the new zone becomes the zone's data collector, and its preference level is set to Most Preferred.
Note: As discussed in the design phase, when the server farm grows to five or more servers or is experiencing high session traffic, you can reduce the possibility of data collector performance issues and sluggish farm enumeration by dedicating a MetaFrame XP server as the data collector, or what I like to call a Control Server. Because this server is dedicated to acting as the data collector, you will NOT want to publish any applications to it.
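To check which server is currently acting as the data collector for each zone, the qfarm utility can be run from any farm server. A quick sketch (the exact switches and output columns vary by product version, so verify with qfarm /? on your servers):

  rem List the servers in each zone; the current data collector is flagged in the output
  qfarm /zone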


New Data Collector Election Process


When a communication failure occurs between a member server and the data collector for its zone, or between data collectors, the election process begins in the zone. Here are some examples of how ZDC elections can be triggered, along with a high-level summary of the election process. A detailed description of this process and the associated functions follows further below in this document.
1. The existing data collector for Zone 1 has an unplanned failure, for example a RAID controller fails and causes the server to blue screen. (If the server is shut down gracefully, it triggers the election process before going down.)
2. The servers in the zone recognize that the data collector has gone down and start the election process.
3. The member servers in the zone then send all of their information to the new data collector for the zone. The amount of data is a function of the number of sessions, disconnected sessions, and applications on each server.
4. In turn, the new data collector replicates this information to all other data collectors in the farm.
Important: The data collector election process is not dependent on the data store.
Note: If the data collector goes down, sessions connected to other servers in the farm are unaffected.
Misconception: “If a data collector goes down, there is a single point of failure.”
Actual: The data collector election process is triggered automatically without administrative intervention. Existing as well as incoming users are not affected by the election process, as a new data collector is elected almost instantaneously. Data collector elections are not dependent on the data store.


Detailed Election Process:


As we know, each server in the zone has a ranking assigned to it. This ranking is configurable, so an administrator can rank the servers in a zone in terms of which server is most desired to serve as the zone master. Ties between servers with the same administrative ranking are broken using the host IDs assigned to the servers; the higher the host ID, the higher-ranked the host.
The process that occurs when an election situation begins is as follows:
1. When a server comes on-line, or fails to contact the previously-elected zone master, it starts an election by sending an ELECT_MASTER message to each of the hosts in the zone that are ranked higher than it.
2. When a server receives an ELECT_MASTER message, it replies to the sender with an ELECT_MASTER_ACK message. This ACK informs the sender that the receiving host will take over the responsibility of electing a new master. If the receiving host is not already in an election, it will continue the election by sending an ELECT_MASTER message to all of the hosts that are ranked higher than itself.
3. If a server does not receive any ELECT_MASTER_ACK messages from the higher-ranked hosts to which it sent ELECT_MASTER, it will assume that it is the highest ranked host that is alive, and will then send a DECLARE_MASTER message to all other hosts in the zone.
4. When a server that has previously sent an ELECT_MASTER message to the higher-ranked host(s) in the zone receives an ELECT_MASTER_ACK from at least one of those hosts, it enters a wait state, waiting for the receipt of a DECLARE_MASTER from another host. If a configurable timeout expires before this DECLARE_MASTER is received, the host will increase its timeout and begin the election again.
At the conclusion of the election, each host will have received a DECLARE_MASTER message from the new zone master.

VMXNET 3: Supported Guest Operating Systems

VMXNET 3 is the newest NIC driver for VMs (it requires virtual hardware version 7). It should be chosen as the default for all supported guest operating systems; Windows Server 2000, however, is not supported. Here's a link to the VMware KB with more information. Remember that when you delete the old NIC and add a new one, all IP information is wiped and must be reconfigured (mostly relevant for static IPs).
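If the VM used a static address, the settings can be reapplied quickly from the command line inside the guest after the VMXNET 3 adapter is added. This is just a sketch; the connection name, IP address, mask, gateway, and DNS server below are placeholders to substitute with your own values:

  rem Re-apply a static IP configuration to the new adapter (placeholder values)
  netsh interface ip set address name="Local Area Connection" static 10.0.0.10 255.255.255.0 10.0.0.1 1
  netsh interface ip set dns name="Local Area Connection" static 10.0.0.2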


Sunday, 4 December 2011

Farm Metrics Server

The Farm Metric Server is a MetaFrame CPS (Citrix Presentation Server) server that is selected to correlate all farm metrics and server events as they relate to the entire farm. Currently, the only farm metric available is the application count for published applications. Valid server events identified by the Farm Metric Server are server down, server up, metric green, metric red, and metric yellow. Every time a server goes down or comes up, the Farm Metric Server is notified. The Farm Metric Server is also notified any time a server metric crosses a threshold.
The Farm Metric Server is also considered a CPS member server. It has the same responsibilities for the summary database functionality as other member servers. The only difference is that it also carries out the tasks associated with being a Farm Metric Server.

On XenApp 5 it is possible to configure farm metric servers in Resource Manager (the XenApp Advanced Configuration console). In XenApp 6, this feature is found in EdgeSight.

Saturday, 3 December 2011

Cyclic Boot Presentation Servers

When the IMA Service starts after restarting the server, it establishes a connection to the data store and performs various reads to update the local host cache. 
These reads can vary from a few hundred kilobytes of data to several megabytes of data, depending on the size and configuration of the farm.
To reduce the load on the data store and to reduce the IMA Service start time, Citrix recommends maintaining cycle boot groups of no more than 100 servers. 
In large farms with hundreds of servers, or when the database hardware is not sufficient, restart servers in groups of approximately 50, with at least a 10 minute interval between groups.
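A rough sketch of a staggered restart along these lines is shown below. It assumes a plain text file, servers.txt (a hypothetical name), with one server name per line, restarts the servers remotely, and pauses ten minutes after every group of 50; adjust the group size and delay to suit your farm and database hardware.

  @echo off
  rem Sketch: restart farm servers in groups of 50 with a 10-minute pause between groups.
  rem servers.txt is a hypothetical list with one server name per line.
  setlocal enabledelayedexpansion
  set /a count=0
  for /f "usebackq delims=" %%S in ("servers.txt") do (
      echo Restarting %%S ...
      rem /f forces running applications to close before the restart
      shutdown /r /f /m \\%%S /t 0
      set /a count+=1
      set /a grp=!count! %% 50
      if !grp! EQU 0 (
          echo Waiting 10 minutes before the next group...
          rem timeout requires Windows Server 2008 or later; on older servers use
          rem sleep.exe from the resource kit or a ping delay instead
          timeout /t 600 /nobreak
      )
  )
  endlocal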
 
Note: If the Service Control Manager reports that the IMA Service could not be started during a cycle boot of a Presentation Server but the service eventually starts, ignore this message. 
The Service Control Manager has a time-out of six minutes. 
The IMA Service can take longer than six minutes to start when the load on the database exceeds the capabilities of the database hardware. 
To eliminate this message, try rebooting fewer servers at one time.