Sites vs Zones in XenApp/XenDesktop 7.x – Design Considerations When Choosing Between The Two

Introduction

Zones, a key design element that administrators and architects learned to love in XenApp 6.5, were reintroduced in the XenApp and XenDesktop 7.7 FMA architecture. Prior to 7.7, building multiple sites was generally recommended when spanning multiple data centers or regions, but customers now have the option of leveraging zones. While zones are a potential option, they might not always be the right one for your situation. In this post, my goal is to review basic concepts around Sites and Zones and dig into design considerations to help you choose between the two.

Primer on Sites and Zones

Sites

A site is what you define when you deploy XenApp or XenDesktop under the FMA architecture. It acts as a logical boundary, with all defined objects being part of that site, and it is also an administrative boundary. Each site has one or more Delivery Controllers and requires its own site configuration database. A site always has one primary zone defined by default. Sites can span multiple data centers and regions, but there are a number of factors that need to be taken into consideration, which we will review a little later.

Zones

Zones are defined within a site to keep applications and desktops close to the user location while also simplifying administration by leveraging a single instance of Studio, Director, and the configuration database regardless of the number of zones. With zones, users in remote regions can get to their resources without having to traverse the WAN.

There are two types of zones: primary zones and satellite zones. A primary zone typically has two or more controllers and hosts the site configuration database locally, whereas a satellite zone can have one or more controllers. While similar in name, zones in the new 7.x FMA architecture are not the same as zones in XenApp 6.5. For instance, the concept of a zone data collector no longer exists.

With the introduction of Zone preference in conjunction with Optimal Gateway Routing, users can be homed to a specific zone when accessing their apps and desktops based on predefined conditions and rules. This greatly improves the user experience. Disaster recovery can also be handled intelligently.

For detailed information on Zones and Zone preference, I would recommend you review the official documentation. Carl Stalhood has a very good blog on this topic as well.

There is also a great overview of Zone Preference in the XenDesktop 7.11 Master Class starting at the 58 minute mark.

When to use Sites

While zones simplify overall administrative overhead and potentially reduce infrastructure requirements, leveraging sites is the more prudent choice in certain scenarios. Let's look into these:

Latency

Latency will impact user performance. Latency and concurrent user requests should be taken into consideration and tested before deciding to use zones. See the chart above for the different scenarios tested. There are two great blogs, one by Chris Gilbert and another by William Charnell, on how latency affects brokering performance from satellite zones in XA/XD 7.7, where they collect metrics under various latency conditions. Definitely worth a read. These metrics have improved significantly in 7.11 and above; in fact, at 250 ms latency, XenApp and XenDesktop 7.11 outperforms the 7.7 code at 90 ms. With 7.11 or later, users experience quicker brokering of resources, even with latency between a broker and the SQL Server. The official Citrix documentation covers latency and its impact on zones, the impact of registration storms, and how this can be tuned in great detail.

Fault Domains

When we talk about large deployments with greater than 5,000 users, it is a best practice to break the environment down into smaller PODs. This splits the environment into multiple fault domains so that when any one pod is affected, only a small set of users, if any, is impacted. Even when all users connect to a single data center, it is still beneficial to break the infrastructure down into multiple sites and PODs. Here are the slides from a great session at Synergy 2015 that covered the benefits of a POD-based architecture. This blog is also worth a read.

Administrative Boundaries/Regulatory Compliance

For environments that require complete administrative isolation between different regions or business units, going with separate sites is recommended. While Role Based Access Control is available, it does not meet the needs of every customer. In addition, I have worked with customers that have gone with multiple sites to isolate environments for compliance requirements such as PCI, or for regulated environments where upgrades are not as frequent.

While multiple sites require additional infrastructure, the resources from the various PODs can be aggregated from a user access perspective. Monitoring and troubleshooting can also be simplified, as Director can manage multiple sites. A number of tasks can also be automated by leveraging scripts, and image management can be greatly simplified by leveraging PVS.

When to use Zones

When designing a XenApp/XenDesktop infrastructure for an environment with multiple data centers where latency is a non-factor (within acceptable limits), zones can certainly be an option. The number of users per satellite zone can play a factor in making that determination, as discussed earlier. Fault tolerance should also be taken into account, as all the zones share one common site configuration database and connectivity issues could impact all users. The resources that users connect to can be controlled based on zone preference and failover.

Using a combination of Sites and Zones is also an option. For instance, if a customer environment is spread across the globe but also has multiple data centers within each region, they could use a site per region and then leverage zones for the data centers within each region, assuming low latency between the data centers. This would help reduce the overall complexity and administrative overhead when compared to deploying a site per data center.

From The Field

Here is some feedback from Jason Samuel, one of our CTPs, based on his experience.

“Most of my customers completed their migrations from 6.5 to 7.x when either zones weren’t available in FMA yet or was still new.  They went with a site per data center.  My bigger customers embraced localized pods within each datacenter itself.  This is often self contained pods built on HCI as the backend.  Application and image management is controlled through PowerShell scripts to help with administration of multiple sites.  Since these customers have been using this model for a few years now and it is a mature process for them, they continue with this approach.  My customers that are doing greenfield 7.x deployments are the ones that really consider zones vs. doing individual sites.”

Ryan McClure, Senior Architect at Citrix Systems, had this to say:

“So armed with this data and information, what should you do? Stick to multiple sites? Design with zones wherever possible? Some scenarios just beg for zones, while others are obvious use cases for sites/pods, but more commonly, both are technically viable and it is a matter of weighing the pros and cons. If your workload is mission critical and your deployment lives in one or two datacenters, multiple sites are probably a good option for you. They provide additional fault tolerance, shrink failure domains and increase flexibility during upgrades. If, on the other hand, you have a number of semi-well connected locations where application back-ends reside, one site per location may prove prohibitive from an administrative perspective. These sorts of deployments are where zones should really be considered. The combination of sites and zones also shouldn’t be overlooked. The geographic distribution cited above is one example, but sites and zones can also be combined to strike a balance between manageability and availability. Rather than all VDAs in a zone mapping to a single primary site, multiple primary sites can be deployed.

When the decision isn’t obvious, our most successful customers ask the same question:

“What are other customers in similar situations doing?”

The strategy around sites and zones definitely isn’t one size fits all, but up until now, most of our large enterprise customers have gravitated towards separate sites. Many do so based on their desire to shrink failure domains and minimize risk wherever possible. You may have even heard recommendations to skip zones because sites have been available longer in the FMA world. At the time, this recommendation may have made sense, but the IT space is as dynamic as ever and leading practices need to be updated with the times. Over the last few months, this trend around steering clear of zones has started to shift, and more customers are taking a hard look at how zones can help simplify environment management. In most scenarios, zones shouldn’t be viewed as a total replacement for sites, but if your deployment can be simplified and/or management streamlined by implementing zones where they make sense, now is the time to give them a good look.”

Final Thoughts

Zones in XenApp/XenDesktop 7.7+ are a welcome addition and offer greater flexibility when planning out deployments. However, as discussed above, they are not necessarily the solution for every use case. Latency, the number of users per location, concurrent logins, etc. need to be carefully considered before deciding whether to go with multiple sites or leverage zones instead.

Is Samsung Chromebook Plus The Perfect Chromebook?

Over the past couple of years I’ve been collecting a lot of Chromebooks. As of the 13th of February, I now own six, mostly Acer and Samsung devices. As much as I love the concept of a low-cost, ultra-portable and secure thin client with excellent battery life, leveraging Citrix for my enterprise apps, it always felt like there was something missing. Some of the common complaints were display resolution, build quality, lack of offline access and the lack of a good touch-screen model under $500.

Needless to say, I was extremely intrigued when Samsung announced the 12.3-inch Chromebook Plus and its price point. I pre-ordered the device and got mine earlier this week. My experience so far has been terrific. Let's look into why I feel this device is close to perfect.

Design

The Samsung Chromebook Plus is a 12.3-inch laptop that also converts into a tablet. It is powered by an OP1 hexa-core (dual A72, quad A53) ARM processor with 4GB of RAM and 32GB of storage. It comes with two USB Type-C ports and a microSD slot. It has various display modes, very similar to the Lenovo Yoga, and a full metal design that weighs just 2.4 pounds. It comes with a stylus that pops out of the right side of the system, letting you take notes with Google Keep and other apps, and it is smart enough to recognize characters, allowing you to search through your handwritten notes afterward.

Display Resolution

Resolution has been one of my biggest gripes with Chromebooks so far, and boy does this device address that issue. The Chromebook Plus comes with a quad HD (2400 x 1600) screen made with Gorilla Glass 3 and a 3:2 aspect ratio. The high resolution means my Citrix VDI instance looks absolutely spectacular on this device. Lots of real estate too!

Battery Time

Based on my testing so far, the battery life of the Chromebook Plus is on par with every other Chromebook I own; I’m getting approximately 9-10 hours. Keep in mind that the resolution of this device is also one of the best, which makes the battery life extremely impressive.

Android Apps!

This to me is a GAME CHANGER!! As you know, Google announced support for Android apps on Chromebooks last year. The challenge was that just a handful of devices were actually supported, and among those only one had a touch screen. Personally, I believe Android app support is pointless without a touch screen. Thankfully, the Chromebook Plus has one! The combination of Android app support, great resolution and a touch screen makes it the perfect device. I now have a number of key productivity apps, many of which I can use offline. Some of my favorites so far are Citrix Secure Mail, Secure Web, ShareFile (Enterprise File Share and Sync), Slack and Skype for Business, to name a few.

Touch Screen

The touch screen is extremely responsive. No lag whatsoever. It works great in tablet mode and when using Android apps. All Chromebooks moving forward need to be touch enabled, IMHO. You cannot effectively use Android apps without touch!

Stylus!

The Chromebook Plus comes with a pressure-sensitive stylus that is on par with others like the Surface Book’s. Is it perfect? No. But it’s quite good. I can totally see myself using this device to whiteboard or sketch a design while I am at a customer site. Very handy!!

Final Thoughts

Today was my first day out on the road with just the Chromebook Plus, and I honestly did not miss my XPS 13. I accessed my Citrix VDI instance the entire time, and the experience has never been this good on any of the other Chromebooks I own. I also used a number of Android apps, including Skype for Business, ShareFile, Secure Web and others. The combination of VDI, the Chrome browser and native mobile apps is quite amazing. I used the system for around 5 hours and did not run into any issues during that time.

At $449, this device is a steal! If you are looking for a Chromebook today, this should be on your list of favorites. If I were to change one thing, I would add more memory to this device; Android apps can eat up memory fast!

Kudos to Samsung for a job well done!

Citrix acquires Unidesk: Here’s why customers should care!

Application layering has been a hot topic in the End User Computing space, especially over the last 24 months or so. Layering allows you to decouple applications, or groups of applications, from the underlying operating system, thereby enabling you to manage them independently. There are quite a few players in this space, including AppVolumes from VMware, FlexApp from Liquidware Labs and Citrix’s own AppDisk, to name a few. But there is no arguing that Unidesk has been around the longest and has the most mature and comprehensive solution.

With today’s announcement from Citrix around the acquisition of Unidesk, customers have even more flexibility in terms of how applications and workspaces are delivered to their end users whether the workloads are running on premises or in the cloud.

Before we get into the key benefits of Unidesk and why this acquisition adds tremendous value, it’s important to understand some of the challenges that Citrix customers face. A good place to start is this survey that Unidesk conducted.

The Problem At Hand

1. Image Management – Today, both PVS and MCS customers have to maintain multiple images. Larger environments sometimes manage and maintain over 10 images on a day-to-day basis. One of the reasons for this is business units needing one-off applications, leading to various silos. The administrative overhead involved in maintaining the images sometimes leads to needing dedicated resources who solely focus on image updates, testing and deployment.

2. Pooled desktops and assigning layers at runtime – Most Citrix customers are forced to use persistent desktops for certain use cases today due to users needing different sets of applications. If there were a way to decouple applications from the OS and deliver them dynamically at login based on user privileges, then the same pool of desktops could be used for multiple use cases, thereby reducing infrastructure and operating costs.

3. As customers move workloads to the cloud, there are new challenges that surface when it comes to image management. These need to be addressed in order to reduce cost, improve performance and thereby increase cloud adoption.

4. Not every application can be delivered via XenApp. Some applications need to be installed locally. App-V has been an alternate technology that a number of customers use but many still like to have the ability to install these locally.

5. While AppDisk provided layering, it had various limitations, including the inability to attach layers at run time and the inability to use layers with persistent desktops. AppDisk also lacks true version management and rollback.

How the Unidesk acquisition helps address these issues

1. Unidesk already has a large number of Citrix customers and tight integration with both XenApp and XenDesktop. It is a proven technology at scale, a preferred Microsoft partner for application and image management, and well regarded in the partner community.

2. Unidesk has connectors for PVS and MCS thereby simplifying application delivery and eliminating the need to manage and maintain multiple images.

3. Unidesk provides flexibility in how layers are delivered: either at boot, or dynamically into running session hosts without a reboot. Unidesk has a feature called Elastic Layering that allows layers to be attached at run time. In a XenApp environment, for instance, since applications are attached at run time, different user groups can be assigned different applications while connecting to the same server. This eliminates the need for silos.

4. Application compatibility is no longer a concern as Unidesk supports layering applications that have drivers and system service dependency and even apps that run while users are logged out.

5. Unidesk supports layering for persistent desktops in addition to XenApp and pooled desktops, thereby addressing every use case. Persistent layers can also be assigned to users even while using XenApp. This allows administrators to provide end users with a more cost-effective VDI option with persistence, based off of XenApp.

6. Full lifecycle management of layers across your environment, with version control, rollback, etc.

7. Unidesk’s approach to layering is fundamentally different. A layer is assigned per application. Administrators then have the ability to create a profile, so to speak, consisting of the various layers for a user group. These layers are then combined into a single VHD that is attached at boot or at run time, depending on the assignment. Since the number of VHDs mounted is minimized compared to other layering solutions, performance is greatly improved and login times are reduced.

8. Cloud adoption has increased steadily over the past couple of years, and customers are more inclined than ever to start moving workloads to public clouds, especially Microsoft Azure. The Azure connector from Unidesk simplifies image management in the cloud. Layered images can be assigned to different Azure collections. In addition, all image collections can be updated by patching the OS and app layers only once. The Unidesk appliance can also run in Azure and is available via the Azure Marketplace. When you combine Citrix Cloud with Unidesk, there is definitely a better story to be told around deploying and managing VDI workloads in Azure.

Final Thoughts

The Unidesk acquisition, along with our recent acquisition of Norskale, helps customers further reduce infrastructure costs while increasing operational efficiencies and guaranteeing the most optimal end user experience. For customers running VDI in the cloud or considering the move, Unidesk is a great new addition and will simplify image management. Citrix’s position as the industry leader in End User Computing is further solidified.

How to enable Local Host Cache in XenApp/XenDesktop 7.12

Local Host Cache (LHC), which was a key feature of the IMA architecture in XenApp 6.5 and earlier, was reintroduced in the FMA-based XenApp/XenDesktop 7.12 release. You can learn more about LHC in detail in my previous blog on the topic.

Prior to 7.12, users were able to access resources (with some caveats) during a site database outage using a feature known as Connection Leasing. When upgrading to 7.12 from an earlier release with Connection Leasing enabled, LHC is disabled by default.

To enable LHC, run the following PowerShell command on the upgraded broker:

Set-BrokerSite -LocalHostCacheEnabled $true -ConnectionLeasingEnabled $false

The above command enables Local Host Cache and disables Connection Leasing.

The Get-BrokerSite cmdlet provides the current state of Local Host Cache (whether it’s enabled or disabled).
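As a quick sketch of that check, run on a Delivery Controller with the Citrix Broker SDK snap-ins available (the snap-in load line is the usual pattern; adjust to however your environment loads the SDK):

```powershell
# Load the Citrix PowerShell snap-ins if they are not already loaded
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

# Show the current state of Local Host Cache and Connection Leasing
Get-BrokerSite | Select-Object LocalHostCacheEnabled, ConnectionLeasingEnabled
```

If LocalHostCacheEnabled comes back False after the upgrade, the Set-BrokerSite command above is what flips it on.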

To disable Local Host Cache and enable Connection Leasing, run the following command:

Set-BrokerSite -LocalHostCacheEnabled $false -ConnectionLeasingEnabled $true

XenApp/XenDesktop 7.12 Local Host Cache Explained

With the release of XenApp and XenDesktop 7.12, Citrix brought back one of the most requested features from the XenApp 6.x days: the Local Host Cache (LHC). For those of you new to this term, it essentially provides a way for users to connect to their XA/XD published resources while the SQL-based site database is down, by keeping a local cache on the brokers themselves. LHC now replaces Connection Leasing in 7.x as the primary mechanism for brokering connections when connectivity to the site database is disrupted. In this post, my goal is to dig into the architecture of Local Host Cache in 7.12 and how it works.

Architecture:

(Diagram: Local Host Cache architecture, from the Citrix documentation)

The above diagram from the Citrix documentation shows the architectural components that make up the Local Host Cache. The feature is disabled out of the box when XA/XD 7.12 is installed. If you are upgrading from a previous version, LHC will be disabled under certain conditions. See the table below for further details.

(Table: Local Host Cache state after upgrading, from the Citrix documentation)

With LHC, users can connect to apps and desktops that they have not previously connected to. This was not possible with Connection Leasing, where users could only connect to resources that they had previously connected to.

Every broker now has three services: the primary broker service, the secondary broker service and the Config Synchronizer Service (CSS).

LHC synchronization during normal operations (central database connectivity intact)

  • During normal operations, the primary broker service communicates with the site database while the secondary broker service remains idle. The CSS makes sure the local database on each of the controllers is synchronized periodically.
  • Primary broker service accepts connection requests from Storefront, then communicates with Site DB and provides users access to VDAs registered with the controller and that they request access to.
  • Every 2 minutes, a check is made to see if there have been any changes to the primary broker config.
  • If a change is detected, then the primary broker uses the Citrix Config Synchronizer Service (CSS) to copy configuration to a secondary broker. This is not an incremental copy but a full copy from the primary broker to the secondary broker.
  • Secondary broker then imports the configuration to a local SQL Server Express database on the controller.
  • Once the config is copied the CSS service confirms that the config on the secondary broker matches the config on the primary broker.
  • Local DB on the secondary broker is recreated each time a config change is detected on the primary broker (checked in 2 minute intervals)
  • Secondary broker runs as a Windows service called Citrix High Availability Service

What happens when there is an outage and database connectivity is lost

  • During an outage, the primary broker can no longer connect to the site database and stops accepting connections.
  • Primary broker instructs secondary broker to start listening for and processing connection requests. An election process ensues to determine which controller takes over the secondary broker role. There can only be one secondary broker accepting connections during a site db outage.
  • When the VDAs start communicating with the secondary broker, a re-registration process is triggered and the secondary broker gets current session information about the VDA.
  • During the outage period, the primary broker continues to monitor the connection to the site database and when connectivity is restored, it instructs the secondary broker to stop listening for connections and the primary broker resumes brokering connections thereby restoring normal operations.
  • When a VDA communicates with the primary broker after it has taken over brokering, a re-registration is triggered.
  • Once normal operations resume, the secondary broker removes all VDA registration information, continues checking for config updates on the primary broker every 2 minutes, and updates its LHC when changes are detected.
  • If an outage occurs during an LHC synchronization, the current import is discarded and the last successfully imported config is used.
  • It is important to note that during an outage, only one active secondary broker is available. So from a scalability perspective this could be a limitation. The secondary broker as mentioned earlier is chosen based on an election mechanism.

Local Host Cache and Citrix Cloud

  • If you are currently leveraging Citrix Cloud for your XA/XD control plane, the LHC functionality ensures that a loss of connectivity to the control plane does not prevent users from accessing their resources.
  • LHC synchronization occurs the same way as it would in an on-premises XA/XD deployment, with config changes synchronized from Citrix Cloud via the Cloud Connector.
  • To provide fault tolerance when connectivity to Citrix Cloud is lost altogether due to a WAN link failure, Citrix StoreFront and potentially NetScaler would need to be on premises.

Local Host Cache Restrictions

  • You cannot run Studio or PowerShell cmdlets while LHC is active and site database connectivity is down.
  • Site configuration changes cannot be made while connectivity to the central database is unavailable. This is very similar to the IMA-based LHC implementation in XenApp 6.x.
  • New machines cannot be provisioned as hypervisor interaction is not possible when LHC is operational.
  • Users cannot be assigned new resources during the site database connectivity outage.
  • Machines with a “Shut down after use” configuration will be placed in maintenance mode when LHC is operational

Troubleshooting

The two main tools to troubleshoot LHC are the Windows Event Logs and CDF traces.

  • The Config Synchronizer Service logs events in the Windows event logs in relation to LHC synchronization. If no config changes occur during the 2-minute intervals, no events are logged. When CSS receives a config change, event ID 503 is logged. If the update to the secondary broker succeeds, event ID 504 is logged; if the update fails, event ID 505 is logged.
  • When the secondary broker takes over during an outage, event log entries are made indicating that the Citrix High Availability Service has started handling brokering. Once services are restored, you would see logs indicating that the Citrix High Availability service has stopped brokering. There will also be events related to secondary broker election. Event IDs include 3502, 3503,3504 and 3505. When Citrix Cloud is in play, XA/XD proxy log events are present. CDF traces can also be used for advanced troubleshooting.
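To pull the CSS synchronization events quickly, a filter along these lines can be used on the controller. The event IDs come from the list above; the log name is an assumption here (check which log CSS writes to in your environment and adjust accordingly):

```powershell
# Query the controller's event log for LHC sync events
# (503 = config change received, 504 = sync succeeded, 505 = sync failed)
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 503, 504, 505 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, ProviderName, Message |
    Format-Table -AutoSize
```

A run of 505s is the quickest signal that the secondary broker's local database is stale and LHC may not have a current config to fall back on.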

Enabling Local Host Cache After Upgrading

Local Host Cache is not enabled by default when upgrading from an earlier version of XenApp and XenDesktop 7.x. I have written a blog on how to enable LHC after an upgrade.

Getting Started with the Citrix HDX Pi – A step by step walkthrough


A few months back, I wrote a blog on how to configure the Raspberry Pi as a thin client to access Citrix workloads. If you are completely new to the HDX Pi and want to learn more about its benefits, that is a good place to start. Since then, Citrix announced the HDX Pi, and I have received requests from members of the community to blog about configuring it. So here it is!

What you need:

  • One or more HDX Pis (Micro Center edition)
  • ThinLinx Management Software (TMS)

Configuration

The HDX Pi comes pre-licensed for the ThinLinx Management Software (TMS), so you can go to the ThinLinx website, download TMS and install it on a Windows PC. Once installed, run TMS.

Connect the HDX Pi to the network in addition to the obvious (keyboard, mouse, display). Once the Pi boots up, you will see the client within TMS.


You can now update a number of parameters and push files to the device from within TMS:

  • Change the name
  • Change protocol to HDX if you prefer
  • Push SSL certs if needed (If you are using private certs on Storefront for instance)
  • Change network parameters (if you don’t want to use DHCP, for instance, or want to use a custom DNS server)
  • Change display parameters.


TMS is also how you would push new firmware to the device.

Once you are done with the configuration changes, reboot the device. Once rebooted, you should see the updated parameters within TMS.


Once rebooted, you will have to specify the URL that you want the Pi to connect to. This is your NetScaler Gateway URL.

After you enter the URL, you will be prompted for credentials.

Once authenticated by the NetScaler, you are prompted to pick the store, after which you see your applications and desktops.

Some Caveats to keep in mind

One catch with TMS today is that the URL does not persist unless you save it on the Pi itself. To do this, while at the StoreFront screen, use the Ctrl+Alt+C key combination, hit “Save Settings” and reboot. The HDX Pi will now authenticate and take you right to your apps once rebooted.

The TMS server will only discover devices on the same subnet. So make sure that your TMS server and Pi are on the same subnet while configuring the devices or else discovery will fail.

A ViewSonic version of the HDX Pi is also available. However, the configuration procedure is a little different and will be covered in a future blog post.

Once the configuration URL is saved, as mentioned earlier, the device will boot straight into StoreFront using the credentials provided initially. In order to configure a new store, you can clear the config and reset to defaults on the device, or you can factory reset the device via TMS.

Keyboard Shortcuts:

  • Ctrl+Alt+R (twice) – factory reset
  • Alt+F4 – exit the HDX screen
  • Ctrl+Alt+V – volume
  • Ctrl+Alt+C – config screen
  • Ctrl+Alt+T – terminal

To learn more about performance, check my previous blog. I look forward to your feedback!

LUMA SURROUND WIFI SYSTEM – MY INITIAL THOUGHTS


Back in February, I noticed Luma on Kickstarter. What made it compelling to me was the concept of a mesh network, which could in theory remove all the dead spots around the house and guarantee excellent network throughput everywhere.


Like most engineers out there, I have a large number of wireless devices throughout the house, and consistent throughput has always been a concern. In addition, Luma promised some interesting security features, including the ability to link users to devices on the network and then apply parental controls on a per-user basis. The product also promised to proactively monitor the security posture of connected devices (done through a cloud-based service). For these reasons, I pre-ordered a 3-pack and received my devices earlier this week. I finally hooked up the devices and wanted to share my initial thoughts.


SETUP

Setting up the Luma is a breeze! You literally hook up one of the devices to your modem or router, download an app via Google Play or the Apple App Store, and it walks you through the entire setup step by step. I have my Linksys 1900AC and Luma running in parallel, both hooked up to my modem, and I use the Luma primarily for media streaming devices. One of the access points did not configure successfully the first time during setup, and I had to go through the process again. The process was extremely simple and intuitive nonetheless. Luma is geared toward those who have zero knowledge about networking; even my mom would be able to get through the configuration process successfully.


The tool recommends ideal locations to place the access points, but after following the recommendations I noticed that the throughput was not optimal. I eventually ended up placing the access points as close to each other as possible on the three floors, and that seemed to give the best throughput.


THE GOOD

  • As discussed above, the setup is extremely intuitive and simple
  • The product will appeal to most consumers who are not tech savvy due to the simplicity of the setup and exceptional network coverage.
  • Linking devices to users and applying parental controls on a per user basis is an awesome feature that appeals to parents like myself.
  • You can completely eliminate dead spots throughout your house while maintaining consistent throughput everywhere
  • Network security scans that monitor the security posture of all your connected devices are a nice feature, although the scanning is cloud-based.
  • The iOS and Android apps are very well designed, although they are pretty limited in features as of now.

THE NOT SO GOOD

  • Zero configuration options from a network perspective besides setting up a WiFi network. Not even the most basic settings.
    • No DHCP configuration options (scope, reservations, lease time etc)
    • No port forwarding
    • No advanced firewall options
    • No QoS settings
  • Cannot configure multiple wireless networks (beyond the guest network), and cannot separate the 2.4 GHz and 5 GHz networks
  • Requires you to set up a cloud-based account and uses a cloud-based network scanning solution that cannot be turned off. This is a major concern for some.
  • From what I can tell, the Luma acts as a forward proxy and also handles DNS resolution, which is NOT CONFIGURABLE. I can see why this is necessary to filter traffic and apply parental controls. However, I noticed a significant delay in DNS resolution (up to 5 seconds) when trying to resolve URLs. This is extremely unappealing and a major show stopper for me. I also assume that this data is flowing through their cloud service, which is concerning.
  • While streaming video, I am so far noticing periodic network drops and freezing, which I never experienced with my Linksys 1900AC. I will need to investigate further.
  • The throughput offered by the first wired Luma that you set up is almost three times higher than the rest. I am getting 300 Mbps on the main Luma and only about 100 Mbps on the other two Lumas. I had read some reviews that report the same flaw. With that said, 100 Mbps is not bad 🙂
  • A bunch of features that were promised on Kickstarter have not made it into the initial release. However, the support team tells me that they are updating the mobile app twice a month and releasing firmware updates aggressively, so I’m pretty certain that they’ll catch up.
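The DNS delay called out above is easy to quantify. Here is a minimal sketch, assuming a Linux machine on the Luma network with getent and GNU date available (the hostnames are just examples):

```shell
# Time a DNS lookup in milliseconds via the system resolver
# (sketch; assumes GNU date with nanosecond %N support)
measure_dns() {
    local host=$1
    local start end
    start=$(date +%s%N)                  # timestamp before the lookup
    getent hosts "$host" > /dev/null     # resolve through the system resolver
    end=$(date +%s%N)                    # timestamp after the lookup
    echo $(( (end - start) / 1000000 ))  # elapsed milliseconds
}

# Compare a few lookups to spot multi-second outliers
for h in localhost example.com; do
    echo "$h: $(measure_dns "$h") ms"
done
```

Anything consistently in the multi-second range points at the upstream resolver (in this case, whatever Luma forwards to) rather than your client.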

THOUGHTS OVERALL

In addition to Luma, there are a few other products out there that offer a similar solution, Eero being the most popular. The real differentiator with Luma (at least in theory) was the advanced parental controls and the fact that the devices themselves are beefier, with quad-core processors and such. While I am honestly a little disappointed with my initial experience, especially with regard to all the missing advanced network features, I am cautiously optimistic that these will be rolled into the product soon. So here are my initial thoughts:

  • If you are not tech savvy, want a really simple solution that just works and provides consistent coverage everywhere, and don’t care about advanced network features, then LUMA IS FOR YOU!
  • If cloud hosted services are a concern, then Luma is not for you.
  • If you already own a Linksys 1900AC or a similar advanced router, I would honestly hold off for now and wait to see how the product evolves. There are too many gaps as of now.
  • If your main reason for looking at the solution was parental controls, it might be a fit; however, not all the features have been rolled out yet. You could look at Circle from Disney to complement your existing wireless solution. You could also take a closer look at Eero.

I will update this post as I have more information to share!

Citrix AppDisks How-To Guide – Administration Basics and Gotchas

AppDisk, an application layering solution, was introduced in XenApp/XenDesktop 7.8, released in late February this year. This post is not meant to cover the basics of application layering or image management as a whole; you can refer to my blog for a quick overview. My goal in this post is to cover the administrative aspects of application layering using Citrix AppDisks. With that said, let’s dig right in!

Creating an AppDisk

There are a couple of approaches to creating an AppDisk. The first method is to manually create it at the hypervisor level and then import it within Studio. The second approach is to create and assign the AppDisk right from within Studio. You can read more about both approaches here.

To create an AppDisk from within Studio:

Click on the AppDisks node within Studio and then select “Create AppDisk” from the Actions menu.


On the next screen, select the size of the disk. There are predefined options of 3, 20, or 100 GB, or you can pick a custom size. This is also where you would import an existing AppDisk that you created manually. Keep in mind that on a 3 GB AppDisk, a good chunk of the space is already used up, so you will most likely get less than 1 GB for any new applications you are looking to install into that layer.


Next, select the machine catalog to use for the VM on which applications will be installed into this AppDisk. Only compatible options are made available. For instance, in the screenshot below, the only two options available are the NonPersistentVDI catalog and the Win 7 Pool; reasons are provided as to why the remaining machine catalogs are not available. It is also worth noting that AppDisks can only be assigned to pooled random catalogs. The machine catalog must have at least one available VM for the AppDisk creation to work.


Next, give the AppDisk a name and the creation process initiates. In my lab, I have seen anywhere from around 10 minutes for a 3 GB disk to under 20 minutes for a 20 GB AppDisk (SATA storage). Creation of these disks on SSD storage was about 30% faster.

Once the AppDisk is created, you can install the required applications.


Installing Applications within an AppDisk

Within Studio, click on the newly created AppDisk. It should say “Ready to Install Applications”. Under the details section for the AppDisk, the preparation machine information is provided. Within the hypervisor management console, log in to the preparation machine and install the required applications.

Once you have installed the applications, highlight the AppDisk within Studio and, under the Actions pane, select “Seal AppDisk”. This starts the sealing process, and once it completes, you can run an AppDNA compatibility analysis for that AppDisk.


Keep in mind that AppDisk layering cannot be used for applications that have file system drivers and services. AppDisk also does not include application isolation; App-V or Turbo.net provide that functionality.

Configuring AppDNA and Analyzing an AppDisk for Compatibility Issues

The main differentiator between AppDisk and the other layering technologies out there is the integration with AppDNA for delivery group compatibility analysis. For instance, once we create an AppDisk, we can test compatibility against multiple XenApp images or a pooled Windows 10 delivery group. This gives the administrator assurance that the AppDisk is going to work with that delivery group without having to go through extensive regression testing. When you have multiple AppDisks assigned to a delivery group, the AppDNA compatibility analysis also makes sure that all the AppDisks play well together and reorders the AppDisk assignment if need be based on the analysis. AppDNA integration is a XenApp/XenDesktop Platinum-only feature.

Before you can run any compatibility analysis, AppDNA needs to be configured within Studio. Click on the AppDNA section under configuration and specify the AppDNA connection settings. Make sure the connection test passes.


Getting back to where we were in the AppDisk creation, we had just started the sealing process. Once this process is complete, the AppDNA compatibility analysis will automatically kick in if AppDNA connection settings are configured. The compatibility analysis is done against the machine catalog that the preparation machine belongs to. When you assign an AppDisk to a delivery group, compatibility analysis is carried out automatically against that delivery group. If there are multiple AppDisks assigned, then the AppDisks will be reordered if needed based on the analysis. There is an option to “Auto Order” the AppDisks when you assign an AppDisk to a delivery group. 


To view the report, click on “View Report” next to the AppDisk that you just sealed.


You can also view the reports from within the AppDNA console under the reports section. Here you have various views including the Application Issues, Application Actions, Issue View and Action View.


Assigning an AppDisk to a Delivery Group/Groups

To assign an AppDisk to a delivery group, click on Delivery Groups within Studio and highlight the delivery group that you want to assign the AppDisk to. Under the Actions pane, select “Manage AppDisks”.


The next screen shows you the currently assigned AppDisks and gives you the ability to add AppDisks. Once you assign your AppDisk, select Auto Order.


You can then select the rollout strategy: either reboot all the machines within that delivery group immediately or assign the AppDisk at the next machine reboot. You can then review the configuration and click Finish. This initiates an AppDNA compatibility analysis if you have XenApp or XenDesktop Platinum entitlement and have configured your AppDNA server within Studio.

You can assign an AppDisk created with one OS to delivery groups running other OSes as well, so long as the application is compatible with the target OS. Within my lab, I tested assigning two AppDisks created with a Win2k12 preparation VM to a Win 7 random pool.

To assign an AppDisk to a delivery group, that delivery group needs to be using the same storage. To assign an AppDisk to a delivery group on different storage, you would have to create a new VM at the hypervisor level tied to the target storage, clone and associate the AppDisk with the new VM, and then reimport it within Studio. I am hoping this process will be simplified in upcoming releases of the product.
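On XenServer, the clone step described above can be scripted with the xe CLI. This is a rough sketch only: the UUID values are placeholders you would look up in your own environment, and the command is built as a string and echoed as a dry run so nothing executes by accident.

```shell
# Sketch: copying an AppDisk VDI to a different storage repository on XenServer.
# The UUIDs are placeholders; look them up with 'xe vdi-list' and 'xe sr-list'.
APPDISK_VDI="<appdisk-vdi-uuid>"
TARGET_SR="<target-sr-uuid>"

# Dry run: print the command; run it directly on the XenServer host
# once the real UUIDs are filled in.
CMD="xe vdi-copy uuid=$APPDISK_VDI sr-uuid=$TARGET_SR"
echo "Would run: $CMD"
```

After the copy completes, you would attach the new VDI to a VM on the target storage and import it within Studio as described above.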


Updating an AppDisk

Currently there is no version management built into AppDisk. This means that each time you need to make an update, you are essentially cloning the existing AppDisk, making changes to it and then reassigning the new AppDisk to the Delivery Groups. It is also worth noting that you CANNOT resize an AppDisk when creating a new version.  

To update an AppDisk, click on the AppDisk node within Studio, highlight the AppDisk you would like to update and select “Create New Version” from the Action pane.

On the next screen, select the Pooled Random machine catalog that you would like to use for the preparation VM. Again a VM needs to be available within that Machine Catalog to perform the update.

You then name the AppDisk with version information and click “Create New Version”. This kicks off the AppDisk creation process as detailed earlier. AppDNA compatibility analysis will be carried out against the preparation VM machine catalog once the new version of the AppDisk is created.

Once the new version is ready, you can assign the AppDisk to the required delivery groups and unassign the old version. This will once again kick off the AppDNA compatibility analysis.


Resizing an AppDisk

There are no options to resize an AppDisk from within Studio today. You would have to resize it at the hypervisor level and then reimport and reassign the AppDisk. I am hoping that this is addressed in the near future.

Deleting an AppDisk

To delete an AppDisk within Studio, click on AppDisks, highlight the AppDisk you would like to delete and select “Delete AppDisk” from the Action pane.


Final Thoughts

As I described in my previous blog on image management, AppDisk takes us one step further in simplifying image management. However, app layering is not a one-size-fits-all solution and should be used in conjunction with other solutions like application isolation and the like. I am quite impressed with AppDisks for a v1 product. The performance has been very good, considering I conducted most of my testing in my lab on SATA storage. However, I do hope that certain administrative tasks (like AppDisk resizing and versioning) improve in the near future.

XenApp/XenDesktop 7.8 – A Big Step Forward In Image Management


Citrix released XenApp and XenDesktop 7.8 on 02/25, and with it came numerous feature enhancements. In this post, I want to focus on two of these features, as they address a major challenge most Citrix administrators have to deal with today.

The Problem

It is safe to say that every enterprise customer I work with uses Provisioning Services for XenApp and pooled VDI for the management, storage, and performance benefits. However, a majority of these customers end up having to manage multiple images (sometimes more than 10). In most cases, applications are locally installed; in a few cases, App-V is used in conjunction with locally installed apps; and on rare occasions, SCCM/LANDESK and similar ESD tools are used. For pooled desktops, it’s a combination of locally installed apps and apps delivered via XenApp for the most part, with third-party tools used on some occasions. The net result is that multiple dedicated resources spend most of their time updating these images and managing application updates.

So what does XA/XD 7.8 offer to solve this problem?

AppDisk

I constantly have discussions with my customers around how to solve the problem of image management, and it usually boils down to separating the applications from the operating system as far as possible. XA/XD 7.8 introduces AppDisk, which provides the ability to manage your applications independently of the base image. AppDisk falls under application layering, which has been around for a while now. You can add any number of applications to an AppDisk, and the AppDisk can then be tied to multiple machines at the same time running different operating systems. So if you are an enterprise customer with multiple XenApp silos today (because different business units require different applications, for instance) and you manage multiple PVS images for this purpose, you could potentially cut down to one image per OS and then use AppDisk to layer the applications, making management of the images a lot easier. Not only that, application updates become a lot easier and maintenance windows shrink significantly. And if you wanted to replicate your applications across multiple datacenters, it is as easy as copying these AppDisks over.

Integration of AppDisk with AppDNA

There are a number of vendors today that offer layering solutions, including some that partner with Citrix. What truly differentiates AppDisk is the integration with AppDNA. When there are multiple layers tied to a delivery group, for instance, AppDNA lets the administrator know how a change in one layer could potentially impact compatibility between layers and can reorder the layers if needed. Similarly, AppDNA can also inform the administrator if an app layer is incompatible with a specific OS. So if I were to tie the same AppDisk to multiple delivery groups delivering different operating systems, thanks to AppDNA, I can quickly determine whether that AppDisk is compatible with the target OS. This is truly a differentiator and removes a lot of the guesswork and manual labor involved in compatibility analysis.

With all that said, layering is not a one-size-fits-all solution for application deployment. There are various challenges. When you use multiple AppDisks, for instance, it is important to understand the dependencies between layers to make sure the layers work with each other and there are no conflicts. In large environments, there could be hundreds of layers, each with a large number of applications, so management could get complex. Also, AppDisk is not supported on dedicated desktops today, and it is important to note that PvD and AppDisk cannot be used together today.

App-V Packages

Another key feature in XenApp/XenDesktop 7.8 is the ability to publish App-V packages that are stored in a network share without needing the App-V infrastructure. The process is no different from publishing a natively installed application. You may ask why even go down this path when you could address most use cases directly with AppDisk. There are a couple of reasons. First, AppDisk does not provide application isolation. So, if you require application isolation, perhaps to run multiple versions of the same application for instance, you would need to use a technology like App-V. Secondly, if you already have your desktop teams leveraging App-V to sequence packages, it makes sense to deploy the same packages within your Citrix environment instead of reinventing the wheel.

Final thoughts

It’s human nature to be enamored by the latest shiny toy. But in the case of application management, there is no one-size-fits-all solution. With the XA/XD 7.8 release, there are various options available for packaging and delivering applications thanks to the tools Citrix added. Does that mean these tools will address 100% of the use cases out there? Probably not. We have a number of partners who add further value through their solutions; fine examples are Liquidware Labs, FSLogix, and Unidesk.

I believe that a lot of enterprise deployments will continue to deploy core applications natively in the base image, either locally installed or using App-V and the like. However, AppDisk with AppDNA is a great solution for managing business-unit-specific applications that were siloed in the past and substantially increased infrastructure and operational overhead. To conclude, I would highly recommend that you try XenApp/XenDesktop 7.8 in a lab environment and get familiar with AppDisk and App-V package deployment.

 

Step by step guide on configuring the Raspberry Pi to deliver Citrix Apps and Desktops to your End Users!


Why The Raspberry Pi?

In working with my customers over the years, endpoint management is something most struggle with to this day. Some choose to still provide their end users with fat clients, having to figure out how to manage the operating system and applications while making sure the device is secure. This tends to be a daunting challenge both from an operational and a financial perspective. Others choose to leverage thin clients when possible but struggle to decide what the right device is from a price and functionality perspective. A lot of times, they spend upwards of $500 on thin clients that still run a Windows Embedded OS that needs to be managed, which in some ways defeats the purpose of a thin client. While this is not true in every case, I would say that the endpoint management dilemma is one of the biggest factors in virtualization initiatives stalling at my enterprise customers.

Over the past couple of weeks, I have been taking a closer look at the Raspberry Pi. For those of you not familiar with the Raspberry Pi, I would highly recommend you check this out. While the use cases for the Pi are immense, what piqued my curiosity were recent blogs by Martin Rowan and Trond Eirik Haavarstein around how they leveraged the Pi as a thin client replacement for Citrix workloads.

Now before we go further, it’s important to understand why this was of interest to me. First off, the device can be made highly secure by running a stripped-down Linux OS. Secondly, a Raspberry Pi 2 costs roughly $35; tack on a case and adequate storage and the device is still under $50. So if there were a way to effectively deliver Citrix workloads on this device, it would be the cheapest thin client out there! Not to mention a simple support and maintenance strategy: GET A NEW ONE! 🙂

How Does One Get Started?

I decided to get myself a Raspberry Pi 2 and give it a test run. I ordered the Vilros Raspberry Pi 2 Complete Starter Kit off Amazon for around $55 (it’s around $70 now, but the price fluctuates). I would highly recommend going for a starter kit, either the one I got or the even more popular CanaKit, as these include everything you’ll need, including a Wi-Fi adapter, case, HDMI cable, heat sinks, storage, power adapter, etc. I also ordered a couple of additional microSD cards. I wanted to have different OS builds on each of the cards, making it easy for me to showcase different solutions by just switching the microSD cards on the Pi.

I looked at ThinLinx, Raspbian Jessie, and the Raspberry Pi Thin Client Project as potential options, but decided to start with ThinLinx and Raspbian Jessie. Before you get started, I highly recommend you read this blog by Eric on running Citrix workloads on ThinLinx and this blog by Martin Rowan on configuring and optimizing Citrix Receiver on Raspbian Jessie.

Approach 1: ThinLinx

Let’s start with the ThinLinx build. ThinLinx OS (TLXOS) helps make effective thin clients out of old PCs, the Intel Compute Stick, the Intel NUC, and the Raspberry Pi. TLXOS supports various protocols including Citrix HDX, RemoteFX 8.1, and RDP. Intel showcased their NUC devices running ThinLinx at Citrix Summit this year; check out the video. In addition, Rachel Berry wrote an excellent blog about how Citrix leveraged Intel NUCs running ThinLinx for our demos and labs at Citrix Synergy 2015.

The process is as follows:

  • Go to this website and download the TLXOS Installer for Raspberry Pi.
  • Connect your microSD card to your PC and run the TLXOS installer. This will format the card and copy the TLXOS image onto it.
  • From the same website mentioned above, download the ThinLinx Management Software (TMS) and install it on a Windows test machine. It is fairly lightweight software and can run on a VM as well.
  • Insert the microSD card with TLXOS into the Raspberry Pi and start it up.
  • Run the TMS app on your PC, which will detect the Pi running TLXOS. You can then configure the Pi through the management software.
  • In my case, I used TMS to make sure Citrix HDX is selected under the “Protocol” section. You could also choose “Web” and run in kiosk mode if you’d like users to connect in that manner. You can also specify a name for the device, upgrade software on the device, push SSL certs (required if your backend resources are running internal certs), etc.
  • On the Pi, specify the native Receiver URL. You will then be prompted for your credentials. Once that’s set, you are good to go! You should see your apps and desktops, which you can then launch.

Video showcasing Citrix on a Raspberry Pi 2 running TLXOS

My Thoughts on the ThinLinx Option

ThinLinx adds about $10 to the cost of the solution, bringing it to $69 in my case. However, that is still a lot cheaper than your mainstream thin clients. In addition, you get complete management capabilities, which is absolutely necessary in an enterprise environment. TLXOS was extremely easy to get going, and the functionality was superb both for regular compute and for multimedia. The Citrix HDX protocol on TLXOS supports H.264 decode up to 30 fps at 1080p resolution. There was no tinkering to get Receiver to work; it just worked! I did notice some artifacts with the mouse cursor (as you might notice in the video), but not all the time. Overall, I was very pleased with the simplicity of the solution and the overall performance of Citrix workloads on TLXOS.

Approach 2: Raspbian Jessie

Raspbian OS is based on Debian Linux; Jessie is the current version. There are two images available for the Pi – a full desktop image and a minimal image. I went with the full image for my tests. The Raspbian Jessie solution that I tested was unmanaged, unlike ThinLinx, so I had to install the OS, install Receiver, and tweak parameters to optimize performance. Nonetheless, the end result was a great performing thin client. I followed Martin Rowan’s blog for the various tweaks. I will outline them once again here, but want to call out that the tweaks come from his blog. So here are the steps:

  • Download the Raspbian Jessie full desktop image from this link.
  • Download Win32DiskImager and install on your system
  • Extract the Raspbian Jessie Image from the zip file
  • Connect your micro SD card to your PC
  • Run Win32DiskImager and use the extracted image as your source and the microSD card as your destination. This will format the card and copy the Raspbian Jessie image onto it.
  • At this point, remove the SD card from your PC and plug it into the Pi and boot the Pi.
  • Run the following optimization commands in Raspbian Jessie. Once again, read Martin’s blog for more details.
    • Expand Filesystem
      • Run sudo raspi-config and select option “1 Expand Filesystem“. Reboot the Pi.
    • Wait for Network at Boot
      • Run sudo raspi-config and select option “4 Wait for Network at Boot“, then select “Slow Wait for network connection before completing boot“.
  • Install Citrix Receiver for ARM
    • Download Citrix Receiver for ARM (ARMHF) from the following link (under Debian packages)
    • Also download the USB Support package (ARMHF)
    • Install the Receiver: sudo gdebi icaclient_13.2.0.322243_armhf.deb
    • Install the USB Support package: sudo gdebi ctxusb_2.5.322243_armhf.deb
    • Further Optimizations (Optional)
      • Increase Frame Buffer – Section 2.1 in Martin’s blog
      • Switch to using libjpeg62-turbo – Section 2.2 in Martin’s blog
      • Disable H264 Graphics – Section 2.3 in Martin’s blog
      • Disable Multimedia (HDX MediaStream redirection) – Section 2.4 in Martin’s blog
      • Overclock your Pi – Run raspi-config to overclock your Pi and get some additional juice.
    • Start Receiver and specify the URL of your Citrix StoreFront server. At this point, you will be prompted for credentials.
    • Now you will have access to your desktops and apps.
  • I did run into an issue with Audio being routed over HDMI and not the headphone jack. To switch this back to the headphone jack, follow the instructions here
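For the overclock step above, raspi-config just writes settings to /boot/config.txt. A fragment like the following corresponds to the Pi 2 preset, to the best of my knowledge (values shown as an example; verify stability on your own device before relying on it):

```shell
# /boot/config.txt fragment – example overclock for a Raspberry Pi 2
# (believed to match the raspi-config "Pi2" preset; test stability first)
arm_freq=1000      # CPU clock in MHz (default is 900)
core_freq=500      # GPU core clock in MHz
sdram_freq=500     # SDRAM clock in MHz
over_voltage=2     # small overvolt to keep the higher clocks stable
```

Avoid combining force_turbo with overvoltage settings, as that can set the warranty bit on the Pi.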

Video showcasing Citrix on a Raspberry Pi 2 running Raspbian Jessie

Thoughts on Raspbian Jessie

My experience so far with Raspbian Jessie has been good – a little more tweaking and hacking compared to ThinLinx, which worked out of the box, but you get to install the latest Receiver. General performance for productivity apps was great and on par with ThinLinx, and boot was a lot faster than ThinLinx (<10 seconds).

Final thoughts based on testing so far

Is the Raspberry Pi a good solution for all use cases at the moment? Probably not. Does it fit a majority of the use cases? I would say so based on the testing so far. There are definitely some gaps, like the lack of a power button (hopefully addressed in the Raspberry Pi 3) and multi-monitor support, to name a couple. Another major requirement for most organizations out there is unified communications, and in most cases that means Skype for Business. Citrix has excelled in supporting Lync and now Skype for Business in a virtualized environment while offering a native-like user experience, with out-of-band peer-to-peer communication as far as voice and video traffic goes. Watch this video, which compares the native and optimized user experiences side by side. One of the pieces that makes this possible is the Real Time Media Engine (RTME), which is installed on the client. Today, there is no RTME client for the ARM processor. You can still support Skype, but all the processing will occur on the backend servers. I am sure an ARM-based RTME client is on Citrix’s list of good-to-haves, and it’s probably just a matter of time, especially with the rapid popularity of ARM-based devices like the Pi and Intel Compute Sticks. I’m hoping my friend and fellow Citrite Scott Lane will work some magic to make this happen 🙂 Read this blog by Chris Fleck on why he believes the Raspberry Pi could totally disrupt the PC industry. I tend to agree with Chris.

Whats Next?

I will soon be testing the Raspberry Pi Thin Client Project, specifically the 1.99 release, which has Citrix Receiver 13.3 bundled in. I hope to have a follow-up blog on this. On the fun side, I plan to build an arcade machine for my kids based on the Pi and perhaps even a media center, although I really love my Roku 🙂 Check out some of the fun projects out there based on the Pi. As always, I look forward to everyone’s feedback, and do comment if you have ideas for future blog topics.

More soon..

George