School solution? Remote Desktop Services scalability/requirements

I am investigating the option of setting up Remote Desktop Services for 300-500 concurrent student users (up to 3,000 actual users).
I work at a public school, and we have five 100 Mb connections to our colo (which has 500 Mb) to handle the traffic for each site.
I would like to start converting old PCs/laptops into "dumb terminals" and allow remote sessions to desktops hosted on the server (the server does not exist yet).

My questions are:
How scalable is this solution? (Assume I am running Windows Server 2012 R2 Datacenter with Hyper-V.)

How much horsepower will I need to take care of my users? What are the memory/processor needs per user?
(Most if not all end-user requests will be for web pages, with minimal Office applications sprinkled in; no video editing or large end-user apps.)

What connection speeds do I need between my server (cluster/farm) and my SAN for this to work?

How much bandwidth is required between the client and the Remote Desktop server, per user?
(The most intense data transfer I can think of a user needing is watching YouTube videos.)
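A rough back-of-envelope check of the aggregate (the per-user Mbps figures below are assumptions for illustration, not measured RDP numbers; real remoting bandwidth varies widely with codec and content):

```python
# Back-of-envelope client<->session-host bandwidth estimate.
# Per-user figures are ASSUMED illustrative values, not RDP benchmarks.

PER_USER_MBPS_BROWSING = 0.1   # assumed: light web/Office session
PER_USER_MBPS_VIDEO = 1.5      # assumed: remoted YouTube playback

def aggregate_mbps(users: int, video_share: float) -> float:
    """Total bandwidth if `video_share` of `users` watch video
    and the rest just browse."""
    video_users = users * video_share
    other_users = users - video_users
    return (video_users * PER_USER_MBPS_VIDEO
            + other_users * PER_USER_MBPS_BROWSING)

# 500 concurrent users, 20% watching video: ~190 Mbps, fits a 500 Mb uplink
print(aggregate_mbps(500, 0.2))
# 500 concurrent users, all watching video: ~750 Mbps, exceeds 500 Mb
print(aggregate_mbps(500, 1.0))
```

The takeaway is that video, not the desktop sessions themselves, dominates the link budget, so it is worth deciding early whether YouTube is in or out of scope.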

Regarding storage: is the remote desktop the end user sees duplicated for each actual user, or is it shared in both common disk space and common memory?
(For example, if everyone needs the Office suite installed, will they each need their own unique memory and disk space for each Office session, or are these common files shared?)

I already have a SAN with 6 terabytes. My users will be saving few if any documents to their desktops (we use Google Drive for our document storage).


Some of you out there have already done this, I'm sure; I'm just trying to evaluate the process without "reinventing the wheel".


I do not have the funds for a Citrix environment due to licensing costs; however, I figure a lot of Citrix admins started on the same road I am on before switching to Citrix. What pitfalls might I hit that would push me to upgrade to Citrix at a later point?



For any RDS setup that's not Citrix, disk speed is key. If your SAN is 6 TB but it's 6 TB of slow SATA storage, you're not off to a good start.

Your bandwidth sounds like it would be fine if those are private circuits; it's worthwhile putting in some level of QoS to guarantee performance. For 300-500 users I'd run about 20 session hosts with around 32 GB of RAM each if you want to cater for the future, although 24 GB would probably still be more than enough. I've actually seen this done with 16 GB of RAM, but it started to creak towards the end. If you use Hyper-V Dynamic Memory you could reduce your memory footprint further.
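To sanity-check that host count, a quick sizing sketch (the per-user RAM and OS-overhead figures are assumptions for light web/Office sessions, not benchmarks):

```python
# Session-host count estimate from RAM alone.
# Per-user and overhead figures are ASSUMED, not measured.
import math

RAM_PER_USER_GB = 0.75     # assumed footprint of a light web/Office session
RAM_OS_OVERHEAD_GB = 4     # assumed OS + services overhead per host
HOST_RAM_GB = 32           # the 32 GB hosts discussed above

def hosts_needed(concurrent_users: int) -> int:
    """Minimum session hosts so every concurrent user fits in RAM."""
    users_per_host = (HOST_RAM_GB - RAM_OS_OVERHEAD_GB) / RAM_PER_USER_GB
    return math.ceil(concurrent_users / users_per_host)

print(hosts_needed(500))  # 14 at these assumptions
print(hosts_needed(300))  # 9 at these assumptions
```

At these assumed figures RAM alone calls for about 14 hosts at 500 users; the ~20 quoted above is that plus headroom for CPU contention, maintenance, and host failures.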

With regard to resources: not all applications run in their own memory space; it depends on how well the application is written. If you want to isolate applications from each other, you could use App-V to sequence them.

You could also deduplicate the Windows session hosts, since it's just the same files over and over again. In fact, Microsoft recommends the Server 2012 deduplication feature for VDI.

One thing that may drive you towards Citrix is the manageability; the management tools are better, IMO (I've used both in large environments). However, the key thing in my eyes is Citrix PVS: if disk becomes troublesome, you can stream your servers and run them in memory with disk overflow (write cache), and the Citrix servers are then lightning fast. This works with both XenDesktop and XenApp. If you've already invested in incredibly fast storage, Machine Creation Services is still good :)



What is a good baseline for a session host (processor count, etc.)?



I am on the Citrix road, but there are some points you should consider with an MS-only setup as well. Most baselining assumes office users:
- With students, you have 20-40 users performing the same action simultaneously.
- With 300-500 students you not only need a lot of storage space (300 × 20 GB is 6 TB already), you also have to budget for a really large number of IOPS.
- At my schools we have to reboot up to 100 devices within 5 minutes (many, many IOPS plus CPU usage; this bursts past every baseline).
- Check whether you need System Center VMM if you are thinking about Hyper-V (with Citrix on Hyper-V you need SCVMM).
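The storage math in that list can be sketched the same way (the steady-state and boot IOPS figures below are assumed illustrative values, not measurements):

```python
# Storage-space and IOPS sketch for the student numbers above.
# PROFILE_GB comes from the post; the IOPS figures are ASSUMED.

PROFILE_GB = 20            # per-user footprint quoted in the post
STEADY_IOPS_PER_USER = 10  # assumed steady-state IOPS per session
BOOT_IOPS_PER_VM = 300     # assumed burst IOPS while one device boots

students = 300
space_gb = students * PROFILE_GB                  # 6000 GB = 6 TB, the whole SAN
steady_iops = students * STEADY_IOPS_PER_USER     # 3000 IOPS in steady state
reboot_storm_iops = 100 * BOOT_IOPS_PER_VM        # 100 devices rebooting at once

print(space_gb, steady_iops, reboot_storm_iops)
```

Even with rough numbers, a 100-device reboot storm demands roughly ten times the steady-state IOPS, which is why boot bursts blow past office-worker baselines.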

With Citrix, I use PVS to eliminate the storage-size and IOPS problems.


