BEAR Cloud Technology

Under the hood

Each of the systems running BEAR Cloud currently has 20 cores and 128GB of RAM, and we limit the resources available to VMs to 18 cores and 120GB of memory. This ensures that the orchestration of VMs doesn’t interfere with running workloads. We do not over-allocate cores or memory, so you can be sure that the resources provided to you are for your use alone. The colours (or sizes) of available VMs have been tailored so that we can pack the systems sensibly and so that system features like non-uniform memory access (NUMA) work properly: we always allocate cores in pairs and use NUMA pass-through to the operating system running in the VM. Combined with CPU pinning, this limits variability in CPU performance, as workloads are not dynamically moved between cores by the hypervisor.
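The packing constraint above can be sketched in a few lines. The VM sizes below are illustrative assumptions, not BEAR Cloud's actual offerings; the point is that with no over-allocation, sizes chosen in core pairs fill the 18 usable cores and 120GB cleanly:

```python
# Sketch: packing VMs onto a single BEAR Cloud host.
# The host exposes 18 of its 20 cores and 120 of its 128 GB
# to VMs; the VM sizes here are hypothetical, chosen in core
# pairs so NUMA-aware placement stays straightforward.

HOST_CORES = 18    # 20 physical cores, 2 reserved for orchestration
HOST_RAM_GB = 120  # 128 GB fitted, 8 GB reserved

# (cores, ram_gb) per size -- illustrative values only
sizes = {"small": (2, 12), "medium": (4, 26), "large": (8, 53)}

def fits(allocated, size):
    """Check whether a VM of this size fits in the remaining capacity."""
    cores, ram = size
    used_cores = sum(c for c, _ in allocated)
    used_ram = sum(r for _, r in allocated)
    return used_cores + cores <= HOST_CORES and used_ram + ram <= HOST_RAM_GB

# With no over-allocation, one host can hold e.g. two "large"
# VMs plus one "small": 8 + 8 + 2 = 18 cores, 53 + 53 + 12 = 118 GB.
placed = []
for name in ["large", "large", "small"]:
    assert fits(placed, sizes[name])
    placed.append(sizes[name])
```

Because resources are never over-committed, a request that would exceed the per-host limits is simply placed on another host rather than squeezed in.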

BEAR Cloud is designed to integrate with the BEAR environment, and we are able to connect systems directly to our high-speed research network. We use the same fabric technology and have tens of gigabits of connectivity to the research infrastructure. Our research data services are also connected to the same network, meaning we can provide high-speed access to storage services. Where users are willing to give up certain access to VMs (e.g. full root), we are able to provide direct, seamless access to research data storage.

Software-defined networking is fully enabled on BEAR Cloud, and for users who need to define their own network infrastructure inside the cloud, VXLAN networking is enabled. This allows user networks to be defined and fully encapsulated so that they are securely isolated from other users of the system. The Mellanox network cards we use have hardware offload functions that minimise the impact of VXLAN encapsulation, helping to ensure maximum network performance even when using software-defined network topologies.
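A rough illustration of why that offload matters: VXLAN wraps every tenant frame in outer headers, and each packet pays that cost. The figures below are the standard VXLAN-over-IPv4 header sizes, not anything specific to BEAR Cloud:

```python
# Per-packet VXLAN encapsulation overhead (IPv4 underlay):
# outer Ethernet + outer IP + outer UDP + VXLAN header.
OUTER_ETHERNET = 14
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER  # 50 bytes

# A 1500-byte MTU inside the VM therefore needs a 1550-byte
# underlay frame (or a reduced tenant MTU of 1450). Building
# and checksumming these outer headers on every packet is the
# work the NIC's offload engine takes off the host CPU.
tenant_mtu = 1500
underlay_frame = tenant_mtu + overhead  # 1550
```

Without hardware offload, that per-packet encapsulation work lands on the hypervisor's CPUs, which is exactly the contention the card's offload functions avoid.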

High-speed interconnects are important for some workloads (e.g. MPI applications), and we are currently working to pass our 100Gb EDR InfiniBand network directly through into VMs, allowing us to provision isolated HPC systems on the fly whilst benefitting from the underlying high-speed technology. This will be provided using SR-IOV, a hardware function that creates virtual functions on the card; a VM interfaces directly with a virtual function, giving near line-speed access to the device, comparable to running on a physical system.

BEAR Cloud runs on OpenStack, a community-led group of open-source projects designed to run cloud environments. We use the CentOS community releases of OpenStack, which track the major OpenStack releases, and we upgrade annually to enable new features and functionality.