Hummingbird is the UC Santa Cruz campus open-access computational cluster. It has a number of preinstalled software packages used in the sciences and engineering, and it can also be used for applications in the Social Sciences, Humanities, and Arts. Many users compile software themselves in their home directories, which is useful for those who want to run a specialized environment. See below for more information on the upcoming cluster rebuild.
Currently the cluster consists of:
- 140 Intel cores: 1 node with 44 cores/256 GB and 4 nodes with 24 cores/128 GB each (GB = gigabytes)
- 288 AMD cores: 2 nodes with 48 cores/192 GB and 3 nodes with 64 cores/256 GB each
- A ZFS storage backend of approximately 90 TB for home directories
- The Sun Grid Engine batch scheduler
- 10 Gbps network interfaces connecting all compute nodes (Gbps = gigabits per second)
- A 400 GB scratch disk on each Intel node, mounted at /scratch
- Detailed information is located here.
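As an illustration of how jobs are submitted under the Sun Grid Engine scheduler, a minimal job script might look like the following sketch; the job name, slot count, and memory request are illustrative values, not Hummingbird-specific defaults:

```shell
#!/bin/bash
# Minimal SGE job script sketch (values are illustrative; check the
# cluster documentation for actual queue names and resource limits).
#$ -N example_job          # job name
#$ -cwd                    # run from the submission directory
#$ -pe smp 4               # request 4 slots in a shared-memory environment
#$ -l h_vmem=4G            # request 4 GB of virtual memory per slot

# NSLOTS is set by SGE at run time; default to 1 for local testing.
msg="Running on $(hostname) with ${NSLOTS:-1} slot(s)"
echo "$msg"
```

A script like this would be submitted with `qsub example_job.sh` and monitored with `qstat`.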
Hummingbird is a growing campus resource: you can add hardware to it and increase the available compute cycles. The cluster can be utilized for many different applications, from single-CPU jobs to multi-CPU workloads. By not buying a complete computing solution (a workstation or small cluster), you can save money, and you also save time and further expense by not having to administer, upgrade, or repair your own equipment. If you are interested in learning more, contact us!
We are going to rebuild the Hummingbird cluster in summer 2017.
We plan to add 10 additional Intel nodes, each with 2 x 12 cores and 128 GB of RAM, bringing the stated total to 460 Intel cores. The AMD nodes will be used for instructional purposes such as BSOE Genomics courses.
We are investigating adding a node with GPU capabilities; if added, it would be available in August 2017.
The new cluster environment will be built on CentOS 7 using the OpenHPC cluster environment packages. OpenHPC will introduce common scientific libraries and Environment Modules to streamline the use of applications and software. OpenHPC uses the SLURM batch scheduler, a job management system more in line with those at many other high-performance computing centers.
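Under SLURM with Environment Modules, an equivalent job script uses `#SBATCH` directives and `module load` lines instead. The resource values below are placeholders, and the commented `module load` line assumes a module name that had not yet been finalized for the new environment:

```shell
#!/bin/bash
# Sketch of a SLURM job script for the rebuilt cluster (placeholder values).
#SBATCH --job-name=example     # job name
#SBATCH --ntasks=1             # one task
#SBATCH --cpus-per-task=4      # four cores for that task
#SBATCH --mem=16G              # total memory for the job
#SBATCH --time=01:00:00        # one-hour wall-clock limit

# Load software through Environment Modules (module name is hypothetical):
# module load gcc

# SLURM_JOB_ID is set by SLURM at run time; default for local testing.
result="SLURM job ${SLURM_JOB_ID:-local-test} on $(hostname)"
echo "$result"
```

Such a script would be submitted with `sbatch`, and `squeue -u $USER` would list its queue status.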
There will also be a baseline of software packages that are in general use across disciplines. We will canvass the cluster user community to ask what software they would like to see installed. We will do our best to accommodate requests, but we may not be able to fulfill all of them. Keep in mind that if you would like licensed software installed, it can be handled in a few different ways: install it in your own home directory, obtain a site-wide license, or restrict use of the software to a designated group.
There is also the opportunity for those on campus considering the purchase of an individual cluster to consider two options:
- Add hardware to Hummingbird and gain even more access to CPU cycles.
- Place your cluster in the Data Center and have the Hummingbird head node manage it.
Both of these arrangements would save financial and human resources while giving you access to even more CPU cycles and backed-up storage, both static and scratch.
The cluster will have access to the Science DMZ, a fast 100 Gbps network connecting major research universities and labs. The Science DMZ has been operational for some time and is slowly being expanded.
A parallel file system can be utilized to increase data transfer speeds.
Questions, Comments, Donations? email@example.com