Hummingbird is the UC Santa Cruz campus open-access computational cluster. It has a number of preinstalled software packages used in the sciences and engineering, and it can also be used for applications in the Social Sciences, Humanities, and Arts. Many users compile software themselves in their home directories; this is useful for those who want to run a specialized environment. See below for more information on the upcoming cluster rebuild.
Currently the cluster consists of:
- 404 Intel cores – 1 node with 44 cores/256 GB and 15 nodes with 24 cores/128 GB (GB = gigabytes)
- 288 AMD cores – 2 nodes with 48 cores/192 GB and 3 nodes with 64 cores/256 GB
- 1 GPU node with 24 cores/96 GB RAM and 4 Tesla P100 GPUs
- A ZFS storage back end of approx. 90 TB for the home directories
- SLURM batch management (4 partitions: Instruction with 5 nodes, 15 nodes with 24 cores/128 GB each, 1 node with 4 GPUs, and 1 node with 44 cores/256 GB)
- There is a maximum of 3 nodes per single job.
- Modules software environment
- All compute nodes are connected by 10 Gbps interfaces (Gbps = gigabits per second)
- Detailed information is located here.
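As a sketch of how work is submitted under this SLURM setup, a minimal batch script might look like the following. The partition name (`128x24`) and module name (`gcc`) are assumptions for illustration only; check `sinfo` and `module avail` on the cluster for the actual names.

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=128x24      # hypothetical partition name; run `sinfo` for real ones
#SBATCH --nodes=2               # must not exceed the 3-node-per-job limit
#SBATCH --ntasks-per-node=24    # one task per core on the 24-core Intel nodes
#SBATCH --time=01:00:00
#SBATCH --output=example-%j.out

module load gcc                 # illustrative module; see `module avail`
srun hostname                   # run one task per allocated core
```

The script would be submitted with `sbatch example.sh`, and `squeue -u $USER` shows its place in the queue.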
Hummingbird is a growing campus resource: more hardware can be added to it, increasing the compute cycles available. The cluster can be used for many different applications, from a single CPU to many CPUs. By not buying a complete computing solution (workstation or small cluster), you can save money, and you save further time and money by not having to administer, upgrade, and/or repair your own equipment. If you are interested in learning more, contact us!
We have 15 Intel nodes with 2 × 12 cores/128 GB RAM each and 1 node with 2 × 22 cores/256 GB RAM, giving a total of 404 Intel cores. The AMD nodes total 288 cores and will be utilized for instructional use, such as BSOE Genomics courses.
The cluster environment is built on CentOS 7 using the OpenHPC cluster environment packages. OpenHPC provides common scientific libraries and uses Environment Modules to streamline access to applications and software. OpenHPC uses the SLURM batch scheduler, a job management system in line with those at many other high-performance computing centers.
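In practice, Environment Modules means software is loaded into your shell on demand rather than being on your `PATH` by default. A typical session might look like the following; the module name `python` is illustrative, since the actual catalog depends on what is installed on the cluster.

```shell
# List the software modules available on this cluster
module avail

# Load a module into the current shell environment (name is illustrative)
module load python

# Show which modules are currently loaded
module list

# Unload a module, or clear everything loaded so far
module unload python
module purge
```

Loads are per-shell, so batch jobs must repeat their `module load` lines inside the job script.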
There is also a baseline of software packages in general use across disciplines. We will canvass the cluster user community to ask what software they would like to see installed. We will do our best to accommodate requests, but we may not be able to fulfill them all. Keep in mind that if you wish to have licensed software installed, it can be handled in a few different ways: install it in your own home directory, obtain a site-wide license, or have us restrict use of the software to a designated group.
There is also the opportunity for those on campus considering the purchase of an individual cluster to consider two options:
- Add hardware to Hummingbird and gain even more access to CPU cycles.
- Place your cluster in the Data Center and have the Hummingbird head node manage it.
Both of these arrangements would save financial and human resources while giving you access to even more CPU cycles and to backed-up storage, both static and scratch.
The cluster will have access to the Science DMZ – fast 100 Gbps networking connecting major research universities and labs.
Questions, Comments, Donations? Send e-mail to firstname.lastname@example.org.