Vanderbilt University
Institute of Imaging Science

VUIIS Compute Resources

Storage

VUIIS currently has about 66 TB of storage allocated on Distributed Online Research Storage (DORS).

The DORS system uses DataDirect Networks hardware specified and purchased under Jarrod Smith’s NIH S10 grant. DORS storage is accessible to users from their labs via either CNFS (clustered NFS) or CIFS, and to the ACCRE cluster via native GPFS protocols. DORS is a GPFS-based system similar to ACCRE’s /data partition, including nightly backups of DORS to tape managed by CSB and VUIT.
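For illustration, a Linux lab machine could mount a DORS export roughly as follows. The server name, export path, and mount point below are placeholders (the real values come from the DORS documentation), and from the client side a CNFS export behaves like an ordinary NFS mount:

    import subprocess

    # Placeholder endpoint and paths -- the real DORS server name and
    # export path come from the DORS documentation, not from this page.
    DORS_SERVER = "dors.example.vanderbilt.edu"
    DORS_EXPORT = "/dors/vuiis"
    MOUNT_POINT = "/mnt/dors"

    def mount_dors():
        """Mount a DORS export at MOUNT_POINT (requires root on the lab machine).
        A CNFS export looks like an ordinary NFS mount from the client side."""
        subprocess.run(
            ["mount", "-t", "nfs", f"{DORS_SERVER}:{DORS_EXPORT}", MOUNT_POINT],
            check=True,
        )

    if __name__ == "__main__":
        mount_dors()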

VUIIS Storage Table

Date        Total Size (TB)    Total Used (TB)    % Used
5/19/15     65.90              51.19              77.67%
8/14/14     46.21              36.35              78.67%
12/31/13    56.13              41.53              74.00%
12/31/12    35.51              22.49              63.35%

DORS will initially cost no more than $480 per TB per year; pricing is expected to decrease as adoption increases.
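For a sense of scale, the quoted ceiling rate applied to the current allocation in the table above gives an upper bound on annual cost:

    # Back-of-the-envelope ceiling cost using the 5/19/15 row of the table above.
    RATE_PER_TB_YEAR = 480.00   # quoted ceiling: no more than $480 per TB per year
    ALLOCATED_TB = 65.90        # total DORS allocation from the storage table

    annual_ceiling = RATE_PER_TB_YEAR * ALLOCATED_TB
    print(f"Annual ceiling cost for {ALLOCATED_TB:.2f} TB: ${annual_ceiling:,.2f}")
    # Prints: Annual ceiling cost for 65.90 TB: $31,632.00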

VUIIS Computer Nodes

Computer Nodes on VUIIS Wiki (Campus Access Only)

vcn0, vcn1, and vcn2 - Dual Quad-Core Xeon 2.5GHz, NVIDIA GeForce 9800 GTX, 12GB of RAM

vcn3 and vcn4 - Dual 6-Core Xeon E5-2630v2 2.6GHz, NVIDIA GeForce GTX 750 Ti 2GB, 64GB of RAM

VUIIS Computer Lab

Computer Lab on VUIIS Wiki (Campus Access Only)

VUIIS-59NQYD1

Dell XPS 720 (Mid 2007), 3.0GHz Intel Core2 Duo E6850, 8GB RAM

Windows 7 Enterprise 64-bit SP1


CCMM1, CCMM2, CCMM3 (Deployed February 2015)

Apple Mac mini (Mid 2011), 2.0GHz Quad-Core Intel Core i7, 16GB RAM, dual boot via Boot Camp

OS X 10.10.x Yosemite

ACCRE

ACCRE Website

Acknowledging ACCRE in Publications

When possible, we would greatly appreciate an acknowledgement of ACCRE in presentations and publications. Here is our suggested text: “This work was conducted in part using the resources of the Advanced Computing Center for Research and Education at Vanderbilt University, Nashville, TN.” We also attempt to maintain on this website an up-to-date list of journal articles and conference proceedings that result from research using ACCRE.

Grant Assistance and Letters of Support

We are glad to assist you with the development of proposals that include the use of ACCRE resources. We will also provide a letter in support of the portion of your research and educational programs that uses the facility. Please do not hesitate to contact ACCRE Administration for any such assistance, including budgetary concerns.

Summary Describing the ACCRE Facility

For your convenience, a general description of ACCRE facilities is available below. Please feel free to use any or all of the included text. For more technical detail, see the services pages and FAQ.

Vanderbilt University Advanced Computing Center for Research and Education

The Vanderbilt University Advanced Computing Center for Research and Education (ACCRE) is a researcher-driven collaboratory, operated by and for Vanderbilt faculty. ACCRE was initially funded by an $8.3M seed grant from Vanderbilt, with the expectation that the center would become largely self-supporting via contributions from individual Vanderbilt researchers. More than 600 researchers from over 30 campus departments and 5 schools have used ACCRE in their research and education programs. The computer cluster at ACCRE currently consists of over 6,000 processor cores and is growing. Funding for all hardware has come from external grants or startup funds contributed by collaboratory faculty. In addition to hardware, ACCRE has a staff of 10 support personnel who maintain and operate the cluster and provide user education and outreach.

The ACCRE High Performance Computing cluster has over 6,000 processor cores and is growing. Each node has between 24 and 256 GB of memory; all compute nodes run a 64-bit Linux OS and have a 250 GB – 1 TB hard drive and dual copper gigabit Ethernet ports. Forty-eight compute nodes are each equipped with four NVIDIA GeForce GTX 480 GPU cards. Nodes are monitored via Nagios. Resource management, job scheduling, and usage tracking are handled by SLURM, an integrated scheduling system. These utilities include an “advance reservation” feature that allows a block of nodes to be reserved for pre-specified periods of time (e.g., a class or lab session) for educational or research purposes.
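As a sketch of how work is submitted to SLURM, the batch script below requests illustrative (not ACCRE-specific) resources; because sbatch reads its #SBATCH directives from comment lines regardless of the script’s interpreter, the job body itself can be Python:

    #!/usr/bin/env python
    #SBATCH --job-name=example_job    # illustrative name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --mem=4G                  # request 4 GB; node memory ranges 24-256 GB
    #SBATCH --time=01:00:00           # one hour of wall-clock time
    #
    # Submit with:  sbatch example_job.py
    # The resource values above are placeholders, not ACCRE-specific limits;
    # consult ACCRE documentation for the partitions and limits that apply
    # to your group.

    import platform

    print(f"Running on node: {platform.node()}")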

IBM’s General Parallel File System (GPFS) is used for user home directories, data directories, and scratch space. The ACCRE filesystem provides over 500 TB of usable disk space and can sustain more than 130 Gb/s of I/O bandwidth to the cluster. All user home directories are backed up daily to tape. The disk arrays are attached to a SAN fabric along with the storage nodes, which export the file system to the rest of the cluster using a fully redundant design with no single point of failure.

The daily operation and maintenance of ACCRE is handled by ten support personnel, including eight system administrators, programmers, and researchers with a combined total of more than 60 years of computing experience. Support for system services is provided on a 24/7/365 basis for urgent issues, with pager-based on-call support on nights and weekends. Cluster uptime has exceeded 95% over the past three years. An online support ticket system is used to track and resolve problems and user questions. ACCRE staff are responsible for maintaining core system hardware and software, networking, user support, tape backup services, disk storage, logistical storage development work, education, and management/finance support.

ACCRE implements numerous security measures in order to maintain the privacy of user data. For example, firewalls filter cluster access from external hosts, passwords authenticate user login, and file permissions control data access on the user and group levels. File encryption software is available on the cluster and may be applied by users on a file-by-file basis for added security.
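The specific encryption package installed on the cluster is not named here, so as a generic illustration of file-by-file encryption, the sketch below uses the third-party Python cryptography library (an assumption, not necessarily the tool ACCRE provides):

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def encrypt_file(path, key):
        """Encrypt a single file, writing ciphertext to <path>.enc."""
        with open(path, "rb") as f:
            ciphertext = Fernet(key).encrypt(f.read())
        with open(path + ".enc", "wb") as f:
            f.write(ciphertext)

    key = Fernet.generate_key()         # keep this key safe; without it the data is unrecoverable
    encrypt_file("scan_data.raw", key)  # hypothetical file name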

Additional security features are enabled for data falling under International Traffic in Arms Regulation (ITAR), Export Administration Regulations (EAR), Protected Health Information (PHI), Research-related Health Information (RHI), or proprietary control. The Principal Investigator (PI) of each group completes a Disclosure for his/her research group specifying whether the group will work with ITAR, EAR, PHI, RHI, or proprietary data; new group users require explicit approval from the PI. This policy ensures that the PI controls user access to any restricted files and can educate users on the group’s internal security precautions. Groups working with ITAR, EAR, PHI, RHI, or proprietary data are noted with a special designation to differentiate them to cluster administrators, and permissions on these data types are limited to group readable and writeable only. If users inadvertently leave restricted files as world readable or writeable, a script changes the permissions to ensure that only members of the appropriate group are able to read or write to these files.
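A minimal sketch of what such a permissions-cleanup pass could look like (illustrative only, assuming a POSIX filesystem; this is not ACCRE’s actual script):

    import os
    import stat

    WORLD_BITS = stat.S_IROTH | stat.S_IWOTH   # world-readable and world-writable bits

    def strip_world_access(top):
        """Walk a restricted group's tree and remove world read/write permission
        from anything that has it, leaving user and group bits untouched."""
        for dirpath, dirnames, filenames in os.walk(top):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                if mode & WORLD_BITS:
                    os.chmod(path, mode & ~WORLD_BITS)

    strip_world_access("/data/restricted_group")   # hypothetical group directory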

With the exception of users in ITAR or EAR groups, users may mount their ACCRE files on a local computer via Samba, which requires authentication. NFS mounting is not allowed, since that method requires no user authentication.
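As an illustration, a user on a local Linux machine might mount their ACCRE files over CIFS/Samba roughly as follows. The share name and VUnet ID are placeholders; mount.cifs takes the password from the PASSWD environment variable, so the mount is always authenticated:

    import getpass
    import os
    import subprocess

    # Placeholder share name -- not ACCRE's actual Samba endpoint.
    ACCRE_SHARE = "//samba.accre.example.edu/myvunetid"
    MOUNT_POINT = "/mnt/accre"

    def mount_accre(username):
        """Mount ACCRE files over CIFS/Samba (requires root locally).
        mount.cifs reads the password from the PASSWD environment variable,
        so the mount is authenticated -- the property that makes Samba
        acceptable where plain NFS is not."""
        env = dict(os.environ, PASSWD=getpass.getpass(f"ACCRE password for {username}: "))
        subprocess.run(
            ["mount", "-t", "cifs", ACCRE_SHARE, MOUNT_POINT,
             "-o", f"username={username}"],
            env=env,
            check=True,
        )

    mount_accre("myvunetid")  # hypothetical VUnet ID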