Technical Grant Boilerplate

The University of Texas System of academic and health institutions is one of the premier open science research organizations in the world. Its nine academic institutions include the flagship UT Austin campus and several growing universities with the potential to reach R1 status. Its six health institutions have earned national reputations for leadership in areas of biomedical research ranging from cancer to infectious diseases. The collective research expenditures of the UT System institutions exceed two billion dollars per year, with significant funding from every major federal funding agency including NIH and NSF.

UT System is committed to the preservation and growth of research leadership at its institutions, and its laboratories depend on ready access to powerful, comprehensive IT resources. The advancement of scientific research is increasingly enabled by computing technologies ranging in type and scale from laptops and desktops to supercomputers and clouds, including storage, visualization technologies, networks, and scientific software. In the past decade, the explosion of digital data produced by more powerful computers and by increasingly powerful scientific instruments, such as high-speed video microscopes, sensor networks, DNA sequencers, and MRI systems, has driven a corresponding explosion in informatics and analytics-based computational research. Biological and biomedical research in particular has benefited from this proliferation of data, more powerful computing and larger storage systems, and the development of new techniques and software for data-driven computational research.

UT System institutions have significant advantages in this highly competitive research environment. UT System's support for research infrastructure benefits all fifteen System institutions; the Texas Advanced Computing Center (TACC) at UT Austin is an internationally recognized advanced computing center that provides a competitive advantage to every UT System institution; and a strong culture of collaboration among the fifteen institutions leverages their research, financial, and technical resources to maximize available funding and to continue developing a scientifically powerful research IT infrastructure that benefits all UT institutions.

The University of Texas Research Cyberinfrastructure (UTRC) project originated as a strategic investment on the part of the UT System Board of Regents to support high performance computing (HPC), to enhance network throughput, and to provide data storage that will advance biomedical research across the institutions of the System.  In November 2010, the Board of Regents approved $23 million for the UTRC.  Specifically, monies were invested to:

  1. Increase the speed of inter-campus networking to 10 Gbps – This provides very high bandwidth from researchers’ workstations and servers to the TACC HPC resources and the UTRC storage systems, as well as to research colleagues around the world. The UTRC network is a dedicated 10 Gbps fabric built on Juniper carrier-class MX routers, from MX80s to MX960s, with dedicated Modular Port Concentrators and a minimum of 80 Gbps of capacity in the Switch Control Boards and Routing Engines. The core MX960s are completely non-blocking to all traffic streams. The Multiprotocol Label Switching (MPLS) network fabric supports traffic-engineered IPv4 and IPv6 datagrams and Layer 2 Ethernet traffic. The last-mile metro topology is built with a minimum of 40 waves of 50 GHz on 100 GHz spacing in Dense Wave Division Multiplexing (DWDM) optically protected transports, with Forward Error Correction to improve reliability.
  2. Provide additional high performance computing capacity and staffing support - TACC's "Lonestar" Dell Linux cluster is a powerful, multi-use HPC and remote visualization resource with a theoretical peak performance of 309 TFLOPS and a total memory of 45 TB. It contains 23,232 cores within 1,936 Dell PowerEdge M610 compute blades (nodes), 24 PowerEdge R610 compute-I/O server nodes, and two PowerEdge R710 login nodes. The system storage includes a 2.4 PB parallel Lustre file system and 276 TB of local compute-node disk space. Lonestar also provides access to 14 large-memory (1 TB) nodes and 16 nodes containing two NVIDIA GPUs each, giving users access to high-throughput computing and remote visualization capabilities, respectively.
  3. Create shared research data storage for UT System Principal Investigators - The UT Research Data Repository, known as "Corral", consists of two 5 PB installations, with geographic replication of data between data centers located in Austin and Arlington. Each installation consists of two DataDirect SFA10000K storage controllers, 1,800 3 TB SATA hard drives, and 300 600 GB SAS hard drives, connected via QDR InfiniBand to eight Dell R710 servers, each of which is connected to upstream networks via 10 Gigabit Ethernet. Corral provides a variety of data storage, access, and management mechanisms, including integration with high-performance computation, metadata creation and search, and web-based access for data sharing. With a peak I/O capability of 20 gigabytes per second and multiple 10 Gb connections into the main installation in Austin, it can simultaneously provide high-performance access to a large number of users across UT System, within the Texas Advanced Computing Center, and via external networks including Internet2.
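The headline figures in the three items above are internally consistent, which a little back-of-envelope arithmetic confirms. The sketch below derives an ideal transfer time over the 10 Gbps network, Lonestar's per-core peak performance, and Corral's raw drive capacity from the numbers quoted; the decimal byte units and the assumption of an overhead-free link are simplifications, not part of the source specifications.

```python
# Back-of-envelope checks on the UTRC figures quoted above.
# Assumptions: decimal units (1 TB = 1e12 bytes) and ideal links
# with no protocol overhead.

GBPS = 1e9  # bits per second in one gigabit per second

def transfer_time_seconds(bytes_to_move, link_gbps=10):
    """Ideal transfer time for a payload over a link of the given speed."""
    return bytes_to_move * 8 / (link_gbps * GBPS)

# Moving 1 TB over the dedicated 10 Gbps inter-campus network:
print(f"1 TB at 10 Gbps: {transfer_time_seconds(1e12) / 60:.1f} min")
# -> about 13.3 minutes, best case

# Lonestar: 309 TFLOPS spread across 23,232 cores
per_core_gflops = 309e12 / 23_232 / 1e9
print(f"Per-core peak: {per_core_gflops:.1f} GFLOPS")
# -> ~13.3 GFLOPS per core

# Corral: raw drive capacity per installation, before RAID and
# filesystem overhead, versus the 5 PB usable figure quoted
raw_tb = 1_800 * 3 + 300 * 0.6
print(f"Raw capacity: {raw_tb / 1000:.2f} PB raw vs. 5 PB usable")
# -> 5.58 PB raw, leaving headroom for redundancy overhead
```

The gap between raw and usable storage is expected: parity or replication within each installation consumes part of the raw drive capacity before the 5 PB usable figure is reached.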