From the point of view of the College of Natural Sciences, the network infrastructure that connects our buildings together, and provides the "on-ramp" to the internet, is critical to providing effective support for teaching and research. The College and its departments were early leaders in making difficult investments to build and provision network resources — and to hire dedicated staff to support them. Even before there *was* an OIT, we were pulling network cable and linking computers together. In several older buildings, we still maintain our own cable plant, which we've upgraded from thick wire to thin wire to twisted pair -- from 10 megabit shared to 100 meg to gigabit full duplex. And we've run our own fiber backbone at higher speeds in a number of places to alleviate bottlenecks.
Bottlenecks happen when more data arrives at a network segment than the segment can carry. When that happens, the network becomes unstable: packets are lost, time out, and have to be retransmitted. It isn't just that the network slows down; connections get dropped altogether and fail.
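To make that failure mode concrete, here's a toy simulation. It is not a model of our actual network; the capacity, buffer size, and offered load are invented for illustration. It just shows what happens when a segment is asked to carry half again as much traffic as it can forward:

```python
# A minimal sketch of an overloaded network segment: a link with fixed
# capacity and a finite buffer simply cannot keep up when the offered
# load exceeds what it can carry. All numbers are illustrative.

from collections import deque

LINK_CAPACITY = 10      # packets the segment can forward per tick (assumed)
BUFFER_SIZE = 50        # packets the switch can queue before dropping (assumed)
OFFERED_LOAD = 15       # packets arriving per tick, 150% of capacity

queue = deque()
delivered = dropped = 0

for tick in range(1000):
    # New packets arrive; anything that doesn't fit in the buffer is lost.
    for _ in range(OFFERED_LOAD):
        if len(queue) < BUFFER_SIZE:
            queue.append(tick)
        else:
            dropped += 1
    # The link forwards as many queued packets as its capacity allows.
    for _ in range(min(LINK_CAPACITY, len(queue))):
        queue.popleft()
        delivered += 1

print(f"delivered: {delivered}, dropped: {dropped}")
print(f"loss rate: {dropped / (delivered + dropped):.0%}")
```

Running it shows roughly a third of the packets being dropped, and that is exactly the kind of sustained loss that makes connections time out and fail rather than merely run slowly.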
As new buildings like the ISB and the LSL have come on-line, we've been working to forge a partnership with OIT. That partnership would let us contribute to the design of the building infrastructure and use it effectively to support our research and teaching laboratories, so that our students and faculty can be productive and reach the services they need.
However, our speedy internal networks and shiny new buildings are linked by a campus core infrastructure that is increasingly showing its age. When our servers and clients were all in the same building, this was less of a concern. But as our departments spread across multiple buildings, the interconnections become increasingly critical. Here are a couple of examples:
Currently, to support teaching lab computing in both Morrill and the ISB, we maintain separate servers in each building and replicate 311 gigabytes of lab computer software images between them. That way, we only synchronize data between buildings when necessary, and every client computer has a local connection to a server for its nightly updates. It's been an effective workaround, but it can't scale.
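For a sense of scale, here's the back-of-envelope arithmetic. The 311 GB figure is real; the link speeds are hypothetical, as is the rosy assumption that a transfer could saturate the link:

```python
# Rough transfer-time estimates for a full 311 GB replication at
# different link speeds, assuming the link could be fully saturated
# (in practice it can't, so real transfers take longer).

IMAGE_SET_GB = 311
links_mbps = {"100 Mb/s": 100, "1 Gb/s": 1000, "10 Gb/s": 10000}

for name, mbps in links_mbps.items():
    seconds = IMAGE_SET_GB * 8 * 1000 / mbps   # GB -> gigabits -> megabits -> seconds
    print(f"{name}: {seconds / 3600:.1f} hours")
```

Even at a gigabit, and even with nothing else sharing the link, a full replication is the better part of an hour; at 100 megabits it's an overnight job. Keeping a local server in each building is what makes nightly updates workable at all.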
In our Bioimaging class, students using 9 fluorescence microscopes routinely collect 2.8 megabyte images every few seconds for minutes -- or hours -- to study cell growth, division, and other processes. When we were all in one place, we could architect our local infrastructure to provide the support that was needed. But increasingly, we want our students to be able to access and work with their data from anywhere. Try copying one of these "stacks" of images over your wireless connection and watch what happens.
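To put rough numbers on it: the 9 microscopes and 2.8 MB images are real, but the capture interval and session length below are assumptions, since "every few seconds for hours" varies from experiment to experiment:

```python
# Back-of-envelope data rates for the Bioimaging class.
# MICROSCOPES and IMAGE_MB come from the class itself; the capture
# interval and session length are assumptions for illustration.

MICROSCOPES = 9
IMAGE_MB = 2.8
INTERVAL_S = 3          # assume one image every 3 seconds per scope
SESSION_HOURS = 1       # assume a one-hour time-lapse

images_per_scope = SESSION_HOURS * 3600 / INTERVAL_S
stack_gb = images_per_scope * IMAGE_MB / 1000
aggregate_mbps = MICROSCOPES * IMAGE_MB * 8 / INTERVAL_S

print(f"one hour-long stack per scope: ~{stack_gb:.1f} GB")
print(f"sustained load from the whole class: ~{aggregate_mbps:.0f} Mb/s")
```

Under even these modest assumptions, a single hour-long stack runs over 3 GB, and the class as a whole generates a sustained load in the tens of megabits per second: a lot to ask of a shared wireless segment, and more than enough to make that copy grind to a halt.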
We're moving toward an age of "Big Data". Students and faculty with ever faster computers can generate and work with vast quantities of data, but to do that they need to be able to get it, copy it, manipulate it, and move it around in real time -- from anywhere. If UMass Amherst is to remain a destination of choice, our students and faculty have to be able to play in this field, and we support building the network infrastructure that will make that possible.