The following news about distributed computing is from the Grid
Research, Integration, Deployment and Support Center (GRIDS), part
of the National Science Foundation Middleware Initiative (NMI).
For information or to be taken off
this distribution, please email
1. Hot Off the Grid
GRIDS CENTER SOFTWARE SUITE UPDATED FOR
NEW NSF MIDDLEWARE INITIATIVE RELEASE 3.0. With the
release of NMI-R3 on April 28, 2003, GRIDS has issued its third
on-schedule version of the GRIDS Center Software Suite. The software
serves as a stable foundation on which Grid implementers can build
customized applications for science and engineering. New to the
suite with NMI-R3 are a credential repository called MyProxy, a Grid
tool based on the popular Message Passing Interface standard called
MPICH-G2, and a tool for customizing GRIDS component configurations
called GridConfig. They join existing GRIDS components like
the Globus Toolkit™, Condor-G, Network Weather Service, Grid
Packaging Tools and GSI-OpenSSH. See
GRID MIDDLEWARE AND "CYBERINFRASTRUCTURE."
A blue ribbon panel recently reported to NSF on the emerging
cyberinfrastructure. According to its chair, Dan Atkins of the
University of Michigan (UM), "Grid middleware is a very critical
component. NMI and GRIDS address important needs not just by
providing stable tools, but also by defining processes for the
collaborative development of software for science and engineering."
Atkins said that the panel's 14 months of inquiry showed that prior
ad hoc efforts to develop infrastructure had been in danger of
becoming "balkanized," with many differing research communities
developing independent -- and often incompatible -- solutions to
similar problems of interoperability and resource sharing. "Now we
are at an inflection point," he said, "where the emerging technology
is helping users pull together the whole range of on-line resources
so virtual communities can become real." Atkins is a professor of
information and computer science at UM, and he served also as the
founding dean of the university's School of Information. The panel's
report is at
2. Feature Story
One of the GRIDS Center's target communities is NEES, the George
Brown, Jr., Network for Earthquake Engineering Simulation. Funded by
the National Science Foundation (NSF), NEES is a distributed virtual
laboratory for earthquake experimentation and modeling. Its users
are researchers who seek to design buildings and other structures
that are more resistant to seismic events and disasters in general.
An ambitious aspect of NEES called NEESgrid is a networked
infrastructure that facilitates integration of diverse systems such
as instrumentation (including huge shake tables, centrifuges and
tsunami wave tanks), computational resources and collaborative
environments. Several principal investigators from the NSF
Middleware Initiative (NMI) GRIDS Center are also prominent members
of the NEESgrid team. This overlap is helping to speed up progress
by NEESgrid, which is building its applications on the GRIDS Center
Software Suite (http://www.grids-center.org).
Because NEES and NEESgrid are scheduled to operate through 2014,
they represent a long-term NSF commitment to using the Grid for
earthquake engineering. GRIDS is also partnered with other NSF
investments like the Grid Physics Network (GriPhyN) and TeraGrid to
provide a stable substrate of middleware on which such communities
can build custom applications. Collectively, they form the front
line of "cyberinfrastructure" envisioned in the recent report (http://www.cise.nsf.gov/news/cybr/cybr.htm)
of a blue-ribbon panel that advocates substantial new funding for
NSF to stimulate projects across all science and engineering
disciplines, with activities like NEES, GriPhyN and TeraGrid as
prime examples.
[Photo caption] Three shake tables at the University of Nevada, Reno
are used to investigate how this 40-percent scale model of a concrete
slab-on-steel girder bridge responds to seismic stimuli. (Courtesy of
Gokhan Pekcan, UNR.)
Prior to the 2001 advent of NMI, research communities like NEES
might have struggled to create their own separate IT
infrastructures, with redundant efforts and a lack of standardization.
Through NMI, NSF funded the GRIDS Center to create a more uniform
middleware infrastructure upon which communities can build their own
applications, achieving efficiency and interoperability that
wouldn't otherwise be possible. The GRIDS suite provides NEES with a
long-term, sustainable base for the continued evolution of NEESgrid
systems and software.
Building on GRIDS software, NEESgrid developed telepresence
capabilities to permit remote observation and participation in
experiments. This lets researchers view multiple data or video
streams and interact with colleagues or equipment during real-time
tests at multiple NEES equipment sites. NEES engineers will also
have access to a repository of data from experiments and
simulations, in addition to a simulation software repository.
Gokhan Pekcan is the researcher who worked most closely with the
NEESgrid Systems Integration (SI) team as the initial NEESgrid
software distribution was being developed. An earthquake engineer
in the Department of Civil Engineering at the University of Nevada,
Reno (UNR), he works with Ian Buckle, the university's principal
investigator on NEES. With Oregon State University (OSU) and
Rensselaer Polytechnic Institute (RPI), UNR is an early adopter
among the 15 NEES sites that are "Grid-enabling" their resources.
They have deployed the NEESgrid Software Suite, including GRIDS
components like the Globus Toolkit and Condor-G, as fundamental
infrastructure for data acquisition, analysis and archiving.
"We are using the GRIDS distribution as the base of the NEESgrid
software, and deployment has gone tremendously well," Pekcan said.
"It was difficult at first because we weren't speaking the same
language as the NEESgrid staff. Before NEES, none of the earthquake
engineers was familiar with Grid concepts. But both sides were
determined to communicate well, and that's what has happened."
They began by defining and acquiring the needed hardware and
software components for NEESgrid. UNR's configuration, which Pekcan
said is similar to other NEES sites, has two servers running Red Hat
Linux 7.3, with a third machine running Windows 2000 for data
acquisition. Testing began in earnest with the first NMI and GRIDS
release in mid-2002. Concurrently, UNR was installing its
NEES-funded shake tables, which will eventually be accessible to
remote users who will be able to conduct experiments, acquire data
and interact with colleagues dispersed around the world -- all in
real time.
NEES is already doing Grid-enabled simulations, and its engineers are
working toward real-time remote collaboration via teleobservation,
telepresence, shared data, test visualizations, system
identification, and numerical computations. "We're laying groundwork
for real-time manipulation of shake tables," Pekcan said. "This
progress is relevant to NEES sites with large centrifuges and
tsunami wave basins."
UNR's three shake tables are 14-by-14-foot biaxial platforms with
intricate components pressurized up to 5,000 pounds per square inch.
They have a combined payload capacity of 150 tons to test scale
models of bridges and buildings -- even soil samples -- which are
subjected to forces up to 1G in two directions simultaneously. Each
table may operate independently, in-phase (i.e., with the other two
combined to act as a single unit), or differentially with the other
tables to simulate spatial variation effects of earthquakes.
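As a rough illustration (not NEESgrid or UNR control code, and the amplitude and frequency values are invented), the three operating modes reduce to a choice of phase offsets applied to each table's drive signal:

```python
import math

def table_signal(t, amplitude, freq_hz, phase_rad):
    """Displacement command for one shake table at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase_rad)

def drive_commands(t, amplitude=0.05, freq_hz=2.0, mode="in_phase"):
    """Commands for three tables: identical when in-phase, or
    phase-staggered to approximate spatial variation of ground motion."""
    if mode == "in_phase":
        phases = [0.0, 0.0, 0.0]
    else:  # "differential": stagger each table by a third of a cycle
        phases = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    return [table_signal(t, amplitude, freq_hz, p) for p in phases]
```

In-phase mode yields three identical commands (the tables act as one large platform), while differential mode sends each table a shifted copy of the same waveform.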
A major challenge addressed by NEESgrid is the synchronization of
experimental data and devices. NEES engineers envision having
multiple sites run simultaneous experiments, each dependent on the
other. Such dynamic circumstances mean devices will need to be
synchronized at the millisecond level, which requires an
extraordinarily efficient use of network and computational resources
by the underlying middleware infrastructure.
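The kind of clock-offset estimate such synchronization depends on can be sketched with the classic NTP-style exchange of four timestamps; this is a simplified illustration of the general technique, not the NEESgrid protocol:

```python
def estimate_offset(t1, t2, t3, t4):
    """NTP-style clock-offset estimate between two sites.

    t1: request sent (local clock), t2: request received (remote clock),
    t3: reply sent (remote clock),  t4: reply received (local clock).
    Returns (offset_seconds, round_trip_delay_seconds); the offset is
    exact only when the network delay is symmetric.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

With a 5 ms one-way delay each direction and a remote clock 10 ms ahead, the estimate recovers both quantities.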
UNR has been able to do most of its own Grid troubleshooting,
Pekcan said, even without computer scientists on staff. Their campus
IT support office has helped troubleshoot network problems, and on
rare occasions when the NEES staff get stumped, he said, the UNR
computer science faculty lend a hand.
Now that UNR and other early adopters have done some spade work,
the remaining NEES sites will benefit from the lessons learned.
"There is no doubt we are on target to meet our milestones," Pekcan
said. "Progress is increasingly rapid as the September 2004 date for
a fully operational 15-site NEESgrid approaches."
GRIDS principal investigators on the NEESgrid team are Ian Foster
(University of Chicago and Argonne National Laboratory), Carl
Kesselman (Information Sciences Institute at the University of
Southern California) and Randy Butler (National Center for
Supercomputing Applications at the University of Illinois at
Urbana-Champaign). Besides those institutions, NEESgrid also
includes the University of Michigan and University of Oklahoma.
In addition to UNR, OSU and RPI, the NEES equipment sites will
include Brigham Young University, Cornell University, Lehigh
University, State University of New York (SUNY) at Buffalo,
University of California campuses at Berkeley, Davis, Los Angeles
and San Diego and University of Texas at Austin. See
3. What's Coming Up
Global Grid Forum 8
June 24-27, 2003 Seattle, WA The Global Grid Forum will hold its eighth meeting,
with a theme of "Building Grids -- Obstacles and Opportunities."
GGF8 will update global Grid practitioners, enthusiasts and
researchers on the current state of Grid technology. The program
will include esteemed keynote speakers, technology updates,
application updates, industry updates and special Grid debates.
HPDC-12
June 22-24, 2003 Seattle, WA
The Twelfth IEEE International Symposium on High-Performance
Distributed Computing will be a forum for presenting the latest
research findings on the design and use of highly networked systems
for computing, collaboration, data, analysis, and other innovative
tasks. HPDC provides a global meeting place for those interested
in Grid computing. A joint program of tutorials and keynote talks
will highlight major themes and recent developments in the field.
International School on Grid Computing
July 13-25, 2003
Vico Equense, Italy Several GRIDS leaders are helping to organize the
2003 International School on Grid Computing, co-sponsored by the
Global Grid Forum. The event will provide an in-depth introduction
to Grid technologies and applications. Its curriculum will cover
widely deployed Grid middleware (Globus Toolkit, Condor, Unicore),
along with Grid services and data services. Lectures will focus on
specialized topics such as applications and experiences with
bringing up production Grids. Hands-on laboratory exercises will
give participants practical experience with widely used Grid
middleware. A testbed environment -- connected to major
international science Grids -- will host widely used middleware
produced by projects in the US, the EU, and in Asia Pacific (AP).
Registration ends May 11. See
SC2003: Igniting Innovation
November 15-21, 2003 Phoenix, AZ
The SC conference marks its 15th year with SC2003. Thousands of
high-performance computing and networking experts will see the
latest technological tools, learn about new scientific applications,
and listen to other experts present their most recent research. See
GlobusWORLD 2
January 19-23, 2004
San Francisco, CA
GlobusWORLD 2 will feature three tracks of invited speakers,
lecturers, interactive panels, and forward-looking roundtables on
Grid computing topics related to the Globus Toolkit. It follows the
successful first GlobusWORLD held in January 2003, with over 450
attendees from 25 countries. See
4. More about GRIDS
Part of the NSF Middleware Initiative (NMI), GRIDS is a
partnership of the Information Sciences Institute (ISI) at the
University of Southern California, the University of Chicago, the
National Center for Supercomputing Applications (NCSA) at the
University of Illinois at Urbana-Champaign, the San Diego
Supercomputer Center (SDSC) at University of California at San
Diego, and the University of Wisconsin at Madison. For more
information, see http://www.grids-center.org. To subscribe for GRIDS
updates, send mail to firstname.lastname@example.org with a message body
of "subscribe news" (without quotes). You will receive a
confirmation message with simple instructions on how to authenticate
your subscription.
1. Hot Off the Grid
GRIDS CENTER SOFTWARE SUITE
ADDS COMPONENTS FOR NMI-R2.
New software tools have been added to
the GRIDS suite in NMI-R2, the second release from the National
Science Foundation Middleware Initiative (NMI), which was issued on
October 25. GSI-OpenSSH is a Grid security-enabled version of
OpenSSH, the popular communications tool. GPT (Grid Packaging
Tool) is used to bundle GRIDS components, and GridConfig is used to
configure and fine-tune them. The latest GRIDS Center Software
Suite includes new versions of the Globus Toolkit™, Condor-G and
Network Weather Service. Watch for new GRIDS releases as part
of NMI every October and April. See
to download the software. An NSF press release is at
GLOBUS WORLD IN JANUARY 2003 WILL GIVE USERS A
VIEW "UNDER THE HOOD" OF GRID COMPUTING.
The inaugural GlobusWORLD will be held in San Diego, January
13-17. It will feature three tracks of invited talks,
interactive panels, and roundtables, with presenters including
principal GRIDS Center participants. Three tracks (Enterprise
Planning for Grids, Architecting Grids with Globus Toolkit,
Developing & Administering Globus Toolkit) will offer strategic
perspectives to facilitate discussions for enterprise planning and
executive decision-making. See
2. Feature Story
“KERBERIZED” GRID COMPUTING.
An important NMI goal is the integration of Grid research
environments with the campus enterprise. One example is KX.509, a
client-side tool that extends the widely-used Kerberos campus
authentication mechanism for use in Grids.
KX.509 has been packaged with the GRIDS Center Software Suite in
both NMI releases (NMI-R1 and -R2). It was developed at the
University of Michigan under the auspices of a partner NMI team,
EDIT (Enterprise and Desktop Integration Technologies). The tool
provides a bridge between Kerberos and the Public Key Infrastructure
(PKI) associated with Grid security. GRIDS Center leaders like Carl
Kesselman believe KX.509 can play a crucial role in the adoption of
Grids on campuses and in other organizations where Kerberos is used.
“This is a significant development,”
said Kesselman, director of the Center for Grid Technologies at the
University of Southern California (USC) Information Sciences
Institute. “We have successfully deployed KX.509 across the USC
campus, which is a win for Grid users because it shows how their
applications can be integrated with Kerberos infrastructure, and
it’s a win for Kerberos sites because it shows they can be
hospitable to Grids.”
Interoperability is key to Grids,
whose architects are reluctant to dictate local choices for
security, authorization and authentication. Grids are designed to
give individual users and sites autonomy, while helping to ensure
that local choices can be based on a technology’s merit instead of
its popularity elsewhere.
The certificate and private key
generated by KX.509 are normally stored in the same cache alongside
the Kerberos credentials. This enables systems that already have a
mechanism for removing unused Kerberos credentials to also
automatically remove the X.509 credentials. Netscape or Internet
Explorer can then load a special library to access and use these
credentials for secure web activity.
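A toy model of that shared-cache behavior follows; it is illustrative only (real Kerberos credential caches and KX.509 differ in detail), but it shows why co-locating the X.509 credentials lets existing ticket-expiry cleanup cover both:

```python
import time

class CredentialCache:
    """Toy credential cache: each Kerberos ticket may carry a linked
    X.509 certificate, and purging an expired ticket removes both
    together, mirroring the co-location KX.509 relies on."""

    def __init__(self):
        self._entries = {}  # principal -> {"ticket", "x509", "expires_at"}

    def store(self, principal, krb_ticket, expires_at, x509_cert=None):
        self._entries[principal] = {
            "ticket": krb_ticket, "x509": x509_cert, "expires_at": expires_at}

    def purge_expired(self, now=None):
        """Drop every entry past its expiry; ticket and cert go together."""
        now = time.time() if now is None else now
        expired = [p for p, e in self._entries.items()
                   if e["expires_at"] <= now]
        for p in expired:
            del self._entries[p]
        return expired

    def get(self, principal):
        return self._entries.get(principal)
```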
To use KX.509, the user should be on
a system in an existing Kerberos realm and have a Kerberos login for
that domain. In other words, Kerberos client software should
already be installed, allowing KX.509 to generate a Grid certificate
and private key based on the user’s Kerberos credentials.
What is not required is the
presence of X.509 certificates, the format used for Grid Security
Infrastructure (GSI) by software such as the Globus Toolkit and
Condor-G. KX.509 is able to generate a GSI certificate that, when
used with either of those packages, can be fully recognized by any
GSI-enabled Grid resource.
According to Jim Pepin, director of
the Center for High Performance Computing and Communications (HPCC)
at the University of Southern California, "We see USC’s successful
campuswide implementation of Kerberized certificates with NMI as a
first step toward KX.509's broader use for Grid environments both at
USC and across the academic community."
Pepin pointed out that USC was
situated to capitalize quickly on KX.509 because researchers like
Kesselman have long been involved in helping to shape campus policy,
something other universities can emulate.
“This is the plumbing, and now we
need to build ‘appliances’ that use this capability across campus,”
he said. “We’re a huge university with many pedagogical and
research applications that could capitalize, including the Shoah
Visual History Foundation’s multimedia database of testimony from
Holocaust survivors, the Digital Encyclopedia of Los Angeles -- a
collaboration with UCLA to digitize motion pictures and other
artifacts -- and the Southern California Earthquake Center, part of
the Network for Earthquake Engineering Simulation (NEES). Each
of these projects and others are now much better positioned to
deploy Grid tools thanks to our KX.509 deployment."
In non-Kerberos environments, to use
Globus Toolkit utilities on a local or remote machine, the user must
authenticate his or her identity to the machine with a Grid Security
Infrastructure certificate, also called an X.509 certificate. These
long-term certificates let the user create a short-term proxy
certificate that expires after a period generally defined by the
owner of the local or remote resource, after which a new proxy must
be generated to renew access.
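The renewal decision described above can be sketched as follows. This is a hypothetical helper, not part of the Globus Toolkit; the five-minute safety margin is an invented default:

```python
from datetime import datetime, timedelta

def proxy_needs_renewal(issued_at, lifetime, now,
                        margin=timedelta(minutes=5)):
    """True if a short-term proxy certificate has expired, or will
    expire within the safety margin, so a new one should be generated."""
    expires_at = issued_at + lifetime
    return now >= expires_at - margin
```

A job manager would call such a check before each remote operation and regenerate the proxy from the long-term credential when it returns True.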
KX.509 can actually be used in place
of permanent, long-term certificates. It does this by creating a
certificate and private key in X.509 format based on the user’s
existing Kerberos ticket. These credentials are then used to
generate the GSI proxy certificate in Kerberos environments just as
in the non-Kerberos example above.
3. What's Coming Up
SC02: From Terabytes to Insights
November 16-22, 2002 Baltimore, MD SC2002 brings together scientists,
engineers, systems administrators, programmers, and managers to
share ideas and glimpse the future of high performance networking
and computing, data analysis and management, visualization, and
computational modeling. This year, SC will highlight how we can use
our evolving cyberinfrastructure to tap into terabytes of data to
gain insight into creating a world that is safer, healthier and
better educated. The conference is sponsored by the Institute of
Electrical and Electronics Engineers Computer Society and by the Association for
Computing Machinery's Special Interest Group on Computer
Architecture. Presenters include GRIDS Center principals. See
January 13-17, 2003 San
Diego, CA The inaugural GlobusWORLD
will feature three tracks of invited speakers, lecturers,
interactive panels, and forward-looking roundtables. (See item
above under “Hot Off the Grid.”) Details at
HPDC-12
June 22-24, 2003 Seattle, WA Billed as "the
leading technical conference on Grids and Distributed Computing."
HPDC has issued a call for papers with a deadline of February 2003.
4. More about GRIDS
Part of the NSF Middleware
Initiative (NMI), GRIDS is a partnership of the Information Sciences
Institute (ISI) at the University of Southern California, the
University of Chicago, the National Center for Supercomputing
Applications (NCSA) at the University of Illinois at
Urbana-Champaign, the San Diego Supercomputer Center (SDSC) at
University of California at San Diego, and the University of
Wisconsin at Madison. For more information, see
http://www.grids-center.org. To
subscribe for GRIDS updates, send mail to
email@example.com with a
message body of “subscribe news” (without quotes). You will
receive a confirmation message with simple instructions on how to
authenticate your subscription.
1. Hot Off the Grid
NMI-R1 PRODUCTION RELEASE.
Following weeks of beta testing, the official NSF Middleware
Initiative Release 1 (NMI-R1) is now available at http://www.nsf-middleware.org/NMIR1/nmiR1.htm.
The release has GRIDS software deliverables that together are
expected to be used by large-scale distributed collaborations like
the Grid Physics Network (GriPhyN) and the Network for Earthquake
Engineering Simulation (NEES). In partnership with EDIT (also part
of NMI), GRIDS is working with early adopters at ten testbed
universities that will help integrate NMI-R1 with existing campus
infrastructure.
GLOBUS TOOLKIT R&D 100 AWARD.
One NMI-R1 component is the new recipient of an R&D 100 award,
given annually by R&D Magazine to the 100 most significant
technical products of the year. The Globus Toolkit™ 2.0 earned
this coveted award for becoming what The New York Times
recently called "the de facto standard" for Grid
computing. See http://www.grids-center.org/news_rd100.php for
details.
2. Feature Story
GRID COMPUTING: NOT JUST FOR SUPERCOMPUTERS
The GRIDS Center has contributed core software to the initial
NSF Middleware Initiative release (NMI-R1). The Globus Toolkit,
Condor-G and Network Weather Service (NWS) combine to form a suite
of Grid applications that are packaged together for easy
installation, configuration and use. NMI-R1 is expected to become
the standard distribution for these popular tools, upon which
applications will be built by the TeraGrid, the International
Virtual Data Grid Laboratory (IvDGL), the Grid Physics Network
(GriPhyN), the Network for Earthquake Engineering Simulation
(NEES) and other large-scale, distributed projects.
But the scalability of GRIDS software means that users at all
levels can benefit - you don't need access to a supercomputer.
Today's desktop PC is more than the equal of a 1992 supercomputer.
The availability of such affordable computing power can let
scientists and engineers completely reconceptualize their
research, taking advantage of distributed systems for resource
sharing, collaboration and data management.
Built on the Internet and the World Wide Web, the Grid is a new
class of infrastructure that provides scalable, secure,
high-performance mechanisms for discovering and negotiating access
to remote resources. Scientists are now sharing data and
instrumentation on an unprecedented scale, and other
geographically distributed groups are beginning to work together
in ways that were previously impossible. Grids rely on
Internet-based middleware - including NMI-R1 components like the
Globus Toolkit, Condor-G and NWS - that provides standard
protocols for access to on-line resources.
The GRIDS contributions to NMI-R1 share the following traits:
-- Primarily open-source, open-architecture
-- Run on Red Hat Linux 7.2 or later
-- Use Grid Security Infrastructure (GSI), based on Public Key
Infrastructure (PKI)
-- Together manage complementary requirements for sharing
distributed resources
The Globus Toolkit (http://www.globus.org) is a community-based
set of services and software libraries that supports Grids and
Grid applications. The toolkit includes software for security,
information infrastructure, resource management, data management,
communication, fault detection and portability. Each component
defines protocols and application programming interfaces (APIs),
while providing open-source reference implementations in C and
(for client-side APIs) in Java. Its components can be used
separately or together to develop Grid applications.
Condor-G is a highly distributed batch system for job
scheduling and resource management in multi-domain environments.
Optimized to work with the Globus Toolkit's inter-domain
protocols, Condor-G contributes its own intra-domain resource and
job management methods to harness widely distributed resources as
if they all belong to a single domain. The combined result is a
full-featured front-end for computational Grids, letting the user
manage thousands of jobs running at distributed sites. It provides
job monitoring, logging, notification, policy enforcement, fault
tolerance and credential management.
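The fault-tolerance idea -- resubmitting failed jobs on the user's behalf -- can be illustrated with a minimal retry loop. This is a sketch of the general pattern, not Condor-G's actual scheduling logic:

```python
def run_with_retries(submit, max_attempts=3):
    """Call submit(); on failure, resubmit up to max_attempts times.

    Returns (result, attempts_used) on success, or re-raises the last
    error once the attempt budget is exhausted -- the kind of policy a
    Grid batch system applies so users need not babysit each job."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return submit(), attempt
        except RuntimeError as err:
            last_error = err  # e.g., a lost job or unreachable site
    raise last_error
```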
NWS monitors and dynamically forecasts performance of network
and computational resources, using a distributed set of
performance sensors (e.g., network monitors, CPU monitors) for
instantaneous readings. The ability of its numerical models to
predict conditions is analogous to weather forecasting - hence the
name. When used with the Globus Toolkit and Condor-G, it lets
dynamic schedulers provide statistical Quality-of-Service
readings. NWS forecasts end-to-end TCP/IP performance (bandwidth
and latency), available CPU percentage and available non-paged
memory, automatically identifying the best technique to forecast
any given resource.
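That selection idea -- keep several simple predictors and, for each resource, use whichever has had the lowest error on past measurements -- can be sketched in a few lines. This is illustrative only; the real NWS forecasters are more sophisticated:

```python
def last_value(history):
    return history[-1]

def running_mean(history):
    return sum(history) / len(history)

def best_forecast(history, forecasters=(last_value, running_mean)):
    """Pick the forecaster whose one-step-ahead predictions had the
    lowest mean absolute error over the history, then apply it to
    forecast the next value. Needs at least two measurements."""
    def mae(f):
        errors = [abs(f(history[:i]) - history[i])
                  for i in range(1, len(history))]
        return sum(errors) / len(errors)
    winner = min(forecasters, key=mae)
    return winner(history), winner.__name__
```

On a steadily rising series the last-value predictor wins; on a noisy oscillating series the running mean does, which is exactly the per-resource adaptivity the text describes.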
NMI-R1 also includes a tool called KX.509 from the University
of Michigan. It allows Kerberos sites to interact with Grids by
converting a user's credentials from Kerberos to X.509, the
certificate format used by the Grid Security Infrastructure (GSI).
NMI-enabled Grid environments certainly provide high
performance, but that doesn't mean they require high-performance
computers. Although GRIDS software was developed for
high-performance computing, it will work just as well using
commodity desktop PCs. Indeed, today's supercomputers consist
of many such off-the-shelf PCs - albeit numbering in
the thousands - that are configured in clusters that use Grid
software to work in concert. NSF's latest such system is known as
the TeraGrid, and it will be located at four separate sites (two
each in Illinois and California) connected by a 40
gigabit-per-second network.
But don't wait for 2012 and your own terascale desktop PC. Get
started now with NMI-R1 at http://www.nsf-middleware.org. You might
be surprised how straightforward it is to install, configure and
run your own Grid. See the Grid
Computing Primer for examples of applications, with links to project web sites.
3. What's Coming Up
Global Grid Forum 5
July 21-24, 2002
Global Grid Forum 5 (July 21-24) will be held in conjunction
with IEEE's High Performance Distributed Computing Symposium-11
(July 24-26) at the Edinburgh International Conference Center (EICC)
in Edinburgh, Scotland. Over 500 participants are already
registered to join the activities. Advance registration for the
conference -- which will include a talk by NMI program director Alan Blatecky
of NSF --
is available through July 10 at http://www.gridforum.org/Meetings/GGF5/Default.htm.
GGF is a community-initiated forum of individual researchers
and practitioners working on distributed technologies for
"Grid" computing. GGF provides working sessions for
Working Groups and Research Groups, as well as general information
and education for those who wish to "brush up" on what
is happening with Grid technologies, applications and initiatives.
The organization resulted from a merger of the Grid Forum, the
eGrid European Grid Forum, and the Asia-Pacific Grid community.
SC02: From Terabytes to Insights
November 16-22, 2002
SC2002 brings together scientists, engineers, systems
administrators, programmers, and managers to share ideas and
glimpse the future of high performance networking and computing,
data analysis and management, visualization, and computational
modeling. This year, SC will highlight how we can use our evolving
cyberinfrastructure to tap into terabytes of data to gain insight
into creating a world that is safer, healthier and better
educated. The conference is sponsored by the Institute of
Electrical and Electronics Engineers Computer Society and by the
Association for Computing Machinery's Special Interest Group on
Computer Architecture. See http://www.sc-conference.org/sc2002.