
Science DMZ Research Network

Enabling Secure High-Performance Research Data Transfers


Globus Workshop Report

On June 29, Vas Vasiliadis and Rachana Ananthakrishnan from the Globus team at the University of Chicago joined us for a one-day workshop on the Globus Toolkit. Globus allows for fast transfers between data stores, data transfer nodes, and even personal computers. Globus provides a useful graphical user interface (GUI) that enables “drag and drop” transfers, and it supports identities based on the researcher’s home institution (e.g., the Penn State Access Account) as well as other identities in the Globus framework.

The workshop was a good combination of background, design, and hands-on exercises, which allowed even a novice to get familiar with the Globus Web interface. In addition, Vas covered the systems commands and security considerations that enable a properly licensed Globus site to set up Globus endpoints (data storage and transfer servers).

In the afternoon, Rachana covered the command line interface (CLI) and some of the application programming interfaces (APIs), with working examples and hands-on exercises (using Jupyter Notebook and the Globus Python SDK) showing how to take advantage of the developer features of the Globus Toolkit.
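
For readers who want to experiment after the workshop, below is a minimal sketch of submitting a transfer with the Globus Python SDK. It is not taken from the workshop materials: the client ID and endpoint UUIDs are placeholders, and the call pattern follows the SDK's documented native-app login flow.

    # Minimal Globus Python SDK sketch: log in, then submit a transfer task.
    # CLIENT_ID and the endpoint UUIDs are placeholders, not real values.
    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"    # register at developers.globus.org
    SRC_ENDPOINT = "UUID-OF-SOURCE-ENDPOINT"   # e.g., a campus data transfer node
    DST_ENDPOINT = "UUID-OF-DESTINATION-ENDPOINT"

    # Browser-based native-app login flow: print a URL, paste back the code.
    client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    client.oauth2_start_flow()
    print("Log in at:", client.oauth2_get_authorize_url())
    auth_code = input("Paste the authorization code here: ").strip()
    tokens = client.oauth2_exchange_code_for_tokens(auth_code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Build and submit the transfer task.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
    )
    tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="example")
    tdata.add_item("/path/on/source/data.tar", "/path/on/dest/data.tar")
    task = tc.submit_transfer(tdata)
    print("Submitted transfer task:", task["task_id"])

Globus runs the transfer asynchronously on its servers, so the script can exit as soon as the task is submitted; progress can be watched in the web interface or polled with tc.get_task(task["task_id"]).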

The workshop was well attended (~50 people, including some non-Penn Staters) and paved the way for more Globus use at Penn State. Vas and Rachana used Amazon Web Services machine images (one for each workshop participant) to make for a wonderful, hands-on learning experience.

Thanks to Penn State IT, the Institute for Cyberscience, and the Vice Presidents for IT (CIO) and Research for sponsoring the Globus Workshop. Also, special thanks to Greg Madden, Ken Miller, Tonia Kephart, Jeff Reel, the Smeal College of Business RIIT Group, and Heather White for helping with the logistics.

Operating Innovative Networks Virtual Workshop

Internet2, the Indiana University Global NOC, and ESnet are hosting a virtual Operating Innovative Networks (OIN) workshop. It is hosted on Zoom and streamed over YouTube.
Presented by experts from the Department of Energy’s ESnet, Indiana University, and Internet2, the workshop focuses on Science DMZ network architectures, perfSONAR performance measurement software, Data Transfer Nodes, and emerging Software Defined Networking technologies. Combined, these technologies are proven to support high-performance, big data science applications while providing the security and availability that modern campuses and laboratories need.

Given that this is a multi-hour event, we do not expect everyone to attend every session. The material is meant to be viewed when you have time; it will be repeated at different times in the schedule and will be recorded. Please attend at the times that make the most sense for you. Sessions will start at the top of each hour and will end after the presentation and Q&A.

Virtual OIN Schedule:

Science DMZ Sessions:

Engagement
Overview of the concept of Science Engagement: working with researchers to understand scientific use of networks and remove friction.
6/21/17 12:00 PM
6/22/17 6:00 AM

Architecture
Review of the Science DMZ paradigm, and examples of how it can be implemented. Overview of the areas of friction the design pattern attempts to address.
6/21/17 1:00 PM
6/22/17 7:00 AM

Security
Overview of network security, and ways the Science DMZ paradigm can be implemented to address major risks.
6/21/17 2:00 PM
6/22/17 8:00 AM

Data Transfer Nodes
Construction and use of Data Transfer Nodes (DTNs) for scientific data movement.
6/21/17 3:00 PM
6/22/17 9:00 AM

Globus
Introduction to the Globus data movement tool, and how it can be integrated into a scientific workflow.
6/21/17 4:00 PM
6/22/17 10:00 AM

perfSONAR
Review of the perfSONAR network monitoring framework.  Ways it can be installed and used on campus to debug network problems.
6/21/17 5:00 PM
6/22/17 11:00 AM

Software Defined Networking Sessions:

SDN Intro
Introduction to the concept of Software Defined Networking (SDN).
6/22/17 12:00 PM

SDN Building / Current
Basic components in SDN, and a review of the current state of the art.
6/22/17 1:00 PM

OpenFlow Tutorial
A tutorial on how to use OpenFlow to control portions of a network.
6/22/17 2:00 PM

SciPass
An introduction to the IU SciPass tool – a security architecture that facilitates data movement.
6/22/17 3:00 PM

GRNOC Dist Tool
An overview of a tool used by the IU GRNOC to manage software.
6/22/17 4:00 PM

Internet2 Advanced Layer 2 Service – AL2S
Internet2’s Advanced Layer 2 Service, and how it can be used for network research.
6/22/17 5:00 PM

The event will also be streamed on YouTube for those who cannot get Zoom to work or who would prefer a non-interactive experience. The YouTube links are as follows:

– Wednesday, June 21 (Virtual OIN – Day 1)

http://www.youtube.com/watch?v=0gDygbZRt5U

– Thursday, June 22 (Virtual OIN – Day 2)

http://www.youtube.com/watch?v=yiI56QR6zsU

The materials for the OIN workshop (as well as videos, when they are finished being edited) will be stored here:

http://iu.box.com/v/OIN20

For future remote events, please see http://www.oinworkshop.com/

Research Network Infrastructure Router Move

At 5 AM on Wednesday morning, June 21st, Enterprise Networking and Communication Services will move a Research Network (RN) redundant router from USB2 to Tower Road as part of the Critical Services Migration project. The Research Network Ethernet fabric in Henderson, Noll, BBH, Frear South, and on the Berks campus will be down during the maintenance window. No other outage is expected; however, the Research Network will have limited resiliency during the move.

This work can be tracked in CHG0041778.

Work is expected to be completed by 7 AM.

For more information, please contact the Operations Center (814-865-4662).

http://alerts.its.psu.edu/alert-4647

Next Operating Innovative Networks (OIN) Workshop in Nashville, TN

Registration open for Next OIN Workshop in Nashville, TN
Workshop Offers Hands-on Training in SDN, Science DMZ, DTNs, perfSONAR, and Science Engagement
December 13 – 14, 2016
Vanderbilt University
Nashville, TN

Limited seats available; no registration fee!
Register here: http://oinworkshop.com/3/custom_form.htm

ESnet, Indiana University, Internet2, the Southern Crossroads (SoX) Regional Network, and Vanderbilt University are hosting the next Operating Innovative Networks (OIN) workshop on the campus of Vanderbilt University on December 13–14, 2016, with no registration fee. The series is designed to help lab and campus network engineers deploy next-generation research networks that can effectively support data-intensive science.

The workshop will consist of two days of presentation material along with hands-on exercises for building and deploying Science DMZs, Software Defined Networks, perfSONAR, Data Transfer Nodes, and Science Engagement. The content will be particularly useful for NSF Campus Cyberinfrastructure awardees that are being funded to upgrade their networks with these technologies, or for those looking to prepare for the current CC* solicitation. By the end of the event, attendees will have a better understanding of the requirements for supporting scientific use of the network, architectural strategies that can simplify these interactions, and knowledge of tools that can mitigate problems users may encounter.

For complete information on the program, location and registration details, visit: http://www.oinworkshop.com/3/miscellaneous4.htm

There are a limited number of spots for this workshop, and travel grants are not available. Registration will close when the number of registration slots has been exhausted.

Questions about this workshop can be directed to: oin-workshop@grnoc.iu.edu

Scaling the Research Network

Phase 1: The Original NSF-funded Research Network

The current implementation of the Penn State Research Network consists of a central network core and edge (Brocade 6740) switches, which were paid for by the National Science Foundation CC-NIE program (NSF 12-541). To ascertain the extent of the data movement problem, network research flows were monitored on the existing network, and locations were identified where the largest data movements were occurring (e.g., from national atmospheric and environmental data sources to Walker Building, and from Huck Institute buildings to the Computer Building Data Center). Edge switches were added to those buildings in an effort to address the lion’s share of the research data movement. This had the inherent effect of making utilization of the Research Network contingent upon the location of the researcher.

Phase 2: Scaling the Research Network

With a desire to remedy the location dependence of the existing Research Network, we have developed the following plan to scale out the network and make it more accessible to all researchers with large data sets.

  1. Premium (Available Now): At the premium level, a researcher, department, or College could purchase additional Brocade 6740 switches to expand the network out to their “big data” location. This option is the most similar to an edge connection of the existing Research Network. A group considering this should meet with our Engagement and Implementation teams to discuss the specifications for the device and any physical or geographic limitations. This 20-Gb/s option provides two (2) 10-Gb/s connections to the Research Network core and up to 48 10-Gb/s or 1-Gb/s connections to computers and equipment (these can be “mixed and matched”). This option has the same advanced switching capability that the original Research Network locations have.
  2. Data Center (Available Now): Researchers with equipment already in a data center (either the Computer Building Data Center or the forthcoming Data Center on Tower Road) are encouraged to connect to the Penn State Research Network via the RN aggregation switches in those data centers at 10 Gb/s. This will also provide that researcher with direct connections to ICS-ACI compute clusters and resources located in those data centers. Provisions can be made for those connections to comply with different levels of Federal and/or granting agency requirements. This option also includes the above-mentioned advanced switching capability.
  3. Ethernet Fabric (Available Fall 2016: testing complete, in trials): Another high-speed option consists of a 10-Gb/s Ethernet Fabric switch. This option provides one (1) 10-Gb/s connection from the switch to the Research Network, an additional 10-Gb/s fiber edge port, and either 24 or 48 1-Gb/s connections to individual research workstations or instruments. This option will provide faster access to other points on the Research Network, including the ICS-ACI equipment, and will reduce network congestion on a department or College’s firewall and local area network (LAN). The existing building wiring plant should suffice to allow for 1-Gb/s connections over Category 6/5e copper Ethernet connections and wall jacks. We are investigating the design of a Federal/granting-agency-compliant solution on a switch-by-switch basis. Again, this option should be coordinated with our Engagement and Implementation teams to assure seamless integration into the Research Network.
  4. Compliance Port (Proof of Concept): At the base level of connectivity, we can provision an individual “research or compliance port” on an existing ITS-managed, converged network switch. Using the capabilities of these switches, a wall-jack network connection can be “virtualized” as a connection on the Research Network. This will be the least expensive solution. It is unclear whether the virtual port can be made Federal/granting-agency compliant at the audit level; further investigation is needed. As with the above solutions, this will provide a single 1-Gb/s connection to the Research Network.

Enhancements to Globus login mechanism greatly simplify access and use of the service

From the Globus team:
On February 13th, 2016, we’re changing the way you access the Globus service, allowing you to log into Globus without having to create a Globus username and password. As previously announced, the Globus service will be unavailable for a few hours while we complete this major upgrade. When service resumes, you may select your institution from a list on the Globus login page and use your institutional username and password to log into Globus – if you typically log in with a Globus username and password, you can continue to do so by selecting “Globus ID” from this list. More information about this change is available on our blog. Please contact support@globus.org if you have any questions or concerns.

Fossil Research Requires Modern Networks

An article by our ITS colleague Julie Eble describes the immediate benefit that Professor Tim Ryan received by putting his new X-ray CT (X-ray computed tomography) device on the Penn State Research Network:

Quantitative X-ray imaging facility is moving big bytes across the network

The article points out several things that are worth mentioning. First, it describes a general trend that as instruments get more sensitive or produce higher resolution images, they generate larger data sets. Those data sets become harder to analyze on a laptop or standalone desktop and moving them to more robust computational platforms (like the ICS-ACI High Performance Computing Clusters) becomes increasingly necessary. The ability to create, collect, and process data is outpacing the ability of traditional academic networks to move it in real-time or near real-time.

Another implication is that by using the Research Network to transfer the data set(s) to ICS-ACI storage, the data can be analyzed as a whole rather than in parts (which are discarded), as Professor Ryan points out. Data stored on ACI storage can be processed, filtered, and analyzed on ACI computational clusters, with the results visualized on the researcher’s workstation or laptop. For some, this is a new way to work, but as the article states, it greatly speeds up the data analysis process.

Finally, implicit in Mr. Canich’s comments is that this research data no longer uses the College of Earth and Mineral Sciences’ academic network, extending the time that the network routing, firewall, and switch equipment can remain viable. In financial terms, lengthening the life-cycle of the equipment decreases the amount of money spent over time to keep that equipment viable and the network secure. Just don’t tell the Dean…

DDoS mitigation with sFlow

Here is a link to a video describing the demo I gave at today’s Research Network meeting. This SDN application is built to detect DDoS attacks from router sFlow data and to mitigate them with OpenFlow rules in near real-time.

https://psu.box.com/sflow-sdn-flood-protect

sFlow DDoS mitigation
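
For the curious, the detection side of the demo follows the pattern InMon documents for sFlow-RT’s REST API: define a flow, attach a threshold, and long-poll the events feed. The sketch below is illustrative only; it assumes sFlow-RT is running on its default port (8008), and block_attack() is a stand-in for the OpenFlow rule push the demo performed against the router.

    # Sketch of DDoS detection against sFlow-RT's REST API (illustrative).
    import json
    import requests

    RT = "http://localhost:8008"   # assumed sFlow-RT address, default port

    # Track frames per second, keyed by destination IP address.
    flow = {"keys": "ipdestination", "value": "frames"}
    requests.put(RT + "/flow/ddos/json", data=json.dumps(flow))

    # Raise an event when any one destination exceeds 100,000 frames/sec.
    threshold = {"metric": "ddos", "value": 100000, "byFlow": True, "timeout": 2}
    requests.put(RT + "/threshold/ddos/json", data=json.dumps(threshold))

    def block_attack(victim_ip):
        # Placeholder: the demo installed an OpenFlow drop rule on the
        # Brocade MLXe for traffic destined to the victim address.
        print("would push OpenFlow rule to protect", victim_ip)

    # Long-poll the events feed and react to threshold crossings.
    event_id = -1
    while True:
        r = requests.get(RT + "/events/json",
                         params={"maxEvents": 10, "timeout": 60,
                                 "eventID": event_id})
        events = r.json()
        if events:
            event_id = events[0]["eventID"]   # newest event first
        for e in reversed(events):
            if e.get("metric") == "ddos":
                block_attack(e.get("flowKey"))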


Router: Brocade MLXe (hybrid-port OpenFlow) – Brocade MLX Series core routers deliver unprecedented scale and performance, high reliability, and operational efficiency for the most demanding service provider and enterprise networks. Built on a programmable architecture with high-density 100 Gigabit Ethernet (GbE), 40 GbE, and 10 GbE routing, these routers meet massive bandwidth demands while maximizing ROI. Leading OpenFlow 1.3 scale in hybrid port mode provides a seamless transition to SDN for increased network agility and programmatic control.


Data: sFlow® is an industry-standard technology for monitoring high-speed switched networks. It gives complete visibility into the use of networks, enabling performance optimization, accounting/billing for usage, and defense against security threats. sFlow.org drives the widespread adoption of sFlow by end users and by network equipment and software vendors.

sFlow.com is an InMon Corp. business unit using Software Defined Networking (SDN) and traffic analytics to build performance optimization solutions for cloud data centers.

The trend towards cloud computing, virtualization, and software defined networking technologies is being driven by the promise of flexibility and controllability that will allow a network to automatically respond to changing demand. To deliver on this promise, scalable, real-time measurements and analytics are essential. sFlow.com delivers automated cloud optimization solutions that leverage InMon’s pervasive sFlow measurement technology and partnerships with vendors. These solutions ensure agile, efficient, and robust delivery of services, allowing customers to fully realize the benefits of their cloud investment.

InMon invented the industry standard sFlow measurement technology and freely licenses sFlow to over 40 leading vendors, including: A10, Alcatel-Lucent, Arista, Brocade, Cisco, Cumulus, Dell, Edge-Core, Extreme, F5, Hewlett-Packard, Hitachi, Huawei, IBM, Juniper, Mellanox, NEC, Pica8, Quanta, ZTE, and ZyXEL.

InMon is an active participant in the open source community, contributing to a range of projects, including: sFlow.org, Host sFlow, and Open vSwitch.


SDN controller: sFlow-RT™ incorporates InMon’s asynchronous sFlow analytics engine (patent pending), delivering real-time visibility into Software Defined Networking (SDN) stacks and enabling new classes of performance-aware SDN applications such as load balancing and DDoS protection.

The Core of the Research Network is racked

Yesterday, a group from TNS and Data Centers mounted and racked the core of the Research Network.

USB2:

USB2 Research Network Core

Computer Building:

CompBldg Research Network Core

Hello Researchers!

It’s coming. The Research Network is coming!

“…The significant expansion of the network being undertaken in this project will make several hundred 10 Gbps ports available in laboratories and offices across two major campuses of Penn State. This expansion is accomplished by deploying 48-port 10 Gbps Ethernet switches in 12 separate buildings with highest concentration of faculty who rely on large computational and data resources for advanced teaching and research. Each of these 12 switches is slated to have two 10 Gbps uplinks, one to each core router for redundancy. Both core routers will have 10 Gbps and 100 Gbps ports for external connectivity. The overall goal of the project is to provide at least a ten-fold increase in end-to-end network connectivity in labs and offices, making sustained 10 Gbps connections ubiquitous between faculty laboratories, data-generating instruments, select classrooms, and local and national computational and data resources.” – #1245980, PI: Agarwala, PSU, “CC-NIE Network Infrastructure: Accelerating the Build-out of a Dedicated Network for Education and Research in Big Data Science and Engineering”

Source: http://www.nsf.gov/attachments/126052/public/CCNIE_Dec2012_ACCI.pdf
