Abstract
The broad infrastructure considered necessary to capitalize on spectacular advances in information technology has been termed cyberinfrastructure. Cyberinfrastructure integrates hardware for computing, data, and networks; digitally enabled sensors; observatories and experimental facilities; and an interoperable suite of software and middleware services and tools. Scientists and engineers need access to new information technology capabilities, such as distributed wired and wireless observing networks and sophisticated simulation tools that permit exploration of phenomena that can never be observed or replicated by experiment. Computation offers new models of performance and modes of scientific discovery that greatly extend the limited range of models that can be built with mathematics alone, for example, chaotic behavior. Few researchers working at the frontiers of knowledge can carry out their work without cyberinfrastructure of one form or another. While hardware performance has been growing exponentially, with gate density doubling every 18 months, storage capacity every 12 months, and network capacity every 9 months, it has become clear that ever more capable hardware is not the only prerequisite for computation-enabled discovery.
This plenary talk concentrates on the cyberinfrastructure essential to 21st-century advances in science and engineering research and education.
Keywords: Cyberinfrastructure, multiscientific, MEMS, knowledge frontiers.
Introduction
Cyberinfrastructure is the next vision for 21st-century perspective and discovery. The term was introduced by the US National Science Foundation (NSF), which issued a draft vision document in July 2006 for the US President. Today's academic community is well aware of this new infrastructure, which must be embraced to keep pace with advancing science and technology. The broad infrastructure required to capitalize on dramatic advances in information technology has been termed cyberinfrastructure. Cyberinfrastructure integrates hardware for computing, data, and networks; digitally enabled sensors; observatories and experimental facilities; an interoperable suite of software; and middleware services and tools. Investments in interdisciplinary teams and in cyberinfrastructure professionals with expertise in algorithm development, system operations, and applications improvement are also essential to exploit the full power of cyberinfrastructure to create, disseminate, and preserve scientific data, information, knowledge, and controls. The NSF vision is very clear: cyberinfrastructure will play a leading role in the development and support essential to 21st-century advances in science and engineering research and education. What about the rest of us? Are we ready to move?
The global mission is to:
– develop a human-centered cyberinfrastructure (CI) that is driven by science and engineering research and education opportunities
– provide the science and engineering communities with access to world-class CI tools and services focused on high-performance computing; data, data analysis, and visualization; cyber-services and virtual organizations; and learning and workforce development
– broaden participation in cyberinfrastructure as an agency strategy
– ensure sustainability: secure, efficient, reliable, accessible, usable, and interoperable
– create a stable CI that contributes to the agency's statutory mission
The Cyberinfrastructure
While the US government and the National Science Foundation provide the leadership on this new infrastructure, we scientists and engineers still remain within the boundaries of our own disciplines, closed and sealed. This talk does not go into that part in detail; instead, it aims to illustrate an insight that is appropriate and implementable in practice by sharing multiscientific knowledge. We shape our view along three paths: the first rests on the customary scientific approach of searching, experimenting, and releasing results; the second is the quiet, deliberate, rock-solid practice of engineering; and the third is the fast-moving, commercially driven world of browser-based Internet technology. We discuss here the essentials of cyberinfrastructure and the deployment of tools that we have invented to suit and support this mission.
The Essentials of Cyberinfrastructure
The landscape addresses the most computationally challenging problems by providing access to a world-class high-performance computing environment and by exchanging service agreements with all entities wherever possible. Defining the essentials is ultimately a policy issue, but this is a good start, given funds and other necessary resources. We call for contemporary tools in a state-of-the-art high-performance computing (HPC) environment. HPC capabilities facilitate the modeling of life-cycle challenges that capture interdependencies across varied disciplines and multiple scales, creating globally competitive manufacturing enterprise systems. In short, sophisticated numerical simulations permit scientists and engineers to perform a wide range of in silico experiments that would otherwise be too difficult and expensive, and sometimes impossible, to perform in the laboratory. HPC is also essential to the success of research conducted with sophisticated experimental tools, whose results need not be "re-invented"; for example, without the waveforms produced by numerical simulation of black-hole collisions and other astrophysical events, gravitational-wave signals could not be extracted from the data produced by the Laser Interferometer Gravitational-Wave Observatory.
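As a toy illustration of the in silico experiments described above (not any specific HPC code, and far smaller than the black-hole simulations cited), the following sketch numerically integrates a damped oscillator; the same time-stepping pattern, scaled up enormously, underlies many HPC simulations:

```python
# Minimal in silico experiment: integrating a damped oscillator
# m*x'' + c*x' + k*x = 0 (with m = 1) by semi-implicit Euler steps.
# A toy stand-in for the far larger simulations discussed in the text.

def simulate_oscillator(x0=1.0, v0=0.0, k=4.0, c=0.5, dt=0.001, steps=10_000):
    """Return the sampled trajectory x(t) of the damped oscillator."""
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        a = -k * x - c * v      # acceleration from spring and damping forces
        v += a * dt             # update velocity first (semi-implicit Euler)
        x += v * dt             # then position, using the new velocity
        trajectory.append(x)
    return trajectory

traj = simulate_oscillator()
# Damping should shrink the amplitude over the run.
print(abs(traj[-1]) < abs(traj[0]))
```

Such a "numerical experiment" lets one vary parameters (stiffness, damping, time step) freely, which is precisely what makes simulation attractive where laboratory experiments are impractical.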
Science and engineering research and education need HPC tools that have a direct bearing on the Nation's competitiveness. HPC investment has a long-term impact on national needs such as bioengineering, critical infrastructure protection (for example, the electric power grid), healthcare, manufacturing, nanotechnology, energy, and transportation [1].
The Academy and the World in Cyberinfrastructure
At this level, the concern is with the connectivity of the nodes from a system point of view, rather than their position in space. The topic of interest is the behavior of systems as a network, that is, the hardware and software realization of the arrays [2]. Recent advances in electronic circuit miniaturization and MEMS have led to the creation of small sensor nodes that integrate several sensors (of various kinds), a CPU, memory, and a wireless transceiver [3]. A collection of these sensor nodes forms a sensor network, which is easily deployable and provides a high degree of visibility into real-world physical processes as they happen, benefiting a variety of applications including environmental monitoring, surveillance, security, and target tracking. The base station computer is connected, for example, to a back-end database system via satellite link. Sensor nodes sample their sensors roughly once a minute and send their readings directly to the back-end database. Kim & Yoon [2000] introduced a related sensor-based monitoring system [4]; at the Mpala Research Center in Kenya, solar-powered sensor nodes have been used to monitor individual large animals.
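The sample-and-forward loop described above can be sketched in a few lines. All class and field names here are invented for illustration; a real deployment would replace the random readings with MEMS sensor drivers and the in-memory list with the satellite-linked database:

```python
# Hypothetical sketch of the sensor-node loop: each node samples its
# sensors (roughly once a minute in a real deployment) and forwards the
# readings via the base station to the back-end database.
import random
import time

class BaseStation:
    """Relays readings to the back-end database (here, an in-memory list)."""
    def __init__(self):
        self.database = []

    def receive(self, node_id, reading):
        self.database.append({"node": node_id, "time": time.time(), **reading})

class SensorNode:
    def __init__(self, node_id, base_station):
        self.node_id = node_id
        self.base_station = base_station

    def sample(self):
        # Placeholder for real MEMS sensor readings.
        return {"temperature": round(random.uniform(15, 35), 1),
                "light": random.randint(0, 1023)}

    def run_once(self):
        self.base_station.receive(self.node_id, self.sample())

station = BaseStation()
nodes = [SensorNode(i, station) for i in range(3)]
for node in nodes:
    node.run_once()            # repeated on a timer in a real network
print(len(station.database))   # 3 readings collected
```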
A further predictable case in point is sensor-grid computing. The idea is simply to achieve sensor-grid computing as a global opportunity by connecting and interfacing sensors and sensor networks to the grid, and letting all computational operations and everyday jobs run there. What we need here are fast communication links between the sensor-actuator nodes and the grid. This amounts to establishing centralized sensor-grid computing through a centralized sensor-grid architecture. We may face some painful bottlenecks at the initial stages, where centralization leads to excessive communication.
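A minimal sketch of this centralized pattern, under the assumption of a single shared queue standing in for the grid's ingress point: sensor nodes push readings into the queue and grid workers pull them off for processing. The single queue is also exactly where the communication bottleneck noted above would appear as the number of nodes grows:

```python
# Hedged sketch of centralized sensor-grid computing: all sensor traffic
# funnels through one central task queue, processed by grid workers.
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def grid_worker():
    while True:
        reading = task_queue.get()
        if reading is None:                 # sentinel: shut the worker down
            task_queue.task_done()
            break
        # Stand-in for a real grid computation on the reading.
        processed = {"node": reading["node"], "value": reading["value"] * 2}
        with results_lock:
            results.append(processed)
        task_queue.task_done()

workers = [threading.Thread(target=grid_worker) for _ in range(2)]
for w in workers:
    w.start()

# Sensor-actuator nodes push readings into the centralized queue.
for node_id in range(5):
    task_queue.put({"node": node_id, "value": node_id})

task_queue.join()                           # wait until all readings processed
for _ in workers:
    task_queue.put(None)
for w in workers:
    w.join()

print(len(results))                         # 5 processed readings
```

Decentralizing this design, for example by sharding the queue per region, is one way to relieve the excessive-communication problem at scale.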
We have come up with a concept that is now being implemented: a fused but distributed, threaded system in an open-architecture format, where a fully or semi-matured module can plug in and learn to perform within the cyberinfrastructure. The communication links and the scalable configuration together form a multi-channel distributed grid environment.
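The open-architecture idea, where independently matured modules plug into a common framework, can be sketched as a simple module registry. Names here (`OpenArchitecture`, `register`, `dispatch`) are illustrative only, not the actual system:

```python
# Illustrative sketch of the open-architecture format: matured modules
# register with the framework and are dispatched as interchangeable parts.
class OpenArchitecture:
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        """Plug a (fully or semi-matured) module into the framework."""
        self.modules[name] = module

    def dispatch(self, name, payload):
        """Route a payload to the named module and return its result."""
        return self.modules[name](payload)

framework = OpenArchitecture()
framework.register("filter", lambda data: [x for x in data if x > 0])
framework.register("scale", lambda data: [10 * x for x in data])

cleaned = framework.dispatch("filter", [-2, 3, -1, 5])
scaled = framework.dispatch("scale", cleaned)
print(scaled)  # [30, 50]
```

Because modules are looked up by name at dispatch time, a new or upgraded module can be dropped in without changing the framework, which is the essence of the open format described above.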
Conclusion
This plenary talk focuses on the pressing needs of 21st-century scientific and engineering discovery, where we further require:
– governmental policy that directs the academic and research communities to get involved
– continued initiatives by all concerned in accessing, profiling, and mirroring users
– guarantees of global security in the transition and transmission of information between global users
– legal frameworks for access and virtual connectivity
The decision lies with the Open Source movement, the academic community, and the governments.
Thank you all, and enjoy your research in search of the "truth" behind the global discoveries.
Balan Pillai
Doctor of Science in Technology, M.Sc. (Economics)
Professor
————————— END ——————————————
References
[1] NASA – Files. http://www.nasa.com.
[2] Newman, R [2006]: Smart MEMS & Sensor Systems. Imperial College Press, UK, pp. 465–503.
[3] The Gridbus Project: http://www.gridbus.org.
[4] Kim, S-H & Yoon, C [2000]: Structural monitoring system based on sensitivity analysis and neural network. Computer-Aided Civil and Infrastructure Engineering, 15, pp. 304–315. Blackwell Publishers, Malden, Mass., USA.