Hey Supercomputer, Pause and Smell the Roses! Enjoy these Fall Colors!

I received an email on my smartphone a while ago from Herb Schultz (Marketing Manager, IBM Systems and Technology Group for Technical Computing and Analytics) with the agenda for IBM’s Technical Computing IT Analyst meeting the following week in NYC. At the time, I was having lunch at one of my favorite restaurants in Dutchess County, New York – McKinney and Doyle. As I munched some delectable crispy calamari and realized that the agenda covered Cognitive Computing, a train of thoughts sprouted in my mind. Was IBM planning to brief us on supercomputers that can smell, see, hear, touch and taste?

Roses and Fall

Savoring that Crispy, Crunchy Calamari!

I was told once by a gourmand that fine food must appeal to all five of our basic senses. As our waiter came from behind me and brought the food, I first smelled a pleasing aroma. Then, the calamari served over citrus coleslaw with sweet and sour chili sauce was a gorgeous sight. Each calamari piece that I touched was perfect – not soggy or oily. When I placed the warm, crispy and coarse calamari on my tongue, it produced a delightful tingle. I then began slowly munching each piece, enjoying the gentle rhythmic crackling sound with every crunch while simultaneously savoring the mouthwatering taste. I thought: is IBM going to tell me that all these fine sensations will soon be replaced by a computer? Being a foodie, I found that prospect disappointing!

Fortunately No! – It’s Also about the Next Evolution of Big Data Analytics!

When I went for the briefing the following week in New York, I realized that IBM’s vision and path to Cognitive Computing is not illusory, exotic or far out into the future. It is a natural evolution of a Big Data trend that’s happening today in healthcare and other industries that are leveraging Big Data for Insights, Knowledge and Wisdom. In many ways, this vision of Cognitive Computing is similar to the Georgia Tech Cognitive Computing Laboratory vision.

A few weeks later at the IBM Edge Conference, when I was having dinner (again savoring another delectable appetizer) with Jay Muelhoefer (Worldwide Marketing Executive, IBM Platform and Technical Computing) and his team, I learnt that IBM had won a major competitive deal with an Asian Communications Service Provider (CSP) running a Big Data workload. Jay suggested that it would be good to write up this case study to highlight the use of Big Data in Telecommunications and how IBM improved Customer Service for this CSP.

For the customary Cabot Partners’ fee, I signed up to write this paper – Big Data: Delivering an Agile Infrastructure for Time-Critical Analytics in Telecommunications – which you can download by filling out a simple form.

Supercomputing: It’s more than Just About Absolute Performance

Just as the right balance of the five senses satiates and heightens an individual’s dining experience, supercomputers must possess five critical elements to maximize business value – performance, reliability, manageability, efficiency and serviceability. This requires – as illustrated by the Telecom example – a combination of hardware, software and skilled people all working in tandem to maximize a client’s business value, just as a memorable dining experience requires fine food made with the freshest ingredients, a nice ambiance and great company.

As We Hurtle Towards Supercomputing 2013, Chill! There’s Value in Slowness!

When the world’s fastest supercomputers are unveiled next month in Denver, Colorado, there will undoubtedly be much media hoopla around the Top500 list. IT vendors will compete fiercely as they always do. They will incessantly brag about their position on this list and how fast their systems are on the Linpack benchmark used to rank it. But amidst all this noise, you should also be cognizant of the other key elements (senses) that must be measured and highlighted for supercomputers (cognitive computers).

But pause, let’s step back, take a deep breath and realize that, according to some philosophers, this sensory world is just an illusion – they call it Maya! In such a world of Maya, does it matter that computers can smell, see, touch, hear and taste?


Ratcheting up the “Flops”, the US regains Supercomputing Leadership. Keep the Innovation Flame on!

This week’s big supercomputing story – It’s Red, White and Blue and … Green too!

This past Monday (June 17), as I was sitting outside reading and enjoying the nice sunny weather on the East Coast, I received an email alert that delighted me and put me in a joyous, reflective state. At ISC 2012, the Top500 list of supercomputers worldwide firmly established that, after a span of almost three years, the United States had regained the enviable and prestigious floating-point performance leadership position in supercomputing, or High Performance Computing (#HPC) – wresting it away from other world-class manufacturers.

Blue is Green Too!

The fastest supercomputer in the world is the IBM BlueGene/Q (Sequoia), installed at the Lawrence Livermore National Laboratory, which achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores. The BlueGene/Q is also the top system on the Graph500 list, which ranks supercomputers by a data-intensive benchmark that mirrors workloads common in graph applications including social networks, cyber security, and medical informatics. And the BlueGene/Q is also the greenest supercomputer according to the Green500 list, which ranks supercomputers by energy efficiency! Moreover, the fastest supercomputer in Europe is SuperMUC, an IBM iDataPlex system installed at the Leibniz-Rechenzentrum in Germany and cooled by warm water.

You can get more details from the Top500, the Green500, and the Graph500 lists of supercomputers. But if you want a truly detailed perspective (the what, how and why) on the innovations that underpin these spectacular results, please read our recent papers on the BlueGene/Q and the iDataPlex:

  1. IBM Blue Gene/Q: The Most Energy Efficient Green Solution for High Performance Computing
  2. Beyond PetaFlops: Scalable, Energy Efficient IBM System x iDataPlex dx360 M4 powered by Intel Xeon processor E5-2600 Product Family
  3. The IBM System x iDataPlex dx360 M4: Superior Energy Efficiency and Total Cost of Ownership for Petascale Technical Computing

The Eternal Flame of Innovation

This is a great testament to US innovation in the computer industry. Going forward, one fundamental question/challenge in supercomputer design is: how can we keep heat away and cool these systems so they run reliably and efficiently as we scale up performance? For this, innovations in cooling technologies, low-power processors, and the rest of the technologies must all come together to build that gigantic jigsaw puzzle – the exascale system! The center of gravity of this pursuit, while historically firmly entrenched in the US since the dawn of the information age, has lately been seesawing between the US and Asia. Today, it is in the US. One question is: how can the US reinforce and sustain this edge and arrest this seesaw?

However, a bigger question is how the US can keep the flame and heat on in the escalating tussle for an edge in innovation and in the seesawing race for leadership in today’s global knowledge economy. Today (June 21, 2012) this heat is literally on. It is not only the longest day in the northern hemisphere but also the hottest day here in Connecticut! The sprint towards exascale is just one proxy for this larger battle.

To win, we must flex our neurons. For this we need relentless focus and continuing investment in education – particularly in math, science, and language. Our teachers are our personal trainers and the classroom is the gym. But beyond traditional classroom education, we must experiment, constantly learn on the job and not be afraid to make and learn from Brilliant Mistakes. Moving on and learning from these Brilliant Flops (a.k.a. Mistakes) is of greater benefit to innovation than merely ratcheting up the supercomputing Flops! May the Olympic Torch of Innovation continue to shine on the United States!


The Taming of Data – On the Value Train from Insights to Knowledge to Wisdom to perhaps Happiness?

 

I recently attended the IBM #SmarterAnalytics Summit in New York City, which focused on #Analytics and #Optimization. The sessions, and the client panel in particular, were superb and enlightening. Beyond the typical discussions on technology, the IBM client panel repeatedly emphasized that organizational and cultural changes were critical to properly implement and integrate #Analytics and #Optimization as core business processes.

This re-sparked a train of thoughts in my mind. I even got to test these thoughts a bit later at the evening reception. On my train ride back home, this train of thoughts on how to tame this avalanche of data for mankind’s (including corporations’) benefit continued to escalate. I thought I should transcribe this train of thoughts quickly before it crashes and bursts into some forgotten cloud! For this, my Cloud Mobile (iPhone) with Speech Recognition Software (Dragon) came to my rescue.

On Data, Words and Deeds, and Ephemeral Social Media

It’s well recognized by IT industry experts that data by itself has little value. It’s what you do with it that generates the value. This reminds me of Lech Walesa’s quote: “The supply of words in the world market is plentiful but the demand is falling. Let deeds follow words now.” Or, simply put, in an anonymous quote, “talk is cheap because supply exceeds demand.”

I am not suggesting that we clamp down on the supply of words. That would be tantamount to curtailing free speech. But we must take a thoughtful approach and critically examine the hype around #Bigdata – hype primarily perpetuated by the IT industry, of which, as an analyst, I am also guilty.

Also guilty – contributing to the excess supply of data – is the recent spate of growth in “unstructured” data: images, video, voice, pictures and others. Probably because many believe that “a picture is worth a thousand words” – and a video even more! Every time I hear this oft-used cliché, I think: REALLY? WHY? Why are we creating all these quantities of image/video data and spending our precious resources (our time) doing so? More importantly, why are we so enamored with transmitting this data to others?

Yes, new social media and the underlying technologies give each and every individual enormous capability for creative expression and even contribute to the overthrow of oppressive regimes, e.g. the Arab Spring. But aren’t we collectively trampling on another form of creative expression – the thoughtful, reflective kind – by drowning each other in all this data? Or aren’t we being distracted by all these images that, like fast foods, fill us up to sated exhaustion but have very little nutritional value?

But Some Words do Matter. Some Words are better than Exabytes of Pictures (or Words) and they Persist!

Here are some poignant examples. This is what the great contemporary Scandinavian poet, Tomas Tranströmer (translated by Robin Robertson), wrote about words:

FROM MARCH 1979

Sick of those who come with words, words but no language,

I make my way to the snow-covered island.

Wilderness has no words. The unwritten pages

Stretch out in all directions. 

I come across this line of deer-slots in the snow: a language,

language without words.

And the great 20th century Mexican poet, Octavio Paz (translated by J. M. Cohen), wrote:

CERTAINTY

If the white light of this lamp

is real, and real

the hand that writes,

are the eyes real

that look at what I write?

 

One word follows another.

What I saw vanishes.

I know that I am alive,

and living between parentheses.

Distinctive Numbers – God’s Equation Then and Now – Hey It’s All Just Zeros and Ones

Just like profound and wise words, there are some distinctive numbers (data) that also matter: zero, the imaginary number i, and those irrationals Pi and e. And then there’s Euler’s God’s Equation of centuries back: e^(i2π) = 1. Thus, the “simplest” and most fundamental of all numbers (Numero Uno) is incredibly complex, made up of irrational, transcendental constants that extend to infinity. Now, the more contemporary version of God’s Equation (circa 2007) is the fourth album by the Norwegian progressive metal band Pagan’s Mind – and it contains video clips! But hey, today it’s all just digital data which is, at the end of the day, zeros and ones – the two most fundamental numbers. So why are we all making such a hoopla?
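
For the mathematically inclined, that identity is easy to check numerically. Here is a minimal illustrative Python sketch (just a quick check, nothing more) confirming that e^(i2π) comes out to 1 up to floating-point rounding:

```python
import cmath
import math

# Euler: e^(i*2*pi) should equal 1, up to floating-point rounding error
value = cmath.exp(1j * 2 * math.pi)
print(value)                    # approximately (1+0j), with a tiny imaginary residue
print(abs(value - 1) < 1e-12)   # True
```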

Because we must traverse that Divine Manifold from Data to Information to Insights to Knowledge to Wisdom and perhaps Happiness

Data is plentiful (all the data generated today can’t even be stored!) and, left untamed, is bound to be catastrophic. So we (corporations included) must harmonize all our assets and capabilities (people, process, data, technology, and culture) to navigate through this data onslaught and traverse the Value Train with the help of yet another God’s Equation: this new equation must transform Data to Information to Insights to Knowledge to Wisdom. One recent noteworthy technology asset for this journey to wisdom could be #IBMWatson.

That great wise soul, Mahatma Gandhi, once said: “Happiness is when what you think, what you say, and what you do are in harmony.” So the Happy (and Wise) enterprises of the future in our data-driven world will be those that can act and culturally transform themselves through change and a complete re-think of strategy – just as the IBM client panel repeatedly emphasized. Those were divine words! And they matter! Act on them! The customer is always right!


The Strategic Importance of Technical Computing Software

Beyond sticking processors together, Sticky Technical Computing and Cloud Software can help organizations unlock greater business value through automated integration of Technical Computing assets – Systems and Applications Software. 

Most mornings when I am in Connecticut and the weather is tolerable, I go for a jog or walk in my neighborhood park in the Connecticut Sticks. One recent crisp sunny fall morning, as I was making my usual rounds, I got an email alert indicating that IBM had closed its acquisition of Algorithmics – a financial risk analysis software company that would be integrated into IBM’s Business Analytics division. This, along with the then-recent announcement of IBM’s planned acquisition of Platform Computing (www.ibm.com/deepcomputing), sparked a train of thoughts that stuck with me through the holidays and through my to-and-fro travel of over 15,000 miles to India and back in January 2012. Today is February 25, 2012 – another fine day in Connecticut. I just want to go for a gentle three-mile jog, but I made a personal commitment that I would finish and post this blog today. So here it is, before I go away to the Sticks!

Those of you who have followed High Performance Computing (HPC) and Technical Computing through the past few decades as I have may appreciate these ruminations more. But these are not solely HPC thoughts. They are, I believe, indicators of where value is migrating throughout the IT industry and how solution providers must position themselves to maximize their value capture.

Summarizing Personal Observations on Technical Computing Trends in the last Three Decades – The Applications View 

My first exposure to HPC/Technical Computing was as a Mechanical Engineering senior at the Indian Institute of Technology, Madras in 1980-1981. All students were required to do a project in their last two semesters. The project could be done individually or in groups. Projects required either significant laboratory work (usually in groups) or significant theoretical/computational analysis (usually done individually). Never interested in laboratory work, I decided to work on a computational analysis project in alternate energy. Those were the days of the second major oil crisis, so this was a hot topic!

Simply put, the project was to model flame propagation in a hybrid-fuel (ethanol and gasoline) internal combustion engine using a simple one-dimensional (radial) finite-difference model, study this chemically reacting flow over a range of concentration ratios (ethanol/gasoline : air), and determine the optimal concentration ratio to maximize engine efficiency. Using the computed flame velocity, it was possible to algebraically predict the engine efficiency under typical operating conditions. We used an IBM 370 system, and in those days (1980-1981) these simulations ran in batch mode overnight using punched cards as input. It took an entire semester (about four months) to finish this highly manual computing task, for several reasons:

  1. First, I could run only one job each night: physically going to the computer center, punching the data deck and the associated job control statements, and then looking at the printed output the following morning to see if the job had run to completion. This took many attempts, as inadvertent input errors could not be detected until the next morning.
  2. Second, the computing resources and performance were severely limited. When the job actually began running, it would often not run to completion on the first attempt and would be held in quiescent (wait) mode while the system processed other, higher-priority work. When computing resources became available again, the quiescent job would resume, and this would continue multiple times until the simulation terminated normally. This back and forth often took several days.
  3. Then, we had to verify that the results made engineering sense. This was again a very cumbersome process, as visualization tools were still in their infancy, so the entire process of interpreting the results was very manual and time consuming.
  4. Finally, to determine the optimal concentration ratio to maximize engine efficiency, it was necessary to repeat steps 1-3 over a range of concentration ratios.

By the time the semester ended, I was ready to call it quits. But I still had to type the project report. That was another ordeal. We didn’t have sophisticated word processors that could type Greek letters and equations, create tables, and embed graphs and figures. So this took more time and consumed about half my summer vacation before I graduated in time to receive my Bachelor’s degree. But in retrospect, this drudgery was well worth it.

It makes me constantly appreciate the significant strides made by the IT industry as a whole – dramatically improving the productivity of engineers, scientists, analysts, and other professionals. And innovations in software, particularly applications and middleware, have had the most profound impact.

So where are we today in 2012? The fundamental equations of fluid dynamics are still the same, but the applications benefiting industry and mankind are wide and diverse (for those of you who are mathematically inclined, please see this excellent one-hour video on the nature and value of computational fluid dynamics (CFD): http://www.youtube.com/watch?v=LSxqpaCCPvY).

We also have yet another oil crisis looming ominously. There’s still an urgent business and societal need to explore the viability and efficiency of alternate fuels like ethanol. It’s still a fertile area for R&D. And much of this R&D entails solving the equations of multi-component, chemically reacting, transient, three-dimensional fluid flows in complex geometries. This may sound insurmountably complex computationally.

But in reality, there have been many technical advances that have helped reduce some of the complexity.

  1. The continued exponential improvement in computer performance – at least a billion-fold or more today over 1981 levels – enables timely calculation.
  2. Many computational fluid dynamics (CFD) techniques are sufficiently mature, and in fact there are commercial applications such as ANSYS FLUENT that do an excellent job of modeling the complex physics and come with very sophisticated pre- and post-processing capabilities to improve the engineer’s productivity.
  3. These CFD applications can leverage today’s prevalent Technical Computing hardware architecture – clustered multicore systems – and scale very well.
  4. Finally, the emergence of centralized cloud computing (http://www.cabotpartners.com/Downloads/HPC_Cloud_Engineering_June_2011.pdf) can dramatically improve the economics of computation and reduce entry barriers for small and medium businesses.

One Key Technical Computing Challenge on the Horizon

Today my undergraduate (1981) chemically reacting flow problem can be fully automated and run on a laptop in minutes – perhaps even on an iPad. And this would produce a “good” concentration ratio. But a one-dimensional model may not truly reflect the actual operating conditions. For that we would need today’s three-dimensional, transient CFD capabilities, which could run economically on a standard Technical Computing cluster and produce a more “realistic” result. With integrated pre- and post-processing, engineers’ productivity would be substantially enhanced. This is possible today.
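
To make that claim concrete, here is a toy Python sketch of such a one-dimensional parameter sweep – a Fisher-KPP-style reaction-diffusion stand-in with made-up coefficients, not the actual 1981 combustion model – showing how a semester of manual batch runs collapses into seconds of computation:

```python
import numpy as np

def flame_speed(phi, n=200, dx=1e-3, dt=1e-4, steps=2000):
    """Toy 1-D reaction-diffusion model: propagate a reaction front and
    estimate its speed. phi stands in for the fuel-to-air concentration ratio."""
    T = np.zeros(n)
    T[:5] = 1.0                       # "ignite" the left boundary
    alpha = 4e-3                      # diffusivity (arbitrary units)
    rate = 50.0 * phi * (1.5 - phi)   # made-up reaction-rate dependence on phi
    front = []
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T += dt * (alpha * lap + rate * T * (1.0 - T))  # diffusion + reaction
        front.append(np.argmax(T < 0.5) * dx)           # crude front position
    half = len(front) // 2
    return (front[-1] - front[half]) / (dt * (len(front) - 1 - half))

# Sweep the concentration ratios -- what once took a night per run now takes seconds
ratios = np.linspace(0.5, 1.4, 10)
speeds = [flame_speed(r) for r in ratios]
best = ratios[int(np.argmax(speeds))]
print(f"best concentration ratio (toy model) ~ {best:.2f}")
```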

But what if a company wants to run several of these simulations concurrently and share the results with a broader engineering team – a team that may wish to couple this engine operating information to the drivetrain through the crankshaft using kinematics, and then, using computational structural dynamics and exterior vehicle aerodynamics, model the automobile (chassis, body, engine, etc.) as a complete system to predict its behavior under typical operating conditions? Let’s further assume that crashworthiness and occupant safety analyses are also required.

This system-wide engineering analysis is typically a collaborative and iterative process and requires several applications that must be integrated into a workflow, producing and sharing data. Much of this today is manual, and it is one of today’s major Technical Computing challenges – not just in the manufacturing industry but across most industries that use Technical Computing and leverage data. This is where middleware will provide the “glue” – and believe me, it will stick if it works! And work it will! The Technical Computing provider ecosystem will head in this direction.
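
To illustrate the kind of “glue” I have in mind, here is a hypothetical Python sketch – the step names and numbers are invented placeholders, not any vendor’s actual middleware or solver APIs – of how such a multi-application workflow might be chained so that data flows automatically from one analysis to the next, with several coupled workflows running concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Each step stands in for launching a real solver (CFD, kinematics, FEA, crash);
# in practice these would submit jobs to a cluster scheduler and exchange result files.
def run_cfd(ratio):
    return {"flame_speed": 0.42 * ratio}                  # engine combustion (placeholder)

def run_kinematics(cfd):
    return {"peak_torque": cfd["flame_speed"] * 180}      # couple to the drivetrain

def run_structures(kin):
    return {"max_stress": kin["peak_torque"] * 7.3}       # structural dynamics

def run_crash(struct):
    return {"occupant_risk": struct["max_stress"] / 1e4}  # crashworthiness

def full_vehicle_workflow(ratio):
    """One end-to-end, system-wide analysis: each step consumes the previous step's data."""
    cfd = run_cfd(ratio)
    kin = run_kinematics(cfd)
    struct = run_structures(kin)
    crash = run_crash(struct)
    return {"ratio": ratio, **struct, **crash}

# The middleware's real job: run many coupled workflows concurrently and
# hand data between steps without manual intervention.
if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(full_vehicle_workflow, [0.6, 0.8, 1.0, 1.2]):
            print(result)
```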

Circling Back to IBM’s Acquisition of Algorithmics and Platform Computing

With the recent Algorithmics and Platform acquisitions, IBM has recognized the strategic importance of software and middleware in increasing revenues and margins in Technical Computing – not just for IBM but also for value-added resellers worldwide, who could develop higher-margin implementation and customization services based on these strategic software assets. IBM and its application software partners can give these channels a significant competitive advantage to expand reach and penetration with small and medium businesses that are increasingly using Technical Computing. When coupled with other middleware such as GPFS and Tivoli Storage Manager, and with the anticipated growth of private clouds for Technical Computing, expect IBM’s ecosystem to enhance its value capture. And expect clients to achieve faster time to value!


No Apology for High Performance Computing (HPC)

A few months back, at one of my regular monthly CTO club gatherings here in Connecticut, an articulate speaker discussed the top three IT trends that are fundamentally poised to transform businesses and society at large. The speaker eloquently discussed the following three trends:

  • Big Data and Analytics
  • Cloud Computing
  • Mobile Computing

I do agree that these are indeed the top three IT trends for the near future – each at a different stage of adoption, maturity and growth. But they are not just independent trends. In fact, they are overlapping, mutually reinforcing trends in today’s interconnected world.

However, while discussing big data and analytics, the speaker made it a point to dismiss HPC as an exotic niche largely of interest to – and, by implication, restricted to – scientists, engineers and other “non-mainstream” analysts who demand “thousands” of processors for their esoteric work in such diverse fields as proteomics, weather/climate prediction, and other scientific endeavors. This immediately made me raise my hand and object to such ill-advised pigeon-holing of HPC practitioners – architects, designers, software engineers, mathematicians, scientists, and engineers.

I am guilty of being an HPC bigot. I think these practitioners are some of the most pioneering and innovative folks in the global IT community. I indicated to the speaker (and the audience) that, because of the pioneering and path-breaking pursuits of the HPC community, which is constantly pushing the envelope in IT, the IT community at large has benefited from such mainstream (today) mega IT innovations as Open Source, cluster/grid computing, and in fact even the Internet. Many of today’s mainstream Internet technologies emanated from CERN and NCSA – both organizations that continue to push the envelope in HPC today. Even modern data centers, with large clusters and farms of x86 and other industry-standard processors, owe their meteoric rise to the tireless efforts of HPC practitioners. As early adopters, these HPC practitioners painstakingly devoted their collective energies to building, deploying, and using the early HPC cluster and parallel systems – servers, storage, networks, the software stack and applications – constantly improving their reliability and ease of use. In fact, these systems power most of today’s businesses and organizations globally, whether in the cloud or in some secret basement. Big data analytics, cloud computing, and even mobile/social computing (Facebook and Twitter have gigantic data centers) are trends that sit on the shoulders of the HPC community!

By IT standards, the HPC community is relatively small – about 15,000 or so practitioners attend the annual Supercomputing event. This year’s event is in Seattle and starts on November 12. But HPC practitioners have very broad shoulders, very keen and incisive minds, and a passionate demeanor not unlike pure mathematicians. Godfrey H. Hardy – a famous 20th-century British mathematician – wrote A Mathematician’s Apology, defending the arcane and esoteric art and science of pure mathematics. But we as HPC practitioners need no such Apology! We refuse to be castigated as irrelevant to IT and big IT trends. We are proud to practice our art, science, and engineering. And we have the grit, muscle and determination to continue to ride in front of big IT trends!

I have rambled enough! I wanted to get this “off my chest” over these last few months. But with my dawn-to-dusk day job of thinking, analyzing, writing and creating content on big IT trends for my clients, and with my family and personal commitments, I have had little time till this afternoon. So I decided to blog before getting bogged down with yet another commitment. It’s therapeutic for me to blog about the importance and relevance of HPC for mainstream IT. I know I could write a tome on this subject. But lest my tome go with me unwritten to a tomb, an unapologetic blog will do for now.

By the way, G. H. Hardy’s Apology – an all-time favorite tome of mine – is not really an apology. It’s one passionate story explaining what pure mathematicians do and why they do it. We need to write such a tome for HPC to educate the broader and vaster IT community. But for now this unapologetic blog will do. Enjoy. It’s dusk in Connecticut. The pen must come off the paper. Or should I say the finger off the keyboard? Adios.


Why engineering needs high performance cloud solutions

The design and engineering function in companies is in crisis. The engineering community must deliver designs better, faster and cheaper; design high-quality products with fewer designers across a distributed ecosystem of partners; and respond to increased CIO cost control of engineering IT.

And they must do all of this in an operational reality of siloed data centers tied to projects and locations; limited or poor operational insight; underutilized resources that still miss peak demands; and designers tied to local deskside workstations with limited collaboration.

The way to overcome these issues is to transform siloed environments into shared engineering clouds – private and private-hosted initially, transitioning to public over time. To achieve this, engineering functions require interactive and batch remote access; shared and centralized engineering IT; and an integrated business and technical environment.

This will unlock designers’ skills from any one location; provide greater access to compute and storage resources; align resources with project priorities; and realize improved operational efficiency and competitive cost savings.
