‘Big data’ and ‘Tweet’ enter the Oxford English Dictionary..!!

The Oxford English Dictionary becomes part of the social media technology revolution.


The Oxford English Dictionary has a rule that “a new word needs to be current for ten years before consideration for inclusion”.

Chief Editor John Simpson announced in a blog post that the OED is breaking this rule to keep pace with the tech-savvy race, adding the words ‘big data’ and ‘tweet’ to the dictionary.

From the quarterly update of the Oxford English Dictionary:

The word “tweet,” appearing both as a noun and a verb, was added to the dictionary.


The word ‘big data’ was also added to the dictionary.


The OED also got on board with other tech lingo: the words “crowdsourcing,” “e-reader,” “mouseover,” “stream,” “redirect,” “flash mob,” “3D printer” and “live-blogging” made their entry into the century-old dictionary.

Workshops and Conferences

A Sip of Coffee Script

CoffeeScript is essentially just a syntactic rewrite of JavaScript. The core language itself stays the same, with small semantic enhancements. The syntax is modified, modeled after Python and Ruby.

CoffeeScript compiles down to raw JS, and hence outputs clean JavaScript that not only follows best practices but is also eminently readable. This means that you don’t have to worry about compatibility down the line.
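To get a feel for the compilation step, here is a tiny sketch of a concise function declaration; the CoffeeScript source is in the comment, and the JavaScript below is representative of (not verbatim from) the compiler’s output:

```javascript
// CoffeeScript source:
//   square = (x) -> x * x
// compiles to JavaScript along these lines:
var square = function(x) {
  return x * x;
};

console.log(square(4)); // 16
```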

Under the web development umbrella, there is already a fair share of this ideology. HAML is a new way of writing HTML, while SASS does the same for CSS. All of them clean up the structure and syntax of their languages, making them easier to work with and thus boosting our productivity.

Pros and Cons


Pros:

  • Python-style whitespace
  • Ruby-style lightweight syntax
  • Concise function declarations
  • JSLint approved
  • Class-based inheritance

There are, of course, numerous other points including semantic and syntactic enhancements.


Cons:

  • Slight learning curve involved
  • Deployment, depending on your route, may be a chore
  • You’ll need a basic knowledge of JS for debugging purposes; you can’t directly start here, naturally.

When somebody hears about CoffeeScript, the first question that comes to mind is “Should I learn CoffeeScript?”, and the developer of CoffeeScript answers that question this way:

Jeremy Ashkenas

Yes. CoffeeScript is not an entirely new and strange language. It exists to allow “reasonably solid” JavaScript developers to write the same code they were going to write in the first place, in a more readable and fluent way. The basic idea is to write what you mean, instead of writing within the limits of historical accident. For example, if I want to loop over every item in a list, in CoffeeScript, I can write what I mean:

for item in list
  process item

Whereas in JavaScript, I partially obscure my intention, by writing:

for (var i = 0, l = list.length; i < l; i++) {
  var item = list[i];
  process(item);
}
Workshops and Conferences

Barcamp Bangalore 2012

Barcamp Bangalore – “where ideas meet” – the caption says it all. This post is specially to make my frenz envy me, cos none of those who had agreed to join me turned up… 😉 😛 I attended Barcamp Bangalore for the first time on 25 August at SAP Labs. I made it to the venue by 9 a.m. and was feeling so low as I didn’t know anyone there, but from the moment I filled the form till 6.15 in the evening I had a great time learning new techie stuff and making wonderful friends.. and yea, got a laptop bag too as a goodie.. 🙂 🙂

Barcamp is one such event, offering a platform for a wide range of techies to share their experiences with any new technology and innovation. One thing that sets BCB apart from other tech events is that the stage is open for anybody to give a talk, but the entire schedule is made on the day of the event on a first-come-first-served basis, so only the early comers get a chance to present… 🙂

I was so impressed by the organizers. They kept things going so well. BCB 2012 introduced two great ideas this time.

#1. Techclash – a one-hour slot reserved in BCB12 for a series of short technology demos, to bring smart tech solutions into the limelight.

#2. Electronic Scheduler – an algorithm implemented by one of the BCB 2011 attendees to schedule the sessions. The SAP campus helped a lot here, displaying the schedule all around on so many screens (LCD projectors, LCD TVs, even laptops).

There were 6 tracks running in parallel, and each session was so informative. It was hard to choose which one to attend; I attended the following sessions:

  1. A Sip of CoffeeScript
  2. Google APIs and Deployment on Google App Engine
  3. NoSQL – Losing relationships to gain speed
  4. Techclash
  5. Jactor – Actor-based Programming
  6. Jekyll
  7. Bring Clouds Together
  8. OpenStack cloud software

I will share session details in coming posts.

Cloud Technologies, Workshops and Conferences

Cloud Computing – Part 2 #cloud #virtualization

I had been to Barcamp Bangalore 2012 yesterday and got an opportunity to meet OpenStack developers. One of them gave a wonderful analogy about cloud computing, and I just wanted to share the same with you all.

Suppose that you have a desktop with a certain configuration. You install an operating system; you can boot only one operating system at a time, and moreover your operating system consumes only a part of your resources while the rest is just left unused.

Now consider the concept of a virtual machine. For the same configuration as above, we put a layer (the hypervisor) on top, such that it totally hides the underlying hardware. With the help of this layer you can install any number of operating systems, irrespective of the type of OS, and you can boot all of them simultaneously too. Not just that, you can also specify how much RAM and hard disk space you want to allocate to a particular OS. Is it not cool..?? 🙂 Yes, it is a super cool feature, implemented by VMware Player and VirtualBox. Click here for Ubuntu installation using VMware Player.

Hypervisors provide the means to logically divide a single physical server or blade, allowing multiple operating systems to run securely on the same CPU and increasing CPU utilization. Some of the hypervisors on the market are KVM (backed by Red Hat), Xen (used by Amazon), ESXi (from VMware) and Hyper-V (from Microsoft).

Where hardware partitioning allows for hardware consolidation, hypervisors allow for flexibility in how virtual resources are defined and managed, making them the more commonly used system-consolidation solution.

But even in this design we might expect some unused resources in each of the OSes, and now comes the concept of CLOUD, to utilize the remaining resources too. Ah.. developers grow greedy, right.. 😛

In a cloud you set up a network of machines with varying configurations: one machine might have 2TB of storage capacity, another might have 32GB of RAM, and a few might have very low configurations too. The providers simply combine all the resources, and when we ask for a cloud service they give us a slice of those resources whose size depends on the customer’s need. This process is called spawning of instances, and all of these are accessed via the internet.

An instance is nothing but a set of resources. Cloud providers spawn instances whose sizes are decided just like T-shirt sizes (small, medium, large, XL, XXL, etc.), and they name them micro instances, small instances, large instances, and so on. Here we also have the concept of images: an image is nothing but a bootable image of some operating system. Whenever a customer chooses a particular type of image (Ubuntu, Solaris, Fedora, Windows, etc.), the appropriate image is loaded accordingly.
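As a rough sketch of flavors and spawning, here is a toy model in JavaScript. The flavor names and sizes are made up for illustration; every provider defines its own catalog.

```javascript
// Made-up instance flavors (real clouds publish their own catalogs).
var FLAVORS = {
  micro: { vcpus: 1, ramGb: 1,  diskGb: 10 },
  small: { vcpus: 2, ramGb: 4,  diskGb: 40 },
  large: { vcpus: 8, ramGb: 32, diskGb: 200 }
};

// The cloud pools the combined resources of all its machines.
function Pool(vcpus, ramGb, diskGb) {
  this.free = { vcpus: vcpus, ramGb: ramGb, diskGb: diskGb };
}

// "Spawning an instance" carves a flavor-sized slice out of the pool
// and boots the requested image (ubuntu, fedora, ...) on it.
Pool.prototype.spawn = function(flavor, image) {
  var need = FLAVORS[flavor];
  for (var k in need) {
    if (this.free[k] < need[k]) throw new Error("not enough free resources");
  }
  for (var k in need) this.free[k] -= need[k];
  return { flavor: flavor, image: image, ramGb: need.ramGb };
};

var pool = new Pool(64, 256, 4000);
var vm = pool.spawn("small", "ubuntu");
console.log(vm.ramGb); // 4
```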

Workshops and Conferences

Cloud Computing – Part 1

Cloud computing. A catchy phrase, isn’t it..?? It simply refers to accessing applications via the Internet instead of having them installed on your computer. So the funda is all about remote servers.

Without even being aware of it, most of us have likely been using some cloud-enabled programs already. Web-based email is the best use-case scenario. If you have, for example, a Hotmail or Gmail account, you’re using a cloud-based program. These programs store nothing on your computer. Instead you log onto their servers, enter your credentials, and read and send emails.

One good thing about these programs is that they (with a few exceptions) don’t reside on your computer. That means they aren’t eating up disk space. Also, you don’t have to worry about downloading updates; the program is always current. Most importantly, though, true cloud apps don’t force you to store your data locally on your computer. Instead, you save your work on their servers. This allows you to access your work from anywhere, anytime, as long as you have an internet connection.

Cloud Service Model

Platform as a Service (PaaS) is the future of the Cloud! In 2011 we witnessed many acquisitions and announcements, including Heroku by Salesforce and CloudFoundry by VMware. In fact, the PaaS space is broadly divided among the .NET, Java and LAMP platforms. Though there is no serious competition to Microsoft Windows Azure in the form of a .NET-based PaaS, there is huge competition among the Java PaaS players, including Google App Engine, VMware CloudFoundry, Red Hat OpenShift and Heroku. Amazon is also vying for this space through its Elastic Beanstalk offering, and Oracle has announced its own Java PaaS. So Java developers have a wide range of PaaS offerings to choose from. Interestingly, the same set of players is adding support for PHP, Python, Ruby and Node.js. For example, Heroku has added support for Ruby, Node.js, Clojure, Python and Scala. Same is the case with CloudFoundry, which claims that it can run PHP (through AppFog), Ruby, Node.js and Scala! Microsoft also wants developers to believe that they can run their Java and PHP applications on Windows Azure.

Infrastructure as a Service (IaaS) is the more useful side of cloud computing for big companies. They won’t need to install a copy of Windows on each of their nodes or configure servers themselves; in fact, they won’t need physical servers at all. They order servers and pre-configured networks online, and they access them online. This implies huge savings on network set-up costs for the company (and a wage loss for present-day network admins). Examples: Amazon Web Services, Cloudo, Free Zoho, eyeOS.

Software as a Service (SaaS) offerings are real pieces of software that you access directly through the internet; no, you don’t need to install anything on your computer. In a few years (strike that: months) you will not be ‘installing’ Microsoft Office, antivirus software, media players or anything else on your computer. You will simply open your browser, go to the cloud service vendor and run the application directly! We have amazing examples of this service, like Google Chrome apps, Android & iPhone apps, etc.

Ah.. Too much theory..Grr.. One last Gyaan

Types of cloud computing

1. Public cloud : A public cloud can be accessed by anyone; it is what most people mean by ‘cloud computing’. Examples: Amazon Web Services, Google App Engine and Microsoft Azure.

2. Private cloud : A private cloud is exclusively meant for a particular organisation and cannot be accessed by anyone else. It is thus a data centre that provides hosted services to a limited set of users. Private clouds are more secure but more expensive than public clouds: you have to purchase the storage capacity and services required.

3. Hybrid cloud : A hybrid cloud links the public and private clouds; for example, the database sits on the private cloud while the applications are managed on the public one. This is an optimal way to stay secure and, at the same time, get the maximum resources available. It is considered a fault-tolerant architecture, since any failure in private cloud services is compensated by public cloud services.

4. Community cloud : Organizations from a specific community share information on the same cloud, managed by themselves or a third party and hosted by a service provider.

This was more of a theoretical explanation; for the geeks out there, I have a more technical take rolling in next.. 😉


What is the point of Hadoop…???

Whenever I have a chitchat or a formal talk with a BI or analytics person, the most widely asked question is:

‘What is the point of Hadoop?’


It is a more fundamental question than ‘what analytic workloads is Hadoop used for’ and really gets to the heart of uncovering why businesses are deploying or considering deploying Apache Hadoop. There are three core roles:

  • Big data storage: Hadoop as a system for storing large, unstructured data sets
  • Big data integration: Hadoop as a data ingestion/ETL layer
  • Big data analytics: Hadoop as a platform for new exploratory analytic applications
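To make the analytics role concrete, here is a minimal word count in the MapReduce style that Hadoop popularized, simulated locally in plain JavaScript; a real Hadoop job would distribute the map, shuffle and reduce phases across a cluster.

```javascript
// Map phase: emit a [word, 1] pair for every word in an input line.
function mapper(line) {
  return line.toLowerCase().split(/\s+/).filter(Boolean)
             .map(function(word) { return [word, 1]; });
}

// Reduce phase: sum all the 1s emitted for one word.
function reducer(counts) {
  return counts.reduce(function(a, b) { return a + b; }, 0);
}

function runJob(lines) {
  var groups = {};                       // shuffle phase: group pairs by key
  lines.forEach(function(line) {
    mapper(line).forEach(function(pair) {
      (groups[pair[0]] = groups[pair[0]] || []).push(pair[1]);
    });
  });
  var result = {};
  for (var word in groups) result[word] = reducer(groups[word]);
  return result;
}

console.log(runJob(["big data big deal", "data data"]));
// { big: 2, data: 3, deal: 1 }
```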

While much of the attention around Apache Hadoop use-cases focuses on the innovative analytic applications it has enabled and on high-profile adoption at Web properties, initial adoption at traditional enterprises and later adopters is more likely triggered by the first two roles. Indeed, there are some good examples of these three roles representing an adoption continuum.

We also see the multiple roles playing out at a vendor level, with regards to strategies for Hadoop-related products. Oracle’s Big Data Appliance, for example, is focused very specifically on Apache Hadoop as a pre-processing layer for data to be analyzed in Oracle Database.

While Oracle focuses on Hadoop’s ETL role, it is no surprise that the other major incumbent vendors showing interest in Hadoop can be grouped into three main areas:

  • Storage vendors
  • Existing database/integration vendors
  • Business intelligence/analytic vendors

This is just a small example I picked to showcase how the major DATA players are slowly adopting this new technology, harnessing its capabilities to retain their position among the major players.