Sunday, 27 January 2013

Facebook Graph Search

As Facebook continues to roll out its new feature, more and more users are getting their hands on this new search tool. Facebook stalking is now limited only by a user's creativity with search queries and how they use this feature. Many people are scared by the thought that their stuff will go public, but it should be very clear that no private information is exposed by Graph Search. People are already sharing this stuff on Facebook; it is just that it is now easily accessible. Let's say we want to search for people who like cricket. One could always have visited pages related to cricket and looked at the people who liked them. With the introduction of Graph Search, we don't have to make that extra effort; we can just search. So Graph Search only searches information which is already public. It does not breach anyone's privacy. If anything, anyone who has a Facebook account and has not locked down his or her privacy settings is simply being ignorant of how public that information already is. Here is what Facebook says:
"With Graph Search, you can look up anything shared with you on Facebook, and others can find stuff you've shared with them, including content set to Public. That means different people see different results."

Facebook's new Graph Search feature owes part of its existence to Bing. Facebook and Microsoft have worked together for a number of years now, with Microsoft investing $240 million in Facebook back in 2007. Now, when users search for people, places and things using Facebook's search bar, the results are shown alongside web results courtesy of Bing.

Of course, the same was the case when Google came along and we had to adjust to the fact that "the internet never forgets" - information that you posted online years ago can come back to haunt you. But Facebook has taken that to a new level of intimacy, because much of the stuff here seems so ephemeral: a like, a follow - things that you might not even remember doing.

Google is looking on in jealousy, comparing the number of Google+ users to Facebook's gargantuan total and shedding a few tears. The reigning champion of search engines did its best to socialize search with Google+, but Bing managed to make search more social by tapping into Facebook. No matter how much Google denies it, seeing how many of your friends "Like" something on the web still carries much more weight than a "+1".


Official Facebook Graph Search Introduction page : http://newsroom.fb.com/News/562/Introducing-Graph-Search-Beta


Sunday, 6 January 2013

Allocation and De-allocation - Java Vs C/C++



Java allocations are cheaper than C/C++ allocations, an assertion that many developers have difficulty even imagining. The 1.0 and 1.1 JDKs used a mark-sweep garbage collector, which did compaction on only some collections. The result was that the heap might be fragmented after a garbage collection, which made it comparable to C/C++, where the allocator uses heuristics such as first-fit or best-fit to manage the free heap space.
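
To make the first-fit idea concrete, here is a minimal sketch that simulates a C-style free list in Java. The `FreeBlock` class and `firstFit` method are invented names for illustration, not real JDK or libc APIs:

// A toy simulation of a C-style free list, not a real allocator.
import java.util.ArrayList;
import java.util.List;

public class FirstFitDemo {

    // One free chunk of memory: a start offset and a size in bytes.
    static class FreeBlock {
        int offset, size;
        FreeBlock(int offset, int size) { this.offset = offset; this.size = size; }
    }

    // First-fit: walk the list and take the first block that is big enough.
    static Integer firstFit(List<FreeBlock> freeList, int request) {
        for (FreeBlock b : freeList) {
            if (b.size >= request) {
                int addr = b.offset;
                b.offset += request;   // shrink the block we carved from
                b.size   -= request;
                return addr;
            }
        }
        return null;                   // no block large enough
    }

    public static void main(String[] args) {
        // A fragmented heap: three free holes of different sizes.
        List<FreeBlock> freeList = new ArrayList<>();
        freeList.add(new FreeBlock(0, 16));
        freeList.add(new FreeBlock(64, 128));
        freeList.add(new FreeBlock(512, 32));

        System.out.println("allocated at: " + firstFit(freeList, 100)); // 64
        System.out.println("allocated at: " + firstFit(freeList, 8));   // 0
    }
}

Notice that every allocation has to scan the list, and that scan gets longer as the heap gets more fragmented.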

The HotSpot JVM introduced a generational collector, where the entire heap is divided into generations. Garbage collection is more frequent in the young generation than in the older generations. The young generation uses a copying collector, so its free space is always contiguous and a new object can be allocated with a simple pointer addition. This makes object allocation in Java significantly cheaper than in C. In C, the memory allocator uses first-fit or best-fit: the first-fit heuristic walks through the free list to find the first chunk that can satisfy the request, while best-fit walks through the heap to find the best available chunk that meets the requirement. The HotSpot allocator, in contrast, does not have to walk through the heap to allocate a chunk.
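
By contrast, allocation from a contiguous region is just a pointer bump. Here is a minimal sketch of that idea; the `BumpAllocator` class is made up for the example and only mimics what HotSpot's young-generation allocation conceptually does:

// Toy illustration of bump-pointer allocation in a contiguous region.
public class BumpAllocator {
    private final int capacity;
    private int top = 0;            // everything below 'top' is allocated

    public BumpAllocator(int capacity) { this.capacity = capacity; }

    // Allocate 'size' bytes: no searching, just check for space and bump.
    public int allocate(int size) {
        if (top + size > capacity) {
            throw new OutOfMemoryError("region exhausted, a GC would run here");
        }
        int addr = top;
        top += size;                // the "simple pointer addition" from the text
        return addr;
    }

    public static void main(String[] args) {
        BumpAllocator region = new BumpAllocator(1024);
        System.out.println(region.allocate(32));  // 0
        System.out.println(region.allocate(64));  // 32
    }
}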

Let's talk about deallocation now. Copying collectors do not visit dead objects, so in applications with a large number of temporary objects, garbage collection is quite cheap: the collector just copies the live objects to a survivor space and reclaims the entire young generation in one fell swoop. As a result, both the allocation and deallocation costs per object went down in JDK 1.2. Sun puts the cost of an allocation at roughly ten machine instructions.
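
You can watch this happen with a small program that churns through short-lived objects and GC logging turned on (for example `java -verbose:gc TempObjects`). The class name and loop sizes below are arbitrary; the point is that almost all of these objects are already dead by the next minor collection, so the collector never touches them:

// Allocates a large number of short-lived objects. A copying collector
// only copies the few survivors; the dead temporaries are reclaimed
// wholesale when the young generation is reused.
public class TempObjects {
    public static void main(String[] args) {
        long checksum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            // A temporary object that dies immediately after use.
            int[] scratch = new int[16];
            scratch[0] = i;
            checksum += scratch[0];
        }
        System.out.println("checksum = " + checksum);
        // Run with:  java -verbose:gc TempObjects
        // to watch the minor collections that reclaim these temporaries.
    }
}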

The JIT compiler performs additional optimizations that can significantly reduce the cost of allocation: it can allocate an object on the stack instead of on the heap. This is a very powerful enhancement that gives Java some of the efficiency of C. In Java source code we can't create objects on the stack, but using the stack saves a lot of allocation and deallocation time, so objects with a very short lifetime can be created on the stack instead of in the heap, and the JIT compiler does this automatically through escape analysis. Allocation costs in Java will keep going down with more optimization at the compiler level, so there is no reason to compromise maintainability for the sake of avoiding a few extra allocations.
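
As a rough illustration, the small object below never escapes the method that creates it, so HotSpot's escape analysis (`-XX:+DoEscapeAnalysis`, on by default in modern JVMs) can replace the heap allocation with plain locals. The `Point` class is just an example:

// A small object that never leaves the method it is created in, making it
// a candidate for scalar replacement via escape analysis.
public class EscapeDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // 'p' never escapes distanceSquared, so the JIT may never allocate it.
    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += distanceSquared(i, i + 1);
        }
        System.out.println(total);
    }
}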

One question that comes up is: when a lot of threads are calling the allocator for chunks of memory in the heap, isn't that a scalability issue? The answer is no, because JVMs use thread-local heaps. Each thread is given a thread-local allocation area on the order of 1K, and when a thread asks for some chunk of memory, the JVM tries to satisfy the request from that thread-local area. Only if the thread-local area can't satisfy the demand does the JVM fall back to the global allocator, and that case is rare in most applications. So in most cases we don't lock the global heap; each thread uses its own thread-local heap. Hence, allocation is no longer a scalability issue.
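
A rough way to convince yourself of this is to let several threads allocate heavily in parallel. With thread-local allocation buffers enabled (the default, `-XX:+UseTLAB`) they hardly ever contend on the shared heap; the thread count and iteration counts below are arbitrary:

// Several threads allocating heavily in parallel. With TLABs, each thread
// bumps a pointer inside its own buffer and only falls back to the shared
// heap when that buffer runs out, so there is almost no lock contention.
public class TlabDemo {
    public static void main(String[] args) throws InterruptedException {
        int nThreads = 4;                       // arbitrary thread count
        Thread[] threads = new Thread[nThreads];
        long start = System.nanoTime();

        for (int t = 0; t < nThreads; t++) {
            threads[t] = new Thread(() -> {
                long sum = 0;
                for (int i = 0; i < 5_000_000; i++) {
                    byte[] buf = new byte[64];  // small, short-lived allocation
                    sum += buf.length;
                }
                System.out.println(Thread.currentThread().getName() + ": " + sum);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();

        System.out.printf("elapsed: %d ms%n", (System.nanoTime() - start) / 1_000_000);
        // Compare with:  java -XX:-UseTLAB TlabDemo   (TLABs disabled)
    }
}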