SEO, PPC Go Together Like Peanut Butter & Jelly

Herdon Hasty compares SEO and PPC to peanut butter and jelly: SEO is the peanut butter holding the campaign together, and PPC is the jelly sweetening the deal. He suggests combining the two search advertising methods because both rely on the same concepts of headline, body copy, and landing page, and target the same customers.

So, Hasty steps through the process of selecting keywords, testing and applying processes, and measuring and adjusting the campaign as criteria change. As an SEO expert, he views SEO as the “answer to everything from ‘how do I drive more traffic in a recession?’ to ‘what condiments are best to offer at a business-casual dinner party?’”

Read the whole story at Search Engine Watch

The new search engine Yebol is good at what it does

There is a new search engine in town called Yebol, one that manages to combine various search features in a way that sets it apart from its competitors.

Yebol has developed a technology that combines the advantages of a traditional Google-like search engine (spidering the web and applying an algorithm that determines what’s “best”) with a human touch (grouping high-quality web sites into different categories that may be relevant to the query).

For more information, follow the link:

Raksha Bandhan

Raksha Bandhan, or Rakhi as it is called, is one of the most awaited festivals in India. Raksha Bandhan is a celebration of the relationship shared between a brother and a sister. Rakhi is a thoughtful and unique celebration of its kind, celebrated with a lot of fanfare and excitement. Rakhi brings together people of all sects and is much appreciated for the unifying element it carries with it. A sister ties a decorative thread called a ‘Rakhi’ on the wrist of her brother, symbolising her love and affection for him. The brother in return vows to protect his sister at all times and also buys her attractive gifts.

When is Raksha Bandhan?

Raksha Bandhan is celebrated on the full-moon day of the month of Shraavan (as per the Hindu calendar), which falls in July or August. Since Rakhi is celebrated as per the Hindu calendar, the date usually varies from year to year. Check out the following dates:

In 2009- Raksha Bandhan will be celebrated on Wednesday, 5th of August.

In 2010- Raksha Bandhan will be celebrated on Tuesday, 24th of August.

What is Rakhi Ceremony?
The beautiful bond shared between a brother and sister is what gets celebrated on the day of Rakhi. The festival of Rakhi is eagerly awaited, and a lot of preparation goes into ensuring that the day is a memorable one for both brother and sister. The Rakhi, the sacred thread tied to the brother’s wrist, is selected by the sister, who ensures she has chosen the brightest and best-designed Rakhi for her brother. Sweets are also chosen, and along with the Rakhi, vermilion and a few grains of rice form part of the Rakhi pooja thali. On this festive occasion gifts are exchanged, and special Rakhi recipes like Ghevar, Vermicelli Kheer, Malpua, Kesar Burfi, Pista Sandesh and Rava Laddoo are prepared.

What is Raksha Bandhan?
Raksha Bandhan is a festival which strengthens and celebrates the precious bond shared between a brother and a sister. Also known as Rakhi, it is a day when this relationship is celebrated amidst beautiful Indian traditions and customs.

Rakhi Celebrations in India
Many different traditions are followed on the day of Raksha Bandhan; some of the special ones include:

» Rakhi in India (West): Nariyal (Coconut) Purnima is celebrated in the west on the day of Raksha Bandhan. Coconuts are thrown into the sea as a mark of respect and as an offering to Lord Varuna.
» Rakhi in India (South): In the southern part of India the day of Raksha Bandhan is celebrated as Avani Avittam. This day is auspicious especially for the Brahmin community as they change their religious symbol of janeyu (holy thread) amidst chanting of mantras.
» Rakhi in India (North): Rakhi Purnima also called Kajri Navami and Kajri Purnima is a celebration in North India when wheat is sown. Goddess Bhagwati is worshiped and the farmers seek her blessings for a good crop.
» Rakhi in India (East): Rakhi ceremony was initiated long back in 1905 by Rabindranath Tagore in Shanti Niketan and the custom is still followed religiously by the students of Shanti Niketan.

Importance of Raksha Bandhan Festival
Almost all the festivals celebrated in India act as a unifying factor in bringing together the people of India. Raksha Bandhan, or Rakhi, is no exception, as this beautiful festival also binds people together in a display of oneness. Raksha Bandhan celebrates a very precious relationship shared between a brother and a sister, and the day has gained tremendous importance in India over time. Rakhi signifies a bond of love and care between a brother and his sister and contributes to the social harmony of India. One can clearly witness people celebrating and involving themselves in the festival of Rakhi across man-made barriers like religion, caste, and color, reaching out to each other. This display of oneness is what makes Raksha Bandhan a special and important festival of India.

Different Types of Rakhi Threads
Keeping up with the times, the Rakhi thread has become more fashionable and trendy. It flaunts the amalgamation of tradition and the modern lifestyle of people. Modern rakhi is available in different shapes, sizes and materials. It can even be made of gold set with diamonds. In the Indian market, a few interesting varieties of rakhi are the following:

» Beads Rakhi
» Cartoon Rakhi, Toy Rakhi
» Currency Note/Coin Rakhi
» Floral Rakhi
» Gold-Silver coated Rakhi
» Musical Rakhi
» Divine Rakhi – Ram Rakhi, Shree-Om Rakhi
» Resham Rakhi
» Sandalwood Rakhi
» Bhaiya-Bhabhi Rakhi

Different Rakhi Thalis

The beautifully decorated Rakhi thali gives an impression of your love and care for your brother. The thali contains a diya, tika or roli, rice, betel leaves, the rakhi thread, flowers and incense sticks. The thali is made of silver, brass or simply steel. Sweets sometimes add to the decor of the thali. A few types of Rakhi thali are cited below:

» Floral Thali
» Sweets Rakhi Thali
» Painted Thali
» Choco-toffee Thali
» Roli-turmeric Thali

The History of Raksha Bandhan Festival
The Tale of Lord Bali and Goddess Laxmi
The king of the demons, Bali, was a great devotee of Lord Vishnu. One day Bali approached Lord Vishnu to safeguard his kingdom. Lord Vishnu took up this task and decided to leave his heavenly home, but Goddess Laxmi did not want her husband to leave. She reached Bali’s house disguised as a Brahmin woman and asked for shelter.

On Shravan Purnima day, Laxmi Ji tied a sacred thread on the wrist of King Bali and revealed her purpose for being there. Touched by Laxmi Ji’s concern for her family, King Bali requested Lord Vishnu to return home with her.

The Rakhi festival is therefore also called ‘Baleva’, signifying the devotion of King Bali to Lord Vishnu.

A Mahabharata Tale
Before the battle of Mahabharata, Lord Krishna told Yudhisthir, elder Pandava brother, to perform rakhi ceremony which would act as a shield for him and his army. Draupadi, wife of the Pandavas, tied a thread on Lord Krishna’s wrist, seeking his blessings for her husbands.

King Porus and Alexander’s Wife
Another rakhi tale comes from the battle between Alexander, the Greek king, and Porus, the Hindu king. Alexander’s wife sent a sacred thread to Porus, asking him not to harm her husband in battle. In accordance with Hindu traditions, Porus gave full respect to the rakhi. On the battlefield, when Porus was about to deliver a final blow to Alexander, he saw the rakhi on his own hand and restrained himself from attacking Alexander personally.

Ways of Sending Rakhis
The Rakshabandhan festival is all about love, care and righteousness. Tying a frail Rakhi thread, one considered stronger than iron chains, has been the tradition of this festival since time immemorial. Even today the customs and traditions are the same, but the way of celebrating the Rakhi festival has changed. Different ways are used to express the emotions attached to the relationship.

Children Also Handle Multitasking Effectively

Multitasking

Children also perform multitasking very effectively. Starting at the age of 7, they are assigned the task of handling 8 to 9 subjects, each completely different from the others. The history of Greek culture is completely different from the science of Newton and Charles Darwin. Mathematical formulas are complicated, and Shakespeare’s phrases are difficult to learn. But children do not request teachers to remove one subject, or a couple of them, from their syllabus. Instead, they take it as a challenge and perform extremely well in all the subjects within the given time.

Time Management

Children utilize their time and complete the work assigned to them as per the advice given by their teachers. They know the priority of each subject and work on their weaker subjects to improve their overall percentage in academics. Some genius students plan for a month, divide their time with a weekly schedule, and even allot some time for sports and video games.

Meeting the Deadlines

Every child is treated equally in school, as all are given the same time for the examination. This is the deadline to cover all the subjects, and the deadline is always fixed, with no relaxation. Children are aware of the time period and utilize it effectively. Past experience with slip tests and unit tests helps them analyze the time taken for each subject. This helps a lot in preparing for the final exams.

Targets Achieved

Every subject has certain targets, such as 35 marks, 40 marks, etc. Some reach the targets and are happy, but some are not satisfied even after getting 80 marks. Enthusiasm is high in students who try to achieve more than 80% or 90% in every exam they write. Even though teachers fix targets, genius students fix their own and study to reach their goals.

-Syed Nouman
Paid Search Engine Marketing Executive

Scaling your J2EE Applications – Wang Yu

If an application is useful, its network of users will at some point grow incredibly fast. As more and more mission-critical applications now run on Java EE, many Java developers care about scalability issues. However, most popular Web 2.0 sites are built with scripting languages, and many voices doubt the scalability of Java applications. In this article, Wang Yu takes real-world cases as examples to explain how to scale Java applications, based on his experience with laboratory projects, bringing together practice, science, algorithms, frameworks, and lessons from failed projects to help readers build highly scalable Java applications.

I have been working in an internal laboratory for years. This laboratory is always equipped with the latest big servers from our company and is free for our partners to test the performance of their products and solutions. Part of my job is to help them tune performance on all kinds of powerful CMT and SMP servers.

Over these years, I have helped test dozens of Java applications across a variety of solutions. Many products are aimed at the same industry domains and have very similar functionality, but their scalability differs so much that some can not only scale up on 64-CPU servers but also scale out to more than 20 server nodes, while others can only run on machines with no more than 2 CPUs.

The key to the difference lies in the vision of the architect when designing the product. All the Java applications that scaled well were prepared for scalability from the requirement-collection phase through the system-design and implementation phases of the product’s life cycle. Your Java application’s scalability really rests on your vision.

Scalability, as a property of systems, is generally difficult to define and is often confused with “performance”. Yes, scalability is closely related to performance, and its purpose is to achieve high performance. But the measurement of “scalability” is different from that of “performance”. In this article, we will take the definitions from Wikipedia:

Scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added.

To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer. Such vertical scaling of existing systems also enables them to leverage virtualization technology more effectively, as it provides more resources for the hosted set of operating systems and application modules to share.

To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might be scaling out from one web server system to three. As computer prices drop and performance continues to increase, low cost “commodity” systems can be used for high performance computing applications such as seismic analysis and biotechnology workloads that could in the past only be handled by supercomputers. Hundreds of small computers may be configured in a cluster to obtain aggregate computing power which often exceeds that of traditional RISC processor based scientific computers.

The first installment of this article will discuss scaling Java applications vertically.

How to scale Java EE applications vertically

Many software designers and developers treat functionality as the most important factor in a product and think of performance and scalability as add-on features and afterthoughts. Most of them believe that expensive hardware can close the gap on performance issues.

Sometimes they are wrong. Last month, there was an urgent project in our laboratory. After the product failed to meet the customer’s performance requirement on a 4-CPU machine, the partner wanted to test it on a bigger (8-CPU) server. The result: the performance was worse than on the 4-CPU server.

Why did this happen? Basically, if your system is a multi-process or multithreaded application and it is running out of CPU resources, then it will most likely scale well when more CPUs are added.

Java technology-based applications embrace threading in a fundamental way. Not only does the Java language facilitate multithreaded applications, but the JVM itself is a multithreaded process that provides scheduling and memory management for Java applications. Java applications that can benefit directly from multi-CPU resources include application servers such as BEA’s WebLogic, IBM’s WebSphere, and the open-source Glassfish and Tomcat application servers. All applications that use a Java EE application server can immediately benefit from CMT and SMP technology.

But in my laboratory, I found that a lot of products cannot make full use of CPU resources. Some of them occupy no more than 20% of the CPU resources on an 8-CPU server. Such applications benefit little when more CPU resources are added.

Hot lock is the key enemy of scalability

The primary tool for managing coordination between threads in Java programs is the synchronized keyword. Because of the rules involving cache flushing and invalidation, a synchronized block in the Java language is generally more expensive than the critical section facilities offered by many platforms. Even when a program contains only a single thread running on a single processor, a synchronized method call is still slower than an un-synchronized method call.

To observe the problems caused by the synchronized keyword, just send a QUIT signal to the JVM process, which gives you a thread dump. If you see a lot of thread stacks like the following in the thread dump file, your system has hit a “hot lock” problem:

"Thread-0" prio=10 tid=0x08222eb0 nid=0x9 waiting for monitor entry
	- waiting to lock <0xef63bf08> (a java.lang.Object)
	- locked <0xef63beb8> (a java.util.ArrayList)

The synchronized keyword forces the scheduler to serialize operations on the synchronized block. If many threads compete for a contended synchronization, only one thread executes the synchronized block, and any other threads waiting to enter that block are stalled. If no other threads are available for execution, processors may sit idle. In such situations, more CPUs help little with performance.

Hot Lock may involve multiple thread switches and system calls. When multiple threads contend for the same monitor, the JVM has to maintain a queue of threads waiting for that monitor (and this queue must be synchronized across processors), which means more time spent in the JVM or OS code and less time spent in your program code.

To avoid the hot lock problem, the following suggestions may be helpful:

Make synchronized blocks as short as possible

The shorter the time a thread holds a given lock, the lower the probability that another thread competes for the same lock. So while you should use synchronization to access shared variables, you should move thread-safe code outside of the synchronized block. Take the following code as an example:

Code list 1:
public boolean updateSchema(HashMap nodeTree) {
	synchronized (schema) {
		String nodeName = (String) nodeTree.get("nodeName");
		List nodeAttributes = (List) nodeTree.get("attributes");
		if (nodeName == null)
			return false;
		return schema.update(nodeName, nodeAttributes);
	}
}

This piece of code intends to protect the shared variable “schema” while updating it. But the code that gets the attribute values is thread safe and can be moved out of the block, making the synchronized block shorter:

Code list 2:
public boolean updateSchema(HashMap nodeTree) {
	String nodeName = (String) nodeTree.get("nodeName");
	List nodeAttributes = (List) nodeTree.get("attributes");
	synchronized (schema) {
		if (nodeName == null)
			return false;
		return schema.update(nodeName, nodeAttributes);
	}
}

Reducing lock granularity

When you use the “synchronized” keyword, you have two choices of granularity: “method locks” or “block locks”. If you put “synchronized” on a method, you are implicitly locking on the “this” object.

Code list 3:
public class SchemaManager {
	private HashMap schema;
	private HashMap treeNodes;

	public synchronized boolean updateSchema(HashMap nodeTree) {
		String nodeName = (String) nodeTree.get("nodeName");
		List nodeAttributes = (List) nodeTree.get("attributes");
		if (nodeName == null) return false;
		else return schema.update(nodeName, nodeAttributes);
	}

	public synchronized boolean updateTreeNodes() {

Compared with Code list 2, this piece of code is worse, because it locks the entire object whenever the “updateSchema” method is called. To achieve finer granularity, lock just the “schema” instance variable instead of the whole “SchemaManager” instance, so that different methods can run in parallel.
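As a sketch of that finer granularity (the class and field names echo the snippets above, but the method signatures here are hypothetical), each method can synchronize only on the field it actually touches, so updates to the schema and to the tree nodes no longer contend for the same lock:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one lock per field instead of one lock per object,
// so updateSchema() and updateTreeNodes() can run in parallel.
class SchemaManager {
    // Fields are final so the lock objects can never be swapped while held.
    private final Map<String, List<String>> schema = new HashMap<String, List<String>>();
    private final Map<String, List<String>> treeNodes = new HashMap<String, List<String>>();

    public boolean updateSchema(String nodeName, List<String> attributes) {
        if (nodeName == null) return false;  // thread-safe check stays outside the lock
        synchronized (schema) {              // guards only the schema map
            schema.put(nodeName, attributes);
        }
        return true;
    }

    public boolean updateTreeNodes(String nodeName, List<String> attributes) {
        if (nodeName == null) return false;
        synchronized (treeNodes) {           // a different lock: no contention with updateSchema()
            treeNodes.put(nodeName, attributes);
        }
        return true;
    }
}
```

With this layout, a thread updating the schema never blocks a thread updating the tree nodes, which is exactly the parallelism the method-level lock in Code list 3 forbids.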

Avoid lock on static methods

The worst choice is to put the “synchronized” keyword on static methods, which locks the Class object shared by all instances of the class. One of the projects tested in our laboratory was found to have such an issue. During testing, we found almost all working threads waiting on a static lock (a Class lock):

at sun.awt.font.NativeFontWrapper.initializeFont(Native Method)
- waiting to lock <0xeae43af0> (a java.lang.Class)
at java.awt.Font.initializeFont(
at java.awt.Font.readObject(
at sun.reflect.GeneratedMethodAccessor147.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at Source)

When using Java2D to generate font objects for reports, a native static lock was taken in the “initialize” method. To be fair, this was caused by Sun’s JDK 1.4 (HotSpot); after changing to JDK 5.0, the static lock disappeared.

Using lock-free data structures in Java SE 5.0

The “synchronized” keyword in Java is a relatively coarse-grained coordination mechanism, and as such is fairly heavyweight for managing a simple operation such as incrementing a counter or updating a value, as in the following code:

Code list 4:
public class OnlineNumber {
	private int totalNumber;
	public synchronized int getTotalNumber() { return totalNumber; }
	public synchronized int increment() { return ++totalNumber; }
	public synchronized int decrement() { return --totalNumber; }
}

The above code just locks very simple operations, and the “synchronized” blocks are very short. However, if a lock is heavily contended (threads frequently try to acquire it while it is already held by another thread), throughput suffers, because contended synchronization can be quite expensive.

Fortunately, in Java SE 5.0 and above, you can write wait-free, lock-free algorithms with the help of hardware synchronization primitives, without using native code. Almost all modern processors have instructions for updating shared variables in a way that can either detect or prevent concurrent access from other processors. These instructions are called compare-and-swap, or CAS.

A CAS operation takes three parameters: a memory location, the expected old value, and a new value. The processor updates the location to the new value if the value there matches the expected old value; otherwise it does nothing. Either way, it returns the value that was at that location prior to the CAS instruction. One way to use CAS for synchronization is as follows:

Code list 5:
public int increment() {
	int oldValue = value.getValue();
	int newValue = oldValue + 1;
	while (value.compareAndSwap(oldValue, newValue) != oldValue) {
		oldValue = value.getValue();
		newValue = oldValue + 1;
	}
	return newValue;
}

First, we read the value, then perform a multi-step computation to derive a new value (this example just increments by one), and then use CAS to change the value from oldValue to newValue. The CAS succeeds if the value at the address has not changed in the meantime. If another thread modified the variable concurrently, the CAS operation fails, and the loop detects this and retries. The best thing about CAS is that it is implemented in hardware and is extremely lightweight. If 100 threads execute this increment() method at the same time, in the worst case a thread will have to retry at most 99 times before its increment completes.

The java.util.concurrent.atomic package in Java SE 5.0 and above provides classes that support lock-free, thread-safe programming on single variables. The atomic variable classes all expose a compare-and-set primitive, which is implemented using the fastest native construct available on the platform. Nine flavors of atomic variables are provided in this package: AtomicInteger, AtomicLong, AtomicReference, AtomicBoolean, array forms of atomic integer, long, and reference, and atomic marked-reference and stamped-reference classes, which atomically update a pair of values.

Using the atomic package is easy. Here is the increment method of Code list 5 rewritten:

Code list 6:
import java.util.concurrent.atomic.*;

private AtomicInteger value = new AtomicInteger(0);

public int increment() {
	return value.incrementAndGet();
}

Nearly all the classes in the java.util.concurrent package use atomic variables instead of synchronization, either directly or indirectly. Classes like ConcurrentLinkedQueue use atomic variables to directly implement wait-free algorithms, and classes like ConcurrentHashMap use ReentrantLock for locking where needed. ReentrantLock, in turn, uses atomic variables to maintain the queue of threads waiting for the lock.
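For illustration, here is a minimal sketch of ReentrantLock used as an explicit replacement for a synchronized block (the counter class is invented for this example, not taken from the products discussed here). Unlike “synchronized”, it offers tryLock(), which lets a thread back off instead of stalling on a contended lock:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example of explicit locking with ReentrantLock.
class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();           // blocks, like entering a synchronized block
        try {
            count++;
        } finally {
            lock.unlock();     // always release in finally
        }
    }

    public boolean tryIncrement() {
        if (!lock.tryLock()) { // returns false immediately if the lock is busy
            return false;      // caller can do other work and retry later
        }
        try {
            count++;
            return true;
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally discipline is the price of explicit locks: forgetting unlock() leaks the lock forever, which “synchronized” makes impossible.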

One success story for lock-free algorithms is a financial system tested in our laboratory: after replacing the “Vector” data structure with “ConcurrentHashMap”, performance on our CMT machine (8 cores) increased more than 3 times.

Race conditions can also cause scalability problems

Too many “synchronized” keywords will cause scalability problems. But in some special cases, a lack of “synchronized” can also cause a system to fail to scale vertically. The lack of “synchronized” can cause race conditions, allowing two or more threads to modify shared resources at the same time and possibly corrupt shared data. Why do I say this causes a scalability problem?

Let’s take a real-world case as an example: an ERP system for manufacturing. When we tested its performance on one of our latest CMT servers (2 CPUs, 16 cores, 128 strands), we found the CPU usage was more than 90%. This was a big surprise, because few applications can scale so well on this type of machine. Our excitement lasted only 5 minutes before we discovered that the average response time was very high and the throughput unbelievably low. What were these CPUs doing? Weren’t they busy? What were they busy with? Through the tracing tools in the OS, we found that almost all the CPUs were doing the same thing, “HashMap.get()”, and it seemed that all CPUs were stuck in infinite loops. We then tested the application on servers with different numbers of CPUs. The result: the more CPUs the server had, the more often the infinite loop occurred.

The root cause of the infinite loop was an unprotected shared variable, a “HashMap” data structure. After adding the “synchronized” marker to all the access methods, everything was normal. Checking the source code of “HashMap” (Java SE 5.0), we found the potential for such an infinite loop when its internal structure is corrupted. As the following code shows, if the entries in the HashMap are made to form a circle, then “e.next” will never be null.

Code list 7:
public V get(Object key) {
	if (key == null)
		return getForNullKey();
	int hash = hash(key.hashCode());
	for (Entry<K,V> e = table[indexFor(hash, table.length)];
		e != null;
		e = e.next) {
		Object k;
		if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
			return e.value;
	}
	return null;
}

Not only the get() method but also put() and other methods are exposed to this risk. Is this a bug in the JVM? No; this was reported to Sun long ago, and Sun’s engineers didn’t consider it a bug, but rather suggested using “ConcurrentHashMap”. So take that into consideration when building a scalable system.
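A minimal sketch of that suggestion (the hit-counter class below is invented for illustration): ConcurrentHashMap keeps its internal structure consistent under concurrent access, so get() cannot fall into the infinite loop described above, and its atomic putIfAbsent() and replace() methods allow a lock-free update loop in the CAS style discussed earlier:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical example: a hit counter that is safe under concurrent access.
class PageHits {
    private final ConcurrentMap<String, Integer> hits =
            new ConcurrentHashMap<String, Integer>();

    // Lock-free update loop: on a lost race, retry instead of blocking.
    public void record(String page) {
        for (;;) {
            Integer old = hits.get(page);
            if (old == null) {
                if (hits.putIfAbsent(page, 1) == null) return; // we created the entry
            } else {
                if (hits.replace(page, old, old + 1)) return;  // atomic compare-and-replace
            }
        }
    }

    public int count(String page) {
        Integer n = hits.get(page);
        return n == null ? 0 : n;
    }
}
```

Unlike an unsynchronized HashMap, concurrent readers and writers here can never corrupt the table, and unlike a synchronized Vector, readers do not serialize behind a single lock.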

Non-Blocking IO vs. Blocking IO

The java.nio package, introduced in Java 1.4, allows developers to achieve greater performance in data processing and offers better scalability. The non-blocking I/O operations provided by NIO allow Java applications to perform I/O more like what is available in lower-level languages such as C. There are many NIO frameworks today, such as Mina from Apache and Grizzly from Sun, which are widely used in many projects and products.
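To make the non-blocking model concrete, here is a minimal sketch of the core NIO pattern (a made-up example, not code from Mina or Grizzly): one Selector watches a non-blocking ServerSocketChannel, and a single thread polls for ready events instead of dedicating one blocked thread per connection:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Hypothetical sketch: one selector thread multiplexes many connections.
class NioSketch {
    // Opens a server socket on an OS-chosen free port, registers it with a
    // selector, and performs one non-blocking poll. Returns the number of
    // ready channels (0 here, since no client connects), or -1 on error.
    static int openAndPoll() {
        try {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(0)); // port 0 = any free port
            server.configureBlocking(false);                // accept() will never block
            server.register(selector, SelectionKey.OP_ACCEPT);

            int ready = selector.selectNow(); // poll without blocking
            // A real server would loop here, iterating selector.selectedKeys()
            // and dispatching accept/read/write events to handlers.
            server.close();
            selector.close();
            return ready;
        } catch (IOException e) {
            return -1;
        }
    }
}
```

The key point is that the thread is never parked inside an I/O call; it only acts on channels the selector reports as ready, which is why a handful of threads can serve thousands of connections.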

During the last 5 months, two Java EE projects were held in our laboratory whose only goal was to test their products’ performance on both a traditional blocking-I/O-based server and a non-blocking-I/O-based server to see the difference. They chose Tomcat 5 as the blocking-I/O server and Glassfish as the non-blocking-I/O server.

First, they tested a few simple JSP pages and Servlets and got the following result (on a 4-CPU server):

Table: Average Response Time (ms) by Concurrent Users (the figures were not preserved)

According to this result, the performance of Glassfish was far behind Tomcat’s. The customer doubted the advantage of non-blocking I/O: why do so many articles and technical reports talk about the performance and scalability of NIO?

After testing more scenarios, they changed their minds, for the results showed the power of NIO little by little. What they tested were:

  1. More complex scenarios instead of simple JSPs and Servlets, involving EJB, Database, file IO, JMS and transactions.
  2. Simulating more concurrent users, from 1000 up to 10,000.
  3. Testing in different hardware environments, from 2CPUs, 4CPUs, up to 16 CPUs.

The figure below shows the results of the testing on a 4-CPU server.

Figure 1: Throughput in a 4CPU server

Traditional blocking I/O uses a dedicated working thread for each incoming request. The assigned thread is responsible for the whole life cycle of the request: reading the request data from the network, decoding the parameters, computing or calling other business logic functions, encoding the result, and sending it out to the requester. The thread then returns to the thread pool to be reused by other requests. This model, as in Tomcat 5, is very effective for simple logic with a small number of concurrent users in a perfect network environment.

But if the request involves complex logic or interacts with outside systems such as file systems, databases, or a message server, the working thread will be blocked for most of the processing time, waiting for syscalls to return or network transfers to finish. The blocked thread is held by the request until it finishes, though the operating system will park the thread to free the CPU for other requests. If the network between the clients and the server is poor, network latency blocks the threads even longer. Worse, when keep-alive is required, the current working thread remains blocked long after request processing has finished. To better utilize CPU resources, more working threads are needed.

Tomcat uses a thread pool, and each request is served by an idle thread from the pool. “maxThreads” decides the maximum number of threads that Tomcat can create to service requests. If we set “maxThreads” too small, we cannot fully utilize the CPU resources, and, more importantly, many requests will be dropped and rejected by the server as concurrent users increase. In this test, we set “maxThreads” to 1000 (which is too large and unfair to Tomcat). With such settings, Tomcat spawns a lot of threads when concurrent users reach a high level.
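That thread-pool sizing lives on the HTTP connector in Tomcat 5’s conf/server.xml; a sketch of the relevant element (the attribute values here are illustrative, not the exact settings used in this test):

```xml
<!-- Illustrative Tomcat 5 HTTP connector: maxThreads caps the worker
     threads; requests beyond that queue up to acceptCount and are then
     refused. -->
<Connector port="8080"
           maxThreads="1000"
           acceptCount="100"
           connectionTimeout="20000" />
```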

A large number of Java threads keeps the JVM and OS busy with scheduling and maintenance work for those threads instead of processing business logic. Moreover, more threads consume more memory (each thread’s stack occupies some memory) and cause more frequent garbage collection.

Glassfish doesn’t need so many threads. With non-blocking I/O, a working thread is not bound to a dedicated request; if a request blocks for any reason, the thread is reused by other requests. In this way, Glassfish can handle thousands of concurrent users with only tens of working threads. By limiting thread resources, non-blocking I/O achieves much better scalability (refer to the figure below). That is why Tomcat 6 has embraced non-blocking I/O too.

Figure 2: scalability test result

Single-thread task problem

A Java EE-based ERP system was tested in our laboratory months ago, and one of its testing scenarios was to generate a very complex annual report. We tested this scenario on different servers and found that the cheapest AMD PC server got the best performance. This AMD server has only two 2.8 GHz CPUs and 4G of memory, yet its performance exceeded that of the expensive 8-CPU SPARC server shipped with 32G of memory.

The reason is that the scenario is a single-thread task, which can only be run by a single user (concurrent access by many users is meaningless in this case), so it can use only one CPU while running. Such a task cannot scale to multiple processors. Most of the time, CPU frequency plays the leading role in performance in such cases.

Parallelization is the solution. To parallelize a single-threaded task, you must find a certain level of independence in the order of operations, then use multiple threads to achieve the parallelization. In this case, the customer refined their "annual report generation" task to generate monthly reports first and then build the annual report from those 12 monthly reports. The monthly reports are not just intermediate results, since such reports are also useful to end users; moreover, they can be generated concurrently and then used to produce the final report quickly. In this way, the scenario scaled very well to 4-CPU SPARC servers, and it beat the AMD server by more than 80% on performance.
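
The monthly-then-annual refinement can be sketched with a plain ExecutorService. This is a minimal illustration of the pattern only; the monthlyTotal computation is a hypothetical placeholder for the real report work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AnnualReportDemo {
    // Hypothetical stand-in for the expensive monthly-report computation.
    static double monthlyTotal(int month) {
        return month * 100.0;
    }

    // Generate the 12 monthly reports concurrently, then combine them.
    static double computeAnnual(int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Double>> monthly = new ArrayList<>();
            for (int m = 1; m <= 12; m++) {
                final int month = m;
                Callable<Double> task = () -> monthlyTotal(month);
                monthly.add(pool.submit(task));
            }
            double annual = 0;
            for (Future<Double> f : monthly) {
                annual += f.get();  // combine the 12 partial results
            }
            return annual;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeAnnual(4));
    }
}
```

Because each monthly report is independent of the others, the 12 tasks can run on as many CPUs as are available, which is exactly the independence the parallelization requires.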

Re-architecting and re-coding a whole solution is time-consuming and error-prone work. One of the projects in our laboratory used JOMP to parallelize its single-threaded tasks. JOMP is a Java API for thread-based SMP parallel programming. Just like OpenMP, JOMP uses compiler directives to insert parallel programming constructs into a regular program. In a Java program, the JOMP directives take the form of comments beginning with //omp. The JOMP program is run through a precompiler that processes the directives and produces the actual Java program, which is then compiled and executed. JOMP supports most features of OpenMP, including work-sharing parallel loops and parallel sections, shared variables, thread-local variables, and reduction variables. The following code is an example of JOMP programming.

Code list 8:
LinkedList<String> c = new LinkedList<String>();
c.add("this");
c.add("is");
c.add("a");
c.add("demo");
//omp parallel iterator
for (String s : c)
    System.out.println(s);

Like most parallelizing compilers, JOMP focuses on loop-level and collection parallelism, studying how to execute different iterations simultaneously. To be parallelized, two iterations must not have any data dependency; that is, neither should rely on calculations that the other performs.

Writing a JOMP program is not easy work. First, you should be familiar with OpenMP directives and with how the JVM memory model maps onto those directives; then you need to know your business logic well enough to put the right directives in the right places.

Another choice is to use Parallel Java. Like JOMP, Parallel Java supports most features of OpenMP; but unlike JOMP, PJ’s parallel constructs are obtained by instantiating library classes rather than by inserting precompiler directives, so Parallel Java needs no extra precompilation step. Parallel Java is useful not only for parallelization across multiple CPUs, but also for scalability across multiple nodes. The following code is an example of Parallel Java programming.

Code list 9:
static double[][] d;

new ParallelTeam().execute(new ParallelRegion() {
    public void run() throws Exception {
        for (int ii = 0; ii < n; ++ii) {
            final int i = ii;
            // Parallelize the row-update loop across the team's threads
            execute(0, n - 1, new IntegerForLoop() {
                public void run(int first, int last) {
                    for (int r = first; r <= last; ++r)
                        for (int c = 0; c < n; ++c)
                            d[r][c] = Math.min(d[r][c], d[r][i] + d[i][c]);
                }
            });
        }
    }
});

Scale Up to More Memory

Memory is an important resource for your applications. Enough memory is critical to the performance of any application, especially database systems and other I/O-focused systems. More memory means larger shared memory space and larger data buffers, enabling applications to read more data from memory instead of from slow disks.

Java garbage collection relieves programmers from the burden of freeing allocated memory and, in doing so, makes them more productive. The disadvantage of a garbage-collected heap is that it halts almost all working threads while garbage is being collected. In addition, programmers in a garbage-collected environment have less control over the scheduling of CPU time devoted to freeing objects that are no longer needed. For near-real-time applications, such as telco systems and stock trading systems, this kind of delay and reduced control are big risks.

Coming back to the question of whether Java applications scale when given more memory, the answer is: yes, sometimes. Too little memory causes garbage collection to happen too frequently; enough memory keeps the JVM processing your business logic most of the time instead of collecting garbage.

But this is not always true. A real-world case in my laboratory was a telco system built on a 64-bit JVM. By using a 64-bit JVM, the application could break the 4GB memory limit of a 32-bit JVM. It was tested on a 4-CPU server with 16GB of memory, 12GB of which was given to the Java application. To improve performance, the team cached more than 3,000,000 objects in memory at initialization to avoid creating too many objects at runtime. The product ran very fast during the first hour of testing; then, suddenly, the system halted for more than 30 minutes. We determined that it was garbage collection that had stopped the system for half an hour.

Garbage collection is the process of reclaiming memory taken up by unreferenced objects. Unreferenced objects are ones the application can no longer reach because all references to them have gone out of scope. If a huge number of live objects exist in memory (like the 3,000,000 cached objects), the garbage collection process takes a long time to traverse all of them. That is why the system halted for such a long, unacceptable time.

In other memory-centric Java applications tested in our laboratory, we found the following characteristics:

  1. Every request required big and complex objects to process.
  2. Too many objects were kept in the HttpSession for every session.
  3. The HttpSession timeout was too long, and sessions were never explicitly invalidated.
  4. The thread pool, EJB pool, or other object pools were set too large.
  5. The object cache was set too large.
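
Two of the items above, session bloat and long timeouts, are commonly mitigated in the servlet deployment descriptor; a sketch (the timeout value, in minutes, is illustrative):

```xml
<!-- web.xml: keep idle sessions short so their objects become collectable -->
<session-config>
    <session-timeout>10</session-timeout>
</session-config>
```

Calling session.invalidate() explicitly at logout frees a session's objects immediately instead of waiting for the timeout to expire.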

Those kinds of applications don’t scale well. As the number of concurrent users increases, these applications’ memory usage grows sharply. If a large number of live objects cannot be recycled in time, the JVM spends considerable time on garbage collection. On the other hand, if given too much memory (in a 64-bit JVM), the JVM will still spend considerable time on garbage collection after running for a relatively long time.

The conclusion is that Java applications are NOT made scalable simply by giving them more memory. In most cases, 3GB of memory assigned to the Java heap (through the “-Xmx” option) is enough (in some operating systems, such as Windows and Linux, you may not be able to use more than 2GB in a 32-bit JVM). If you have more memory than the JVM can use (memory is cheap these days), give it to other applications on the same system, or just leave it to the operating system; most OSs will use spare memory as a data buffer and cache to improve IO performance.
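
Heap sizing and GC visibility are controlled with standard HotSpot flags; a sketch of a launch command in this spirit (app.jar is a placeholder for your application):

```shell
# Cap the heap at 3GB and log GC pauses to see how much time is spent collecting
java -Xms3g -Xmx3g -verbose:gc -jar app.jar
```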

The Real-Time JVM (JSR 1) gives the programmer control over memory collection. An application can use this feature to tell the JVM, “this huge region of memory is my cache; I will manage it myself, please don’t collect it automatically.” This functionality can let Java applications scale to huge memory resources. Let’s hope JVM vendors bring it into normal, free JVM versions in the near future.

To scale these memory-centric Java applications, you need multiple JVM instances, or multiple machine nodes.

Other Scale Up Problems

Some scalability problems in Java EE applications are not inherent to the applications themselves; limitations in external systems can sometimes become the scalability bottleneck. Such bottlenecks may include:

  • Database management system: This is the most common bottleneck for most enterprise and Web 2.0 applications, because the database is normally shared by the JVM's threads, so the effectiveness of database access and the isolation levels between database transactions affect scalability significantly. We have seen a lot of projects where most of the business logic resides in the database as stored procedures, while the Web tier is kept very lightweight and just performs simple data filtering and invokes the stored procedures. Such an architecture causes a lot of scalability problems as the number of requests grows.
  • Disk IO and Network IO
  • Operating System: Sometimes the scalability bottleneck may lie in the limitation of the operating system. For example, putting too many files under the same directory can cause file systems to slow when creating and finding a file.
  • Synchronous logging: This is a common scalability problem. In some cases, it was solved by shipping log output to a dedicated logging server or by configuring a logging framework such as Apache log4j to write asynchronously; others have used JMS messages to convert synchronous logging into asynchronous logging.
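
The synchronous-to-asynchronous conversion amounts to handing log messages to a queue that a single background thread drains; a minimal sketch of the idea (not any framework's actual API, and System.out stands in for the slow log sink):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public AsyncLogger() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    // Only this thread waits on the slow sink; callers never do.
                    String msg = queue.take();
                    System.out.println(msg);  // stand-in for disk or network I/O
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Callers return immediately; bursts are absorbed by the queue.
    public boolean log(String msg) {
        return queue.offer(msg);
    }
}
```

The request-handling threads are never blocked on I/O, so logging stops limiting throughput as concurrency grows.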

These problems are not unique to Java EE applications; they affect all systems on any platform. Resolving them requires help from database administrators, system engineers, and network analysts at all levels of the system.

The second installment of this article will discuss problems with scaling horizontally.

About the Author

Wang Yu presently works for the ISVE group of Sun Microsystems as a Java technology engineer and technology architecture consultant. His duties include supporting local ISVs and evangelizing and consulting on important Java technologies such as Java EE, EJB, JSP/Servlet, JMS, and Web services. He can be reached at

How to Become a Link Building Ninja

You’re about to begin your journey to becoming a link building ninja.

Oh?  You don’t know what a link building ninja is…

Hmmm. Let me grab my dictionary.

Here we are… A link building ninja is one who is skilled in the art of highly advanced link building; one who can find link building opportunities untapped by other marketers; a person trained to reverse engineer the link building strategies of others and destroy their Google rankings with ninja stealth. The victims rarely discover how their competitors stole all of their precious rankings.

So with that, let’s dig in…

1. Here’s One Link Building Tactic That Nobody Talks About…

As you probably know, syndicating your RSS feed to all the different RSS directories and aggregators can be a great way to get one-way backlinks.

Plus, they’re one of the best ways to get indexed fast in Google.

In this article, I’m going to show you how to put your RSS feed on steroids and grab 10 times as many backlinks.

The secret is in creating custom RSS feeds.

The idea is to create multiple, custom RSS feeds and submit those to several RSS feed aggregator sites. This spreads your links all over the Internet.
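
For context, an RSS feed is just an XML file listing your content; a minimal sketch of what a merged custom feed looks like (all titles and URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Combined Feed</title>
    <link>http://example.com/</link>
    <description>Articles, videos, and podcasts merged into one feed</description>
    <item>
      <title>Latest blog post</title>
      <link>http://example.com/blog/latest-post</link>
    </item>
    <item>
      <title>Latest podcast episode</title>
      <link>http://example.com/podcast/episode-1</link>
    </item>
  </channel>
</rss>
```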

Talk about some link juice!

You can do this using a service called RSSMix.

Watch this step-by-step video tutorial to find out how to create your own customized RSS feeds.

This site allows you to mix any number of RSS feeds into one new feed, which opens up lots of new link building opportunities. You can create your own customized RSS feeds from your articles, videos, podcasts, Squidoo lenses, Hubpages, and more. In fact, even EzineArticles has an RSS feed for each user.

Any of the content you have on the Internet can be put into your own customized RSS feeds. These can then be submitted to the top RSS directories, giving you hundreds of one-way backlinks.

Here’s another quick tip on how to use the power of RSS Mix…

If you use Web 2.0 properties like Squidoo, blogspot, social bookmarking sites, and others to generate backlinks, you can create one big RSS feed from all of these sources using RSS Mix.

By creating one big RSS feed, you can then submit that feed to all the different RSS directories. This ensures that Google indexes all of the pages that contain your backlinks.

If Google doesn’t index the page you have a backlink from, then it doesn’t count. You can use RSS feeds to make sure that all of your backlinks get counted in Google. I hope you see the power in this technique!

You can also do this for all of the articles you submit around the net. If a particular page doesn’t have an RSS feed, don’t worry; you can turn any page into an RSS feed with a feed-creation service. Any time you add a new resource, add its RSS feed to your “One Big RSS Feed”.

If you want to get a little more advanced, you can use a service called Yahoo Pipes. Yahoo Pipes allows you to apply filters to your main RSS feeds. So, for example, you could create a customized RSS feed for all of the content on your site related to “diets”. You could then create another customized RSS feed for all of the content on your site related to “workout plans”.

You’re basically breaking your main RSS feed down into more targeted feeds. This technique allows you to create unlimited RSS feeds!

You can then submit each of these to all the different RSS aggregator sites, multiplying your results and the number of backlinks you receive.

2. Widgets

Another unique way to build hundreds of inbound links is known as widget marketing. Widgets are mini internet applications offered via a third party. They can easily be integrated into web pages, Facebook profiles, iGoogle start pages, and more.

Because of this, a widget can spread your link to thousands of other websites.

Some popular widgets include quizzes, games, weather reports, and Flickr slideshows. It seems that everyone is starting to create their own widgets. Amazon has its own widget to let customers show off their favorite Amazon products. USA Today offers a widget that enables bloggers to display the latest news updates.

Why shouldn’t you have one also? If you don’t know how to program, don’t worry. You can instantly turn your blog into a widget at

Once you have created your widget, you can promote your widgets on the following websites…

Google Widget Directory

Yahoo Widgets

Widget Gallery

Snipperoo Widget Directory

RateitAll Widgets

Friendster Widget Directory

Netvibes Widget Diretory

Xanga Widget Directory

The popular online dating site JustSayHi used widgetbait to rank on the front page of Google for highly competitive terms like “Free Online Dating” and “Online Dating”.

They did this by creating popular quizzes and turning them into widgets. Some of their quizzes include titles like “How Many Five Year Olds Could You Take in a Fight?” and “The Geek Quiz”. You can check out an entire list of the quizzes they have created at

However, keep in mind that they have recently changed their name; the site is ranking #1 in Google for “free online dating”, largely due to the links they’ve built using widgets.

You can read the full story at…

You can also turn your widget into a Facebook app with a free tool called App Accelerator. Popular Facebook applications can send thousands of visitors to your site per day. It’s just a matter of being creative and taking action.

3. Instant Backlinks

Here’s a cool trick for getting instant backlinks…


At first glance, Zimbio might look like any other user-generated content site. However, when you look closely, you’ll find that it’s a link-generating machine.

You can use Zimbio to get new backlinks every time you post to your blog.

Once you’ve created an account, simply submit your blog URL or your feed URL, and Zimbio will instantly import your blog posts, providing you with instant backlinks every time you post to your blog.

Plus, your articles will also be promoted to related wikizines within Zimbio, driving even more traffic to your website.

Here’s another interesting technique for getting instant backlinks…

Want to build your reputation and create high quality backlinks at the same time? allows you to do just that.

You create a personal LookUpPage with information about you, your website, and contact info. Plus, you get to add links to all your different websites with anchor text included. These are the types of juicy links that Google loves. So go set up your unique profile page, build your brand, and get some authoritative backlinks at the same time.

Naymz is another free website that allows you to set up a profile centered around your name and reputation. All you have to do is sign up, create a profile, and add your website URL under the “links” category.

4. Here’s another interesting find. It’s called Cool Site of the Day. This site lists some of the most interesting sites from around the web. Best of all, you can submit your site as well if you think it’s up to snuff. Not only does this present an excellent link opportunity, but Cool Site of the Day is known for sending thousands of visitors to their featured sites.

If your site is chosen, it will be featured on the homepage and announced to their email list of over 150,000 subscribers.

Much like Digg, Cool Site of the Day also has a ripple effect. The featured sites are often picked up by journalists throughout the world. Some of the previously featured sites have been quoted on the BBC News website, featured in USA Today, picked up by radio stations and quoted in a number of other media outlets.

Sites similar to Cool Site of the Day that you can submit to include…

Yahoo Picks


Family First

Do you have a unique or entertaining website? If so, you can drop your links at the following websites…

College Humor



5. Blog Reviews

Another great way to build inbound links is through blog reviews. No, I’m not talking about paid reviews from sites like ReviewMe. There are actually lots of sites that are willing to exchange reviews.

There’s a site called ReviewBack that allows you to exchange reviews with other bloggers. ReviewBack is an excellent alternative for those who don’t want to spend their money on ReviewMe.

You can also find a nice list of websites that offer review exchanges; a number of popular online blogs do mutual reviews.

This is an excellent opportunity to get a very high-quality, authoritative backlink.

6. Recoup Your Link Juice

Here’s an innovative way to increase your rankings. I call it recouping your link juice. Here’s the idea…

If you’ve been online for any time at all, then you probably already have incoming links from other websites.

However, are they using the anchor text for the keyword phrase you would like to rank for? If not, you can email the website owner and ask them to change the anchor text to your desired keyword phrase. This one simple step can help shoot up your rankings for your desired keywords.

7. Coupon Sites

Here’s a tactic I see very few sites using – submitting your website to coupon directories. Besides building links, you can also generate new customers and build some buzz around your brand.

Some of the most popular places you can submit your coupons to are Coupon Cabin, Ultimate Coupons, Free Shipping, and Deal Taker.

For a complete list of coupon sites, go to…

There are lots of ways to create coupons for your site. For example, check out the following web page to see a link building service that is using coupon sites to advertise their service.

Coupon sites are an excellent option if you sell any kind of product or service.

You can offer discounts, promotional codes, or even rebates.

Be creative!

8. Underground Link Building Tool

Here’s one of my favorite tools for finding hidden link building opportunities. It can be found at…

This tool searches for websites of the theme you specify that contain key phrases like “Add link”, “Add site”, “Add URL”, “Submit URL”, “Add Article”, etc.

I’ve found lots of great link building opportunities using this tool.

Try it Out!

9. CSS Galleries

CSS galleries are another untapped source of traffic and links; one site has received over 25,000 visitors from CSS galleries. If you have a savvy CSS design, be sure to submit it to all of the major CSS galleries.

Each of these sites can also send you quick and easy traffic.

10. Internal Link Structure

One of the best linking strategies that people take for granted is the power of their own internal link structure.

Proper internal link structure will ensure that your website gets properly spidered and that all pages are found and indexed by the search engines. It will also increase the PageRank (“link juice”) that flows to your internal pages, thus raising your search engine rankings.

Simply by optimizing your internal link structure, you can increase your search engine rankings. By adding the right links in strategic places, you can boost your own search engine rankings without having to get links from external websites. Most people simply do not realize the sheer power of their own internal linking structure.

You can, essentially, direct the flow of Google juice within your website.

You can improve your internal link structure through 3 main methods:

1. Text link navigation

2. Footers

3. Inline Text links (Contextual Links)

For example, if there are certain keyword phrases you want to rank for, you can create internal links to the page you want to rank high on Google. These links can be placed in the navigation, footer, breadcrumb links, or within the content.
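
A minimal HTML sketch of the three placements described above (all URLs and anchor text are illustrative):

```html
<!-- 1. Text link navigation -->
<ul class="nav">
  <li><a href="/workout-plans/">Workout Plans</a></li>
</ul>

<!-- 2. Footer link targeting a keyword phrase -->
<div class="footer">
  <a href="/diets/">Healthy Diet Plans</a>
</div>

<!-- 3. Inline (contextual) link inside body copy -->
<p>Pair your routine with one of our <a href="/diets/">healthy diet plans</a>.</p>
```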

You can go to to see some good examples of internal linking using contextual links within the content. They also use breadcrumb navigation to improve their internal link structure.

You can go to to see an example of how they’re using footer links to rank for their desired keyword phrases.

Keep in mind that contextual links (those placed within the text of a page) are more powerful than navigation and footer links. If you’re using WordPress, you can use a related-posts plugin to increase the internal linking to related content. This is good for both your search engine rankings and your pageviews.

In addition, you can also use a plugin for preventing the duplicate content that is caused by WordPress blogs. The plugin is called the Duplicate Content Cure.

If you’d like more information about how to optimize your internal link structure, check out the following resources:

A video tutorial on internal link structure by Aaron Wall

A Guide to Internal Linking by Rand Fishkin of SeoMoz

A Guide to Internal Linking by Jim Boykin

SEO Fast Start Guide by Dan Thies

11. Convert your Articles into Podcasts

Another great way to build one-way backlinks is to convert your blog posts into podcasts. You can use free online software to automatically convert your blog posts into podcasts without having to do any extra work. By turning your blog posts into podcasts, you open up a ton of new link building opportunities.

One of my favorite tools,, will instantly give your blog a voice. Odiogo will automatically convert your RSS feeds, text articles and blog posts to iPod-ready audio files. Once you’ve installed their free service, you can then submit your new podcast to all of the podcast directories, generating tons of new backlinks to your site.

If you don’t have an RSS feed or a blog for your website, then you can use a service called Read The Words.

Read The Words can convert any web page, text file, Word document, or PDF file into an audio file that’s ready to be submitted to all the different podcast directories.

12. Building Links with Tickers!

Here’s one of my favorite link building tactics that I see very few people using…

Have you ever been to a forum and seen those countdown tickers that countdown to a particular date or event?

Here’s an example:

Some common tickers include pregnancy countdowns, wedding countdowns, weight loss countdowns, etc…

People use these tickers when posting to forums and on their blogs. Most people include the tickers in their signature file.

But here’s the cool part…

Every time they post to the forum with their ticker, the creator of the countdown ticker gets a link back to their site.

Now imagine if you started creating your own countdown tickers and started distributing them to your users and promoting them in popular forums. You could get thousands of one-way backlinks coming into your site.

If you’re thinking this would require some complicated programming, think again! You can create an unlimited number of countdown tickers with a piece of software located at

This software allows you to create customized graphical countdown tickers.

They have over 12 Ticker types built in, including:

Pregnancy Due Date Countdown

Pregnancy Due Date (When am I due?)


Baby and Child Age

Birthday Countdown

Trying to Conceive

Angel Remembrance

Weight Loss

Wedding Countdown

Anniversary Countdown

Adoption Countdown

Vacation Countdown

However, you can also create custom tickers to count down to any event you choose. This is a great technique for increasing your backlinks, traffic, revenue, and website exposure. Every ticker you create is branded with your website’s name and links back to your website.

Think about it… If you had just one active forum member using your ticker, that could equate to thousands of backlinks.

13. Our final link building technique is quite “out-of-the-box”.

It’s buying websites!

Yes, you can buy low-cost websites to get high-quality one-way backlinks. You can use the SitePoint marketplace to find some great deals on established websites.

Buying a low-cost website not only gives you lots of powerful link building opportunities, it can also bring you a lot of extra traffic and subscribers.

Lots of people simply don’t know how to monetize their websites properly. You can often find quality websites getting hundreds of visitors a day that aren’t even collecting opt-ins. Imagine if you bought the site with your marketing knowledge, started collecting subscribers, and built a relationship with your list.

Congratulations on finishing this VERY long article. You are now well on your way to becoming a link building ninja. The only thing left for you to do is implement the powerful strategies and tactics you’ve learned. Remember, it’s all about practice!

Source: buzzblogger

Google declares war on Microsoft with Chrome OS

SUN VALLEY, USA: Google Inc is declaring war on Microsoft Corp by seeking to unseat the software giant’s globally dominant Windows operating system for personal computers.

Google, which already offers a suite of e-mail, Web and other software products that compete with Microsoft, said on Tuesday it would launch a new operating system for computers ranging from ultra-compact netbooks to full-size desktop PCs.

Called the Google Chrome Operating System, the new software will be in netbooks for consumers in the second half of 2010, Google said in a blog post, adding that it was working with multiple manufacturers.

“It’s been part of their culture to go after and remove Microsoft as a major holder of technology, and this is part of their strategy to do it,” said Rob Enderle, principal analyst at Enderle Group. “This could be very disruptive. If they can execute, Microsoft is vulnerable to an attack like this, and they know it,” he said.

Google and Microsoft have often locked horns over the years in a variety of markets, from Internet search to mobile software. It remains to be seen if Google can take market share away from Microsoft on its home turf, with Windows currently installed in more than 90 percent of the world’s PCs.

Key to success will be whether Google can lock in partnerships with PC makers, such as Hewlett-Packard Co and Dell Inc, which currently offer Windows on most of their product lines.

Google’s Chrome Internet browser, launched in late 2008, remains a distant fourth in the Web browser market, with a 1.2 percent share in February, according to market research firm Net Applications. Microsoft’s Internet Explorer continues to dominate, with nearly 70 percent.

A spokesman for Microsoft had no immediate comment.

Fast and lightweight

The new Chrome OS is expected to work well with many of the company’s popular software applications, such as Gmail, Google Calendar and Google Maps.

It will be fast and lightweight, enabling users to access the Web in a few seconds, Google said. The new OS is based on open-source Linux code, which allows third-party developers to design compatible applications.

“The operating systems that browsers run on were designed in an era where there was no web,” Sundar Pichai, vice president of product management at Google, said in the blog post. The Chrome OS is “our attempt to re-think what operating systems should be”.

Google said Chrome OS was a new project, separate from its Android mobile operating software found in some smartphones. Acer Inc, the world’s No.3 PC brand, has already agreed to sell netbooks that run Android.

The new OS is designed to work with ARM and x86 chips, the main chip architectures in use today.

Charlene Li, partner at consulting company Altimeter Group, said Google’s new OS will initially appeal to consumers looking for a netbook-like device for Web surfing, rather than people who use desktop PCs for gaming or high-powered applications.

But eventually, the Google OS has the potential to scale up to larger, more powerful PCs — especially if it proves to run faster than Windows, she said.

Enderle expects Google to charge at most a nominal fee for the new OS, or make it free, saying the company’s business model has been to earn revenue off connecting applications or advertising.

Li added: “A benefit to the consumer is that the cost saving is passed on, not having to pay for an OS.”

“It’s clearly positioned as a shot across the bow of Microsoft,” she said.