This is another set of investment bank Java interview questions. I did not attend this interview myself; I got these questions from one of my teammates, who cleared these rounds and received the IB job offer. The main focus areas were again Java collections, the memory model, Java 1.5 concurrency and design patterns.
Always remember that your first impression is the most important, and everybody knows the first question in almost every interview: 'tell me about yourself' or 'tell me about your current project'. Be well prepared for this question, as it sets the tone for the rest of your interview. If you are taking a telephonic interview, make sure your voice is clear and there is no background noise.
1) Tell me about your career profile and describe one of your main projects?
This was followed by a deep dive into the current project:
Why are you using technologies like JMS, concurrency and Spring in your application?
What features of Java 1.6 are you using in your current application?
How are you managing concurrency in the application?
How are exceptions handled in your application?
2) Java collections - When to use ArrayList and when to use LinkedList? Java Collection
3) An application wants to load static data with a limited memory footprint, and the data should be refreshed after a regular interval. Implement an LRU Cache? (A sketch follows.) LRU Cache Implementation
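One common approach (a sketch of one possible answer, not the only one) is to extend LinkedHashMap with access ordering and evict the eldest entry. Note that this sketch is not thread-safe as written, and the periodic refresh would still need a scheduled reload.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: LRU cache built on LinkedHashMap with access-order eviction.
// Wrap with Collections.synchronizedMap (or use a concurrent cache) if shared between threads.
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LRUCache(int maxEntries) {
        super(16, 0.75f, true);            // accessOrder = true gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;        // evict the least recently used entry when full
    }
}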
4) What is soft reference, weak reference, phantom reference?
A weak reference, simply put, is a reference that isn't strong enough to force an object to remain in memory. Weak references allow you to leverage the garbage collector's ability to determine reachability for you, so you don't have to do it yourself. It is used in java.util.WeakHashMap.
A soft reference is exactly like a weak reference, except that it is less eager to throw away the object to which it refers. An object which is only weakly reachable (the strongest references to it are WeakReferences) will be discarded at the next garbage collection cycle, but an object which is softly reachable will generally stick around for a while.
A phantom reference is quite different from either SoftReference or WeakReference. The object it refers to has been marked for garbage collection and has been finalized, but its memory has not yet been reclaimed; such an object is called phantom reachable. Its grip on its object is so tenuous that you can't even retrieve the object -- its get() method always returns null. The only use for such a reference is keeping track of when it gets enqueued into a ReferenceQueue, as at that point you know the object to which it pointed is dead.
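A small illustrative sketch of the difference between a strong and a weak reference (the variable names are arbitrary, and System.gc() is only a hint to the JVM):

import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<Object>(strong);

        System.out.println("Before GC: " + weak.get());  // object still strongly reachable

        strong = null;     // drop the only strong reference
        System.gc();       // request a collection; not guaranteed, but usually enough here

        System.out.println("After GC: " + weak.get());   // typically prints null
    }
}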
5) How does memory management work in Java? How is the heap divided into different areas? Java Memory Mgmt
11) What are new concurrency classes in java 1.5? Java Concurrency
12) Explain the Singleton design pattern. What is double-checked locking (DCL), and how do you make a singleton class using volatile? (A sketch follows.) Design Pattern [Singleton]
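For reference, a minimal sketch of a double-checked locking singleton using volatile (valid on the Java 1.5+ memory model):

public class Singleton {
    // volatile ensures other threads never see a partially constructed instance
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                    // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {            // second check, with the lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}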
13) What is the difference between correlated subqueries and uncorrelated subqueries?
An uncorrelated subquery is one where the subquery can be run independently of the outer query; basically, the subquery has no relationship with the outer query.
Example:
select * from employee where id in (select employee_id from department where dept_id=10);
Here the subquery / inner query is not dependent on the outer query.
A correlated subquery has the opposite property: the subquery cannot be run independently of the outer query.
SELECT * FROM Employee Emp1
WHERE (1) = (
    SELECT COUNT(DISTINCT(Emp2.Salary))
    FROM Employee Emp2
    WHERE Emp2.Salary > Emp1.Salary)
What you will notice in the correlated subquery above is that the inner subquery uses Emp1.Salary, but the alias Emp1 is created in the outer query. This is why it is called a correlated subquery: the subquery references a value in its WHERE clause (in this case, a column belonging to Emp1) that comes from the outer query.
14) If the parent class is Serializable, should the child class also be a Serializable class?
Yes, a subclass IS-A superclass, and hence the child IS-A Serializable too.
However, if the question is whether an instance of the subclass can actually be serialized, then maybe not, as it is possible for a subclass to provide readObject() and writeObject() methods that throw a NotSerializableException.
Puzzles:
1) You are in a room on the 5th floor of a building with 3 bulbs, and the switches for these bulbs are on the ground floor. You can go down only once. How do you work out which switch controls which bulb? [Tip: use the heat of a bulb.]
2) You have 1000 teams and each team plays knock-out matches. What is the minimum number of matches you need to schedule to figure out the winner?
3) Write a program to get the square root of 100 without using the Java Math square root function. (A sketch follows.)
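For puzzle 3, one possible sketch uses Newton's method (binary search is another valid approach); only Math.abs is used, not the library square root:

public class SquareRoot {
    // Assumes n > 0; iterates until the guess is close enough
    static double sqrt(double n) {
        double guess = n / 2.0;
        while (Math.abs(guess * guess - n) > 1e-9) {
            guess = (guess + n / guess) / 2.0;   // Newton's iteration
        }
        return guess;
    }

    public static void main(String[] args) {
        System.out.println(sqrt(100));           // prints a value very close to 10.0
    }
}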
Management Round: 1) Tell me about yourself?
2) Why do you want to leave your current job?
3) Why do you want to join this bank? [Research the history of the bank, the current CEO and the latest mergers]
4) What are your key strengths and weaknesses?
5) Tell me how you manage your team?
HR Round:
1) Why would you like to leave current role?
2) What is your long-term plan, and how would you like to grow your career?
[Answers to these Mgmt/HR questions are listed here.] The overall process took around 1.5 months from the first round to the final job offer. One more point: there was on-site puzzle solving in the first round using Eclipse, and we had to write JUnit test cases to demonstrate an understanding of Test Driven Development. More interview questions from investment bank job interviews
'Java latest changes' or 'Java 1.7 changes' are covered in this topic. Java 1.7 [Dolphin] is the first major feature release since the Java 1.5 [Tiger] release. It was released on 2011-07-28, around five years after Java 1.6 [Mustang]. These are the major changes in the Java 1.7 release, and the most important is definitely the auto-closing ('AutoCloseable') of resources.
1) AutoCloseable try-with-resources statement – Java 1.7 introduces the new try-with-resources statement, in which one or more resources can be declared and initialized. Only resources that implement the interface "java.lang.AutoCloseable" can be declared this way. Example:
try (BufferedReader bufferedReader = new BufferedReader(new FileReader(path))) {
    return bufferedReader.readLine();
}
In this code snippet, the bufferedReader instance is created within the try statement. Note that the example does not include a finally block with code to close bufferedReader, as would be needed in Java 1.6 or earlier versions. Java 1.7 automatically closes resources that are declared within the try-with-resources statement, as shown above.
2) Catch Block Handling Multiple Exceptions – In Java 1.5 and Java 1.6, a catch block can handle only one type of exception. But in Java 1.7 and later versions, a single catch block can handle multiple exceptions. Here is an example showing catch blocks in Java 1.6:
try {
    // ...
} catch (SQLException exp1) {
    throw exp1;
} catch (IOException exp2) {
    throw exp2;
}
The same code snippet can be modified in Java 1.7 as:
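A sketch of the equivalent Java 1.7 multi-catch (the exception variable name is arbitrary):

try {
    // ...
} catch (SQLException | IOException exp) {
    throw exp;    // one catch block handles both exception types
}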
3) String Object as Expression in Switch Statement – Before Java 1.7, only integral types could be used as the expression in a switch statement. Java 1.7 also permits a String object as a valid expression. Example:
switch (sampleString) {
    case "CASE1":
        System.out.println("CASE1");
        break;
    default:
        System.out.println("No match");
}
Here the switch expression is a String variable (sampleString); its value is matched with every case label, and when the string content matches a case label the corresponding case gets executed.
4) JDBC in Java 1.7
The JDBC contained in Java 1.7 / Java SE 7 is JDBC 4.1, which is newly introduced. JDBC 4.1 is more efficient than JDBC 4.0.
5) Language Enhancements in Java 1.7
Java 1.7 introduces many language enhancements:
Integral Types as Binary Literals – In Java 1.7 / Java SE 7, the integral types namely byte, short, int and long can also be expressed with the binary number system. To specify these integral types as binary literals, add the prefix 0B or 0b to number. For example, here is a byte literal represented as 8-bit binary number:
byte sampleByte = (byte)0b01001101;
Underscores Between Digits in Numeric Literals – In Java 1.7 and all later versions, "_" can be used between digits in any numeric literal. "_" groups the digits, similar to what "," does when a large number is written out. But "_" can be placed only between digits, not at the beginning or end of the number. Example:
long NUMBER = 444_67_3459L;
Automatic Type Inference during the Generic Instance Creation – In Java 1.7 while creating a generic instance, empty parameters namely <> can be specified instead of specifying the exact type arguments. However, this is permitted only in cases where the compiler can infer the appropriate type arguments. For example, in Java 1.7 you can specify:
Map<String, String> sampleMap = new HashMap<>();
Thus new HashMap<>() can be written instead of new HashMap<String, String>(). These empty <> type parameters introduced in Java 1.7 are called the diamond operator.
6) Suppress Warnings - When declaring a varargs method that includes parameterized types, if the body of the varargs method does not throw exceptions such as ClassCastException (which occurs due to improper handling of the varargs formal parameter), then the warnings can be suppressed in Java 1.7 in three different ways:
(1) Add the @SafeVarargs annotation to the method declaration (allowed on static methods, final instance methods and constructors)
(2) Add the annotation @SuppressWarnings({"unchecked", "varargs"}) to the varargs method declaration
(3) Use the compiler option "-Xlint:varargs" directly.
Suppressing the warnings in a varargs method prevents unchecked warnings at compile time; these warnings exist because improper handling of the varargs parameter can lead to heap pollution.
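As an illustration of option (1), here is a sketch of a @SafeVarargs method (the method name and body are made up for the example):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SafeVarargsDemo {
    // Asserts that the method does not misuse the varargs array,
    // suppressing the "unchecked generic array creation" warnings at call sites.
    @SafeVarargs
    static <T> List<T> toList(T... elements) {
        return new ArrayList<T>(Arrays.asList(elements));
    }

    public static void main(String[] args) {
        List<String> names = toList("a", "b", "c");
        System.out.println(names);
    }
}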
7) Java Virtual Machine Enhancements in Java 1.7
Java SE 7 / Java 1.7 introduces a new JVM instruction called "invokedynamic". Using the "invokedynamic" instruction, implementations of dynamically typed programming languages become simpler. Thus Java 1.7 improves JVM support for non-Java languages.
Getting a new job can look like very tedious work, and it needs preparation. The better you are prepared on the important topics, the more chances you have of winning the interview. It is not just chance; it is your hard work which makes the way for you. After a long time, I again started giving interviews for my next move. This time I am looking for a more senior role, similar to technical manager, but I still need to clear the Java technical round, and here are those questions:
**Winning attitude is the key for interviews**
Questions:
1) Your current project and role/responsibilities?
2) Explain your current application and its design?
3) Explain JMS transactions and how you can control the transaction from receiving the message to persisting the data into the DB?
4) How to handle DB transactions in Spring?
5) What are the benefits of Spring?
6) HashMap vs TreeMap - which one to use and how to decide?
7) How to create immutable objects?
8) What are annotations and what are their benefits?
9) How does JAXB XML parsing work?
10) Threading - what happens if the same object instance is read by two threads?
11) If one reader thread and one writer thread act on one object's double variable, and the writer thread is trying to update the value from 4 to 5, and both make this call at the same time, what value will the reader thread see for the double variable? [Hint: writing a double is not atomic in Java]
12) If you have a Java application cache with 1000 reads but very few writes into the cache, which locking will you use and how? Tell me the class name. (See the sketch after this list.)
Suddenly the interview moved to the web application technology side and they asked these questions:
1) What is the application context in a web app?
2) What does the dispatcher servlet do in Spring MVC?
Two design discussions:
1) A large file contains text; how can we find the word count in the file, and which collection should we use?
2) We have Professor, Course and Student tables; one professor can teach multiple courses and one student can take multiple courses. How do we find out which professor has the maximum number of students? Write the query and the table design.
Best of luck for any upcoming interviews. Important topics to cover for the interview: 1) Java 1.5 details 2) Java Collection interview questions 3) XML parsing 4) JDBC interview questions 5) Hibernate interview questions 6) Java threading questions 7) Java concurrency questions 8) JMS interview questions 9) Spring Framework
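For question 12, the class the interviewer usually has in mind is java.util.concurrent.locks.ReentrantReadWriteLock. A minimal sketch of a read-mostly cache (class and method names are assumptions for the example):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-mostly cache: many concurrent readers, exclusive (rare) writers
public class ReadMostlyCache<K, V> {
    private final Map<K, V> map = new HashMap<K, V>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock();           // shared lock: readers do not block each other
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock();          // exclusive lock for the rare writes
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}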
This interview was scheduled in Aug 2011 with one of the BIG 4 US banks. The position was for a senior Java developer, and they were looking for a strong Java candidate with problem-solving skills. They had a rule-based Java application on which they were facing performance issues. The application was using Hibernate, Spring, Java 1.5 and JMS with an Oracle database.
1st Round) It was a quick-fire round: a 20-minute telephonic interview.
Technical questions:
Benefits of using Hibernate in application?
How to manage transactions in Hibernate, and what are the transaction attributes?
How to avoid phantom reads in database operations?
What is Queue and Topic and How to manage transaction in JMS?
What is garbage collection?
What needs to be taken care of for Java performance?
How to do application profiling?
What is a database index and what is a clustered index?
Difference between database functions/procedures and triggers?
2) Second Round tech. interview
It was the second round, a face-to-face technical interview, and it was totally focused on Hibernate. I was not sure why the interviewer was asking questions only on Hibernate.
What is cascade="all" in a Hibernate mapping?
How to maintain the same object in two Hibernate sessions?
What is Session.merge()?
What is a datasource and what are its benefits?
what is transaction management?
what is DurableSubscriber?
How to avoid circular dependency?
How to do transaction management ?
How do HashMap get and put work?
How to design a big application where lots of requests come at the same time and your application should be able to handle all requests? Why should we use JMS, and what are the alternatives?
Difference between Runnable and Callable?
Difference between ArrayBlockingQueue and ConcurrentLinkedQueue?
How does the ExecutorService work?
Have you used ConcurrentHashMap?
How are tasks submitted in the executor framework?
What are the benefits of the Java 1.5 concurrency package?
What is the Drools rule engine?
My second round went well, except for the deep questions on Hibernate. I was able to give answers, but they were not satisfied with my answers on Hibernate, as the last time I had worked with Hibernate was two years earlier, and I finally got rejected here.
These are the top 10 things we need to take care of during a face-to-face technical interview. Very few people master these, and with these basic skills they clear technical interviews very easily. The first thing to understand is that interviewers are often not prepared with very specific questions, so they look for the basics: your confidence, your dressing and your attitude in the interview. These soft skills can be improved with practice.
1) Practice Good Nonverbal Communication
It's about demonstrating confidence: standing straight, making eye contact and connecting with a good, firm handshake. That first impression can be a great beginning -- or quick ending -- to your interview.
2) Dress for the Job or Company
Today's casual dress codes do not give you permission to dress as "they" do when you interview. It is important to look professional and well-groomed. Whether you wear a suit or something less formal depends on the company culture and the position you are seeking. If possible, call to find out about the company dress code before the interview.
3) Listen
From the very beginning of the interview, your interviewer is giving you information, either directly or indirectly. If you are not hearing it, you are missing a major opportunity. Good communication skills include listening and letting the person know you heard what he said. Observe your interviewer, and match that style and pace.
4) Don't Talk Too Much
Telling the interviewer more than he needs to know could be a fatal mistake. When you have not prepared ahead of time, you may tend to ramble, sometimes talking yourself right out of the job. Prepare for the interview by reading through the job posting, matching your skills with the position's requirements and relating only that information.
5) Don't Be Too Familiar
The interview is a professional meeting to talk business. This is not about making a new friend. Your level of familiarity should mimic the interviewer's demeanour. It is important to bring energy and enthusiasm to the interview and to ask questions, but do not overstep your place as a candidate looking for a job.
6) Use Appropriate Technical Language
It's a given that you should use technical language during the interview. Be aware of any inappropriate slang words or references to age, race, religion, politics or sexual orientation -- these topics could send you out the door very quickly.
7) Show Attitude
Attitude plays a key role in your interview success. There is a fine balance between confidence, professionalism and modesty. Even if you're putting on a performance to demonstrate your ability, overconfidence is as bad as, if not worse than, being too reserved.
8) Take Care to Answer the Questions
When an interviewer asks for an example of a time when you did something, he is seeking a sample of your past behaviour. If you fail to relate a specific example, you not only don't answer the question, but you also miss an opportunity to prove your ability and talk about your skills.
9) Ask Questions
When asked if they have any questions, most candidates answer, "No." Wrong answer. It is extremely important to ask questions to demonstrate an interest in what goes on in the company. Asking questions also gives you the opportunity to find out if this is the right place for you. The best questions come from listening to what is asked during the interview and asking for additional information.
10) Don't show Desperation
When you interview with the "please, please hire me" approach, you appear desperate and less confident. Maintain the three C's during the interview: cool, calm and confident. You know you can do the job; make sure the interviewer believes you can, too.
General Questions
1. Tell us about yourself? Keep your answer very simple and brief. Don't prolong it by telling your family history and schooling. Be brief and focus more on your skills, initiatives and adaptability.
2. Why should we hire you? Because I have all the attributes that this role requires. Knowledge, experience, skills and abilities. You need to be confident while replying and no vague answers.
3. Why are you looking for a change? or Why do you want to leave your company? Be positive while answering. You want to work with a company where you can make a long term career. Where you can use your skills and learn new skills. Be honest if there was any retrenchment in the previous company e.g. Our Department was consolidated or eliminated.
4. What are your strengths? Your strengths should relate to the company and job opening. I have a proven track record as an achiever. Positive attitude, good sense of humour, good communication skills, dedicated, team player, willingness to walk the extra mile to achieve excellence, etc. Your answer should highlight the qualities that will help you succeed in this particular job. (Back up each point with something specific.) Give examples and quantify how your strengths benefited your previous employers. You should also demonstrate reliability, and the ability to stick with a difficult task yet change course rapidly when required.
5. What are your weaknesses? Never say you do not have any weak points. Try not to reveal your personal characteristics. For example: "I often get impatient with others' sloppy work."
The best “weaknesses” are disguised as strengths, such as “I dislike not being challenged at work”. Another good approach is to mention a weakness that is irrelevant for the job or one that can be overcome with training. Try to keep these to one weakness, explaining why you think it is a weakness and what you are doing to overcome the problem – a well thought out strategy you have developed to deal with the issue will turn this potentially tricky question into a positive.
One common variation on this question is to ask about any problems or failures you’ve encountered in previous positions. In describing problems, pick ones you’ve solved and describe how you overcame it. Show yourself to be a good team player by crediting co-workers for all their contributions. To distance yourself from failure, pick one that occurred earlier in your career when you were still learning. Don’t blame others – simply explain how you analysed your mistake and learned from it.
6. What challenges did you face in your previous jobs? Getting things planned and done on time within the budget. Quote any example that you have experienced.
7. How will you motivate your team? The bottom line is: do it, show it and inspire. Involve all the members in the ongoing development and progress of the company. Communicate and interact with the team members; they want regular updates on their personal performance, so keep them updated. Celebrate individual and team performance. Catch people doing something right and focus on recognising excellent performance. Set challenging goals; the team will work hard to accomplish them. Believe in your people; the majority of them want to perform. Motivate employees for the next level. Be consistent and transparent with the team. Let the members know why an assigned task is important to you, the organisation and them. Set the example for others to follow.
8. “What’s the worst problem you’ve ever faced?”
Here the interviewer is offering you the two ways to trip yourself up:
First of all, the question doesn’t confine itself to the workplace, so there is temptation to reveal a personal problem. Don’t! Restrict yourself to employment matters only.
Second, you are being asked to reveal a weakness or error again. You must have a good response ready for this question, one which shows how well you reacted when everything depended on it.
Always show a problem you have solved and concentrate your answer on the solution not the problem.
9. “How would you describe a typical day in your current job?”
You are eager to look good, but don't make the common mistake of exaggerating your current position. Mentioning some of the routine tasks in your day adds realism to your description and shows that you don't neglect important details such as paperwork. Put yourself in the interviewer's place as you answer. When you've been doing a job for years it becomes second nature to you, and you may no longer be aware of all the tasks you undertake. You should spend a few days making notes of your activities at work to regain an outsider's perspective. Try to show that you make good use of your time, that you plan before you begin your work and that you review your achievements at the end of it.
'Java Consumer Producer' example - This is one of the frequently asked questions for senior core Java developers. A Java concurrency producer-consumer solution is demonstrated below. Java concurrency queuing options: the Java concurrent executors and their task-holding queues can be configured in three ways -
Direct handoffs : A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
Unbounded queues : Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
Bounded queues : A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
Here is an example of PriorityBlockingQueue, showing how Comparable is used to prioritise tasks: the priority depends on the implementation of the compareTo method in the task. The blocking queue here uses the default constructor, which gives an initial capacity of 11; the queue itself is logically unbounded. The queue is ordered by priority, and the least element according to that ordering is taken first. From the Java docs: "An unbounded blocking queue that uses the same ordering rules as class PriorityQueue and supplies blocking retrieval operations. While this queue is logically unbounded, attempted additions may fail due to resource exhaustion (causing OutOfMemoryError). This class does not permit null elements. A priority queue relying on natural ordering also does not permit insertion of non-comparable objects (doing so results in ClassCastException). This class and its iterator implement all of the optional methods of the Collection and Iterator interfaces. The Iterator provided in method iterator() is not guaranteed to traverse the elements of the PriorityBlockingQueue in any particular order. If you need ordered traversal, consider using Arrays.sort(pq.toArray()). Also, method drainTo can be used to remove some or all elements in priority order and place them in another collection."
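The original listing is not reproduced in this post, so here is a minimal sketch (the Task fields and names are assumptions) of tasks ordered by a compareTo implementation and consumed from a PriorityBlockingQueue:

import java.util.concurrent.PriorityBlockingQueue;

public class PriorityTaskExample {
    // Hypothetical task type; priority ordering comes from compareTo
    static class Task implements Comparable<Task> {
        final int priority;
        final String name;
        Task(int priority, String name) { this.priority = priority; this.name = name; }

        @Override
        public int compareTo(Task other) {
            return Integer.compare(this.priority, other.priority);  // lower number = taken first
        }

        @Override
        public String toString() { return name + " (priority " + priority + ")"; }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<Task>();  // default initial capacity 11
        queue.put(new Task(5, "report"));
        queue.put(new Task(1, "trade"));
        queue.put(new Task(3, "audit"));

        // take() blocks when empty and returns tasks in priority order
        while (!queue.isEmpty()) {
            System.out.println(queue.take());
        }
    }
}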
Output:
Consumed Data [number=2, name=two] producer 0 Consumed Data [number=10, name=ten] producer 1 Consumed Data [number=20, name=twenty] producer 2 Consumed Data [number=0, name=0] producer 3 Consumed Data [number=1, name=1] producer 4 Consumed Data [number=2, name=2] producer 5 Consumed Data [number=3, name=3]
Consumer Producer Solution with Synchronisation:
This implementation with synchronisation needs great care and is more complicated to implement:
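The original listing is not included here; the following is a minimal wait()/notify() sketch of the idea, with assumed class and method names:

public class SharedCell {
    private int value;
    private boolean available = false;

    public synchronized int get() throws InterruptedException {
        while (!available) {
            wait();                  // wait for the producer to put a value
        }
        available = false;
        notifyAll();                 // wake up the producer
        System.out.println("Got: " + value);
        return value;
    }

    public synchronized void put(int newValue) throws InterruptedException {
        while (available) {
            wait();                  // wait for the consumer to take the last value
        }
        value = newValue;
        available = true;
        notifyAll();                 // wake up the consumer
        System.out.println("Put: " + value);
    }

    public static void main(String[] args) {
        final SharedCell cell = new SharedCell();
        new Thread(new Runnable() {              // producer
            public void run() {
                try {
                    for (int i = 0; i < 5; i++) cell.put(i);
                } catch (InterruptedException ignored) { }
            }
        }).start();
        new Thread(new Runnable() {              // consumer
            public void run() {
                try {
                    for (int i = 0; i < 5; i++) cell.get();
                } catch (InterruptedException ignored) { }
            }
        }).start();
    }
}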
Output
The output confirms that there is no concurrency issue when putting data into the same queue, and wait() and notify() work perfectly fine.
Got: 53613 Put: 53614 Got: 53614 Put: 53615 Got: 53615 Put: 53616 Got: 53616 Put: 53617 Got: 53617
Java Concurrency interview questions - Around 2004, when technology gurus were saying that innovation in Java had slowed down, Sun Microsystems [now Oracle] came out with the Tiger release, Java 1.5, with very important changes; the most important features were concurrency and new programming constructs. This has been a hot topic in Java interviews for the past few years. Interviewers mainly focus on the Java 1.5 concurrent package, and they can ask how to use these changes and what the benefits are. They will focus on how the concurrency utilities are better than synchronisation, and how executors are better than the old Java thread implementation. How can you avoid locks? What are the Java memory model changes for volatile, etc.? In investment banks we need to work on multithreaded applications due to high volumes, so a clear understanding of this topic is very important. This topic is key for clearing any core Java interview.
Most of these features are implemented in the new java.util.concurrent packages. There are also new concurrent data structures in the Java Collections Framework.
Lock objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
1. What are synchronized methods?
The Java programming language provides two basic synchronization idioms: synchronized methods and synchronized statements. The more complex of the two, synchronized statements, are described in the next section. This section is about synchronized methods.
To make a method synchronized, simply add the synchronized keyword to its declaration:
public class SynchronizedCounter {
    private int c = 0;

    public synchronized void increment() { c++; }
    public synchronized void decrement() { c--; }
    public synchronized int value() { return c; }
}
If count is an instance of SynchronizedCounter, then making these methods synchronized has two effects:
• First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
• Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.
2. What is Intrinsic Locks and Synchronization ?
Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a "monitor.") Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility.
Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and then release the intrinsic lock when it's done with them. A thread is said to own the intrinsic lock between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire the lock.
When a thread releases an intrinsic lock, a happens-before relationship is established between that action and any subsequent acquisition of the same lock.
3. What is Reentrant Synchronization ?
Recall that a thread cannot acquire a lock owned by another thread. But a thread can acquire a lock that it already owns. Allowing a thread to acquire the same lock more than once enables reentrant synchronization. This describes a situation where synchronized code, directly or indirectly, invokes a method that also contains synchronized code, and both sets of code use the same lock. Without reentrant synchronization, synchronized code would have to take many additional precautions to avoid having a thread cause itself to block.
4. What is Deadlock ?
Deadlock describes a situation where two or more threads are blocked forever, waiting for each other. Here's an example.
Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time. It will create deadlock between them.
public class DeadlockS {
    static class Friend {
        private final String name;

        public Friend(String name) { this.name = name; }

        public String getName() { return this.name; }

        public synchronized void bow(Friend bower) {
            System.out.format("%s: %s has bowed to me!%n", this.name, bower.getName());
            bower.bowBack(this);
        }

        public synchronized void bowBack(Friend bower) {
            System.out.format("%s: %s has bowed back to me!%n", this.name, bower.getName());
        }
    }

    public static void main(String[] args) {
        final Friend alphonse = new Friend("Alphonse");
        final Friend gaston = new Friend("Gaston");
        new Thread(new Runnable() {
            public void run() { alphonse.bow(gaston); }
        }).start();
        new Thread(new Runnable() {
            public void run() { gaston.bow(alphonse); }
        }).start();
    }
}
5. What is Starvation and Livelock ?
Starvation
Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads. For example, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.
Livelock
A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They're still blocking each other, so...
6. What are Immutable Objects?
An object is considered immutable if its state cannot change after it is constructed. Maximum reliance on immutable objects is widely accepted as a sound strategy for creating simple, reliable code.
Immutable objects are particularly useful in concurrent applications. Since they cannot change state, they cannot be corrupted by thread interference or observed in an inconsistent state.
Programmers are often reluctant to employ immutable objects, because they worry about the cost of creating a new object as opposed to updating an object in place. The impact of object creation is often overestimated, and can be offset by some of the efficiencies associated with immutable objects. These include decreased overhead due to garbage collection, and the elimination of code needed to protect mutable objects from corruption.
The following subsections take a class whose instances are mutable and derives a class with immutable instances from it. In so doing, they give general rules for this kind of conversion and demonstrate some of the advantages of immutable objects.
7. What should be Strategy for Defining Immutable Objects ?
The following rules define a simple strategy for creating immutable objects. Not all classes documented as "immutable" follow these rules. This does not necessarily mean the creators of these classes were sloppy — they may have good reason for believing that instances of their classes never change after construction. However, such strategies require sophisticated analysis and are not for beginners.
1. Don't provide "setter" methods — methods that modify fields or objects referred to by fields.
2. Make all fields final and private.
3. Don't allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
4. If the instance fields include references to mutable objects, don't allow those objects to be changed:
• Don't provide methods that modify the mutable objects.
• Don't share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies, and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals in your methods.
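A small hypothetical value class following these rules (the class and field names are illustrative):

import java.math.BigDecimal;

public final class Money {                 // final: no subclassing (rule 3)
    private final String currency;         // all fields private and final (rules 1 and 2)
    private final BigDecimal amount;       // BigDecimal is itself immutable

    public Money(String currency, BigDecimal amount) {
        this.currency = currency;
        this.amount = amount;
    }

    public String getCurrency() { return currency; }
    public BigDecimal getAmount() { return amount; }

    // No setters; operations return a new instance instead of mutating this one
    public Money add(Money other) {
        return new Money(currency, amount.add(other.amount));
    }
}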
8. What are High Level Concurrency Objects ?
So far, we have focused on the low-level APIs that have been part of the Java platform from the very beginning. These APIs are adequate for very basic tasks, but higher-level building blocks are needed for more advanced tasks. This is especially true for massively concurrent applications that fully exploit today's multiprocessor and multi-core systems.
In this section we'll look at some of the high-level concurrency features introduced with version 5.0 of the Java platform. Most of these features are implemented in the new java.util.concurrent packages. There are also new concurrent data structures in the Java Collections Framework.
• Lock objects support locking idioms that simplify many concurrent applications.
• Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
• Concurrent collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
• Atomic variables have features that minimize synchronization and help avoid memory consistency errors.
• ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads
9. What are Executors?
In large-scale applications, it makes sense to separate thread management and creation from the rest of the application. Objects that encapsulate these functions are known as executors. The following subsections describe executors in detail.
• Executor Interfaces define the three executor object types.
• Thread Pools are the most common kind of executor implementation.
• Fork/Join is a framework (new in JDK 7) for taking advantage of multiple processors.
Executor Interfaces
The java.util.concurrent package defines three executor interfaces:
• Executor, a simple interface that supports launching new tasks.
• ExecutorService, a subinterface of Executor, which adds features that help manage the lifecycle, both of the individual tasks and of the executor itself.
• ScheduledExecutorService, a subinterface of ExecutorService, supports future and/or periodic execution of tasks.
Typically, variables that refer to executor objects are declared as one of these three interface types, not with an executor class type.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadWithResultExample {
    static ExecutorService exec = Executors.newCachedThreadPool();

    public static void main(String... strings) {
        Future<String> result = exec.submit(new Worker());
        try {
            System.out.println(result.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        exec.shutdown();
    }
}

class Worker implements Callable<String> {
    @Override
    public String call() throws Exception {
        return "result";
    }
}
10. What are Thread Pools?
Most of the executor implementations in java.util.concurrent use thread pools, which consist of worker threads. This kind of thread exists separately from the Runnable and Callable tasks it executes and is often used to execute multiple tasks. Using worker threads minimizes the overhead due to thread creation. Thread objects use a significant amount of memory, and in a large-scale application, allocating and deallocating many thread objects creates a significant memory management overhead. One common type of thread pool is the fixed thread pool. This type of pool always has a specified number of threads running; if a thread is somehow terminated while it is still in use, it is automatically replaced with a new thread. Tasks are submitted to the pool via an internal queue, which holds extra tasks whenever there are more active tasks than threads.
An important advantage of the fixed thread pool is that applications using it degrade gracefully. To understand this, consider a web server application where each HTTP request is handled by a separate thread. If the application simply creates a new thread for every new HTTP request, and the system receives more requests than it can handle immediately, the application will suddenly stop responding to all requests when the overhead of all those threads exceed the capacity of the system. With a limit on the number of the threads that can be created, the application will not be servicing HTTP requests as quickly as they come in, but it will be servicing them as quickly as the system can sustain.
A simple way to create an executor that uses a fixed thread pool is to invoke the newFixedThreadPool factory method in java.util.concurrent.Executors. This class also provides the following factory methods:
• The newCachedThreadPool method creates an executor with an expandable thread pool. This executor is suitable for applications that launch many short-lived tasks.
• The newSingleThreadExecutor method creates an executor that executes a single task at a time.
• Several factory methods are ScheduledExecutorService versions of the above executors.
If none of the executors provided by the above factory methods meet your needs, constructing instances of java.util.concurrent.ThreadPoolExecutor or java.util.concurrent.ScheduledThreadPoolExecutor will give you additional options.
11. What is Fork/Join ?
New in the Java SE 7 release, the fork/join framework is an implementation of the ExecutorService interface that helps you take advantage of multiple processors. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to make your application wicked fast.
As with any ExecutorService, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealing algorithm. Worker threads that run out of things to do can steal tasks from other threads that are still busy.
The center of the fork/join framework is the ForkJoinPool class, an extension of AbstractExecutorService. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTasks.
Basic Use
Using the fork/join framework is simple. The first step is to write some code that performs a segment of the work. Your code should look similar to this:
if (my portion of the work is small enough)
do the work directly
else
split my work into two pieces
invoke the two pieces and wait for the results
Wrap this code as a ForkJoinTask subclass, typically as one of its more specialized types, RecursiveTask (which can return a result) or RecursiveAction.
After your ForkJoinTask is ready, create one that represents all the work to be done and pass it to the invoke() method of a ForkJoinPool instance.
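A small sketch of that recipe, summing an array with a RecursiveTask (the class name and threshold are arbitrary choices for the example):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int start, end;

    public SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;                               // small enough: do the work directly
        }
        int mid = (start + end) / 2;
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                                  // run the left half asynchronously
        return right.compute() + left.join();         // compute the right half, then wait for the left
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long result = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println("Sum = " + result);
    }
}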
12. What are Concurrent Collections?
The java.util.concurrent package includes a number of additions to the Java Collections Framework. These are most easily categorized by the collection interfaces provided:
• BlockingQueue defines a first-in-first-out data structure that blocks or times out when you attempt to add to a full queue, or retrieve from an empty queue.
• ConcurrentMap is a subinterface of java.util.Map that defines useful atomic operations. These operations remove or replace a key-value pair only if the key is present, or add a key-value pair only if the key is absent. Making these operations atomic helps avoid synchronization. The standard general-purpose implementation of ConcurrentMap is ConcurrentHashMap, which is a concurrent analog of HashMap.
• ConcurrentNavigableMap is a subinterface of ConcurrentMap that supports approximate matches. The standard general-purpose implementation of ConcurrentNavigableMap is ConcurrentSkipListMap, which is a concurrent analog of TreeMap.
All of these collections help avoid Memory Consistency Errors by defining a happens-before relationship between an operation that adds an object to the collection with subsequent operations that access or remove that object.
package com.learning.thread;

import java.util.concurrent.ArrayBlockingQueue;

public class MyArrayBlockingQueue {
    // Size 10 and fair policy
    ArrayBlockingQueue<String> abq = new ArrayBlockingQueue<String>(10, true);

    String getData() {
        return abq.poll();
    }

    void setData(String e) {
        abq.add(e);
    }

    public static void main(String... strings) {
        final MyArrayBlockingQueue queue = new MyArrayBlockingQueue();

        // Iterate like a collection
        for (String s : queue.abq) {
            System.out.println(s);
        }

        // Data producer
        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    queue.setData(String.valueOf(i));
                }
            }
        }).start();

        // Consumer
        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    System.out.println(queue.getData());
                }
            }
        }).start();
    }
}
13. Can you pass a Thread object to Executor.execute? Would such an invocation make sense? Why or why not ?
Thread implements the Runnable interface, so you can pass an instance of Thread to Executor.execute. However it doesn't make sense to use Thread objects this way. If the object is directly instantiated from Thread, its run method doesn't do anything. You can define a subclass of Thread with a useful run method — but such a class would implement features that the executor would not use.
14. What is BlockingQueue ?
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
BlockingQueue methods come in four forms, with different ways of handling operations that cannot be satisfied immediately, but may be satisfied at some point in the future: one throws an exception, the second returns a special value (either null or false, depending on the operation), the third blocks the current thread indefinitely until the operation can succeed, and the fourth blocks for only a given maximum time limit before giving up
A BlockingQueue does not accept null elements. Implementations throw NullPointerException on attempts to add, put or offer a null. A null is used as a sentinel value to indicate failure of poll operations.
A BlockingQueue may be capacity bounded. At any given time it may have a remainingCapacity beyond which no additional elements can be put without blocking. A BlockingQueue without any intrinsic capacity constraints always reports a remaining capacity of Integer.MAX_VALUE.
BlockingQueue implementations are designed to be used primarily for producer-consumer queues, but additionally support the Collection interface. So, for example, it is possible to remove an arbitrary element from a queue using remove(x). However, such operations are in general not performed very efficiently, and are intended for only occasional use, such as when a queued message is cancelled.
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless specified otherwise in an implementation. So it is possible, for example, for addAll(c) to fail (throwing an exception) after adding only some of the elements in c.
15. Describe ArrayBlockingQueue, DelayQueue and LinkedBlockingQueue ?
ArrayBlockingQueue: A bounded blocking queue backed by an array. This queue orders elements FIFO (first-in-first-out). The head of the queue is that element that has been on the queue the longest time. The tail of the queue is that element that has been on the queue the shortest time. New elements are inserted at the tail of the queue, and the queue retrieval operations obtain elements at the head of the queue.
This is a classic "bounded buffer", in which a fixed-sized array holds elements inserted by producers and extracted by consumers. Once created, the capacity cannot be changed. Attempts to put an element into a full queue will result in the operation blocking; attempts to take an element from an empty queue will similarly block.
This class supports an optional fairness policy for ordering waiting producer and consumer threads. By default, this ordering is not guaranteed. However, a queue constructed with fairness set to true grants threads access in FIFO order. Fairness generally decreases throughput but reduces variability and avoids starvation.
DelayQueue: An unbounded blocking queue of Delayed elements, in which an element can only be taken when its delay has expired. The head of the queue is that Delayed element whose delay expired furthest in the past. If no delay has expired there is no head and poll will return null. Expiration occurs when an element's getDelay(TimeUnit.NANOSECONDS) method returns a value less than or equal to zero. Even though unexpired elements cannot be removed using take or poll, they are otherwise treated as normal elements. For example, the size method returns the count of both expired and unexpired elements. This queue does not permit null elements.
This class and its iterator implement all of the optional methods of the Collection and Iterator interfaces.
LinkedBlockingQueue: An optionally-bounded blocking queue based on linked nodes. This queue orders elements FIFO (first-in-first-out). The head of the queue is that element that has been on the queue the longest time. The tail of the queue is that element that has been on the queue the shortest time. New elements are inserted at the tail of the queue, and the queue retrieval operations obtain elements at the head of the queue. Linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications. The optional capacity bound constructor argument serves as a way to prevent excessive queue expansion. The capacity, if unspecified, is equal to Integer.MAX_VALUE. Linked nodes are dynamically created upon each insertion unless this would bring the queue above capacity.
This class and its iterator implement all of the optional methods of the Collection and Iterator interfaces.
16. What are Deque and BlockingDeque?
Deque - A linear collection that supports element insertion and removal at both ends. The name deque is short for "double ended queue" and is usually pronounced "deck". Most Deque implementations place no fixed limits on the number of elements they may contain, but this interface supports capacity-restricted deques as well as those with no fixed size limit. This interface defines methods to access the elements at both ends of the deque. Methods are provided to insert, remove, and examine the element. Each of these methods exists in two forms: one throws an exception if the operation fails, the other returns a special value (either null or false, depending on the operation). The latter form of the insert operation is designed specifically for use with capacity-restricted Deque implementations; in most implementations, insert operations cannot fail.
A BlockingDeque that additionally supports blocking operations that wait for the deque to become non-empty when retrieving an element, and wait for space to become available in the deque when storing an element. BlockingDeque methods come in four forms, with different ways of handling operations that cannot be satisfied immediately, but may be satisfied at some point in the future: one throws an exception, the second returns a special value (either null or false, depending on the operation), the third blocks the current thread indefinitely until the operation can succeed, and the fourth blocks for only a given maximum time limit before giving up.
17. What is Semaphore ?
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly. Semaphores are often used to restrict the number of threads that can access some (physical or logical) resource.
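A small sketch (names are assumptions) limiting concurrent access to a resource to three threads at a time:

import java.util.concurrent.Semaphore;

public class ConnectionLimiter {
    private static final Semaphore permits = new Semaphore(3);   // 3 permits

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        permits.acquire();             // blocks if no permit is available
                        System.out.println("Thread " + id + " acquired a permit");
                        Thread.sleep(100);             // simulate work with the limited resource
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        permits.release();             // return the permit
                    }
                }
            }).start();
        }
    }
}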
18. What is CountDownLatch ?
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
A CountDownLatch is a versatile synchronization tool and can be used for a number of purposes. A CountDownLatch initialized with a count of one serves as a simple on/off latch, or gate: all threads invoking await wait at the gate until it is opened by a thread invoking countDown(). A CountDownLatch initialized to N can be used to make one thread wait until N threads have completed some action, or some action has been completed N times.
A useful property of a CountDownLatch is that it doesn't require that threads calling countDown wait for the count to reach zero before proceeding; it simply prevents any thread from proceeding past an await until all threads could pass.
Sample usage: here is a pair of classes in which a group of worker threads use two countdown latches:
• The first is a start signal that prevents any worker from proceeding until the driver is ready for them to proceed;
• The second is a completion signal that allows the driver to wait until all workers have completed.
package com.learning.thread;

import java.util.concurrent.CountDownLatch;

public class LatchTest {
    private static final int COUNT = 10;

    static class Worker implements Runnable {
        private final CountDownLatch startLatch;
        private final CountDownLatch stopLatch;
        private final String name;

        Worker(CountDownLatch startLatch, CountDownLatch stopLatch, String name) {
            this.startLatch = startLatch;
            this.stopLatch = stopLatch;
            this.name = name;
        }

        public void run() {
            try {
                startLatch.await();   // wait until the latch has counted down to zero
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            System.out.println("Running: " + name);
            stopLatch.countDown();
        }
    }

    public static void main(String[] args) {
        // CountDownLatch(int count): constructs a CountDownLatch initialized with the given count
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch stopSignal = new CountDownLatch(COUNT);
        for (int i = 0; i < COUNT; i++) {
            new Thread(new Worker(startSignal, stopSignal, Integer.toString(i))).start();
        }
        System.out.println("Go");
        startSignal.countDown();
        try {
            stopSignal.await();
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        System.out.println("Done");
    }
}
19. What is CyclicBarrier ?
A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released. A CyclicBarrier supports an optional Runnable command that is run once per barrier point, after the last thread in the party arrives, but before any threads are released. This barrier action is useful for updating shared-state before any of the parties continue.
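A minimal sketch (names are assumptions): three workers wait at the barrier, and a barrier action runs once when the last one arrives:

import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final CyclicBarrier barrier = new CyclicBarrier(3, new Runnable() {
            public void run() {
                System.out.println("All parties arrived - merging partial results");
            }
        });
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        System.out.println("Worker " + id + " finished its part");
                        barrier.await();      // blocks until all 3 workers arrive
                        System.out.println("Worker " + id + " continues");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}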
20. What is ConcurrentHashMap?
A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset. For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries. Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration. They do not throw ConcurrentModificationException. However, iterators are designed to be used by only one thread at a time.
The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default 16), which is used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because placement in hash tables is essentially random, the actual concurrency will vary. Ideally, you should choose a value to accommodate as many threads as will ever concurrently modify the table. Using a significantly higher value than you need can waste space and time, and a significantly lower value can lead to thread contention. But overestimates and underestimates within an order of magnitude do not usually have much noticeable impact. A value of one is appropriate when it is known that only one thread will modify and all others will only read. Also, resizing this or any other kind of hash table is a relatively slow operation, so, when possible, it is a good idea to provide estimates of expected table sizes in constructors.
This class and its views and iterators implement all of the optional methods of the Map and Iterator interfaces. Like Hashtable but unlike HashMap, this class does not allow null to be used as a key or value.
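A minimal sketch (map contents are illustrative) of the atomic per-key operations and lock-free iteration described above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        // Optional constructor arguments: initial capacity, load factor, concurrencyLevel.
        ConcurrentHashMap<String, Integer> scores =
                new ConcurrentHashMap<String, Integer>(16, 0.75f, 16);

        // putIfAbsent is an atomic check-then-act, so two threads cannot both insert the key.
        scores.putIfAbsent("alice", 1);
        scores.putIfAbsent("alice", 99);   // ignored, "alice" is already present

        // replace(key, oldValue, newValue) is an atomic compare-and-set style update.
        scores.replace("alice", 1, 2);

        // Iteration never throws ConcurrentModificationException, even if other threads
        // are updating the map; it reflects the state at some point since iterator creation.
        for (Map.Entry<String, Integer> e : scores.entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }
    }
}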
21) What is ThreadGroup and its use?
A thread group represents a set of threads. In addition, a thread group can also include other thread groups. The thread groups form a tree in which every thread group except the initial thread group has a parent.
A thread is allowed to access information about its own thread group, but not to access information about its thread group's parent thread group or any other thread groups.
For example, application servers maintain thread groups to keep related threads of a similar type together.
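A minimal sketch (group and thread names are illustrative) showing a thread group, its parent, and a group-wide interrupt:

public class ThreadGroupDemo {
    public static void main(String[] args) {
        // The new group becomes a child of the current thread's group.
        ThreadGroup workers = new ThreadGroup("workers");

        for (int i = 0; i < 3; i++) {
            new Thread(workers, new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }, "worker-" + i).start();
        }

        System.out.println("Group name   : " + workers.getName());
        System.out.println("Parent group : " + workers.getParent().getName());
        System.out.println("Active count : " + workers.activeCount());

        workers.interrupt();   // interrupt every thread in the group with one call
    }
}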
22) What is ThreadLocal and its use?
This class provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one (via its get or set method) has its own, independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that wish to associate state with a thread (e.g., a user ID or Transaction ID).
For example, the class below generates unique identifiers local to each thread. A thread's id is assigned the first time it invokes UniqueThreadIdGenerator.getCurrentThreadId() and remains unchanged on subsequent calls.
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueThreadIdGenerator {

    // Next id to hand out; shared across all threads.
    private static final AtomicInteger uniqueId = new AtomicInteger(0);

    // Each thread gets its own copy, initialized lazily on first access.
    private static final ThreadLocal<Integer> uniqueNum =
        new ThreadLocal<Integer>() {
            @Override
            protected Integer initialValue() {
                return uniqueId.getAndIncrement();
            }
        };

    public static int getCurrentThreadId() {
        return uniqueNum.get();   // note: returning uniqueId.get() here would be a bug,
                                  // as it exposes the shared counter, not the per-thread id
    }
} // UniqueThreadIdGenerator
Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).
23) Java memory model for double and long and what care we need to take to use these in multithreaded environment?
Java long and double values are 64 bits wide, and on 32-bit JVMs a read or write of a non-volatile long or double is not guaranteed to be atomic: the JVM may perform it as two separate 32-bit operations (one for each half), so a concurrent reader can observe a "torn" value made of halves from two different writes. We therefore need extra care when reading and writing shared long and double variables in a multithreaded application, for example by declaring the field volatile, using an atomic class such as AtomicLong, or guarding access with synchronization.
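A minimal sketch (class and field names are illustrative) of the usual remedies, a volatile long and an AtomicLong, both of which guarantee atomic 64-bit reads and writes:

import java.util.concurrent.atomic.AtomicLong;

public class SharedCounters {
    // Without volatile, a 32-bit JVM may split this write into two 32-bit writes,
    // so a concurrent reader could see a torn value. volatile makes the read/write atomic.
    private volatile long lastUpdateMillis;

    // AtomicLong also guarantees atomic reads/writes, plus atomic read-modify-write operations.
    private final AtomicLong requestCount = new AtomicLong(0);

    public void recordRequest() {
        lastUpdateMillis = System.currentTimeMillis();
        requestCount.incrementAndGet();
    }

    public long getLastUpdateMillis() {
        return lastUpdateMillis;
    }

    public long getRequestCount() {
        return requestCount.get();
    }
}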
24) What is CompletionService or ExecutorCompletionService? A service that decouples the production of new asynchronous tasks from the consumption of the results of completed tasks. Producers submit tasks for execution. Consumers take completed tasks and process their results in the order they complete. A CompletionService can for example be used to manage asynchronous IO, in which tasks that perform reads are submitted in one part of a program or system, and then acted upon in a different part of the program when the reads complete, possibly in a different order than they were requested. Typically, a CompletionService relies on a separate Executor to actually execute the tasks, in which case the CompletionService only manages an internal completion queue. The ExecutorCompletionService class provides an implementation of this approach.
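A minimal sketch (task durations and names are illustrative) of an ExecutorCompletionService handing back results in completion order rather than submission order:

import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionServiceDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CompletionService<String> completionService =
                new ExecutorCompletionService<String>(pool);

        // Submit tasks with different durations; results become available
        // as tasks finish, not in the order they were submitted.
        for (final int delay : new int[] {300, 100, 200}) {
            completionService.submit(new Callable<String>() {
                public String call() throws Exception {
                    Thread.sleep(delay);
                    return "task that slept " + delay + " ms";
                }
            });
        }

        for (int i = 0; i < 3; i++) {
            Future<String> done = completionService.take();   // blocks until the next task completes
            System.out.println("Completed: " + done.get());
        }
        pool.shutdown();
    }
}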
25) What is special about concurrent.atomic package? In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
This method (which varies in argument types across different classes) atomically sets a variable to the updateValue if it currently holds the expectedValue, reporting true on success. The classes in this package also contain methods to get and unconditionally set values, as well as a weaker conditional atomic update operation, weakCompareAndSet. The weak version may be more efficient in the normal case, but differs in that any given invocation of weakCompareAndSet may fail, even spuriously (that is, for no apparent reason). A false return means only that the operation may be retried if desired, relying on the guarantee that repeated invocation when the variable holds expectedValue and no other thread is also attempting to set the variable will eventually succeed.
The specifications of these methods enable implementations to employ efficient machine-level atomic instructions that are available on contemporary processors. However, on some platforms, support may entail some form of internal locking. Thus the methods are not strictly guaranteed to be non-blocking -- a thread may block transiently before performing the operation.
Instances of classes AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference each provide access and updates to a single variable of the corresponding type. Each class also provides appropriate utility methods for that type. For example, classes AtomicLong and AtomicInteger provide atomic increment methods. One application is to generate sequence numbers, as in:
class Sequencer {
    private AtomicLong sequenceNumber = new AtomicLong(0);
    public long next() {
        return sequenceNumber.getAndIncrement();
    }
}
The memory effects for accesses and updates of atomics generally follow the rules for volatiles:
- get has the memory effects of reading a volatile variable.
- set has the memory effects of writing (assigning) a volatile variable.
- weakCompareAndSet atomically reads and conditionally writes a variable, is ordered with respect to other memory operations on that variable, but otherwise acts as an ordinary non-volatile memory operation.
- compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
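A minimal sketch (class name is illustrative) of the typical compareAndSet retry loop, here used to track a running maximum without locks:

import java.util.concurrent.atomic.AtomicLong;

public class AtomicMaximum {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    // Classic CAS retry loop: re-read the current value and retry until either
    // our candidate is no longer larger or the compareAndSet succeeds.
    public void update(long candidate) {
        while (true) {
            long current = max.get();
            if (candidate <= current) {
                return;                              // nothing to do
            }
            if (max.compareAndSet(current, candidate)) {
                return;                              // we installed the new maximum
            }
            // another thread changed the value in the meantime; loop and try again
        }
    }

    public long get() {
        return max.get();
    }
}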
26) Explain ReentrantReadWriteLock?
This is a special Lock implementation that maintains a pair of locks: a read lock that may be held by many reader threads at the same time, and an exclusive write lock. A thread holding the write lock can downgrade to the read lock, but upgrading from the read lock to the write lock is not possible. It is very useful when reads greatly outnumber writes, because read operations do not block each other. The write lock supports Condition objects; the read lock does not.
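A minimal sketch (class and field names are illustrative) of the common read-mostly cache pattern with ReentrantReadWriteLock, including the write-to-read downgrade mentioned above:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyCache {
    private final Map<String, String> data = new HashMap<String, String>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Many readers can hold the read lock concurrently.
    public String get(String key) {
        lock.readLock().lock();
        try {
            return data.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the exclusive write lock, then downgrade to the read lock
    // so the freshly written value can be read without letting another writer in.
    public String putAndRead(String key, String value) {
        lock.writeLock().lock();
        try {
            data.put(key, value);
            lock.readLock().lock();   // downgrade: acquire read lock before releasing write lock
        } finally {
            lock.writeLock().unlock();
        }
        try {
            return data.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }
}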