So the question came up of whether tombstones should be counted when calculating the load factor of a hash table.
My thinking was that, since the load factor is used to decide when to expand capacity, tombstones should not be included. An obvious example: almost fill a hash table, then remove every value. Here insertions are s…
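The question hinges on which occupancy you measure. A minimal sketch contrasting the two conventions for an open-addressing table (the class and method names are illustrative, not from any real implementation; only the counters are modeled, not actual storage):

```java
// Sketch: how counting tombstones changes the resize trigger in an
// open-addressing hash table. Bookkeeping only; no real slots stored.
public class OpenTable {
    private int live;       // entries currently present
    private int tombstones; // slots marked deleted after a removal
    private final int capacity;

    public OpenTable(int capacity) { this.capacity = capacity; }

    // Load factor counting only live entries.
    public double liveLoad() { return (double) live / capacity; }

    // Load factor counting live entries plus tombstones: probe
    // sequences must step over tombstones, so they cost like occupied
    // slots until a rehash clears them.
    public double occupiedLoad() { return (double) (live + tombstones) / capacity; }

    public void insert() { live++; }
    public void remove() { live--; tombstones++; }

    public static void main(String[] args) {
        OpenTable t = new OpenTable(16);
        for (int i = 0; i < 12; i++) t.insert();  // nearly full
        for (int i = 0; i < 12; i++) t.remove();  // empty, but 12 tombstones
        System.out.println(t.liveLoad());      // 0.0
        System.out.println(t.occupiedLoad());  // 0.75
    }
}
```

The two measures diverge exactly in the fill-then-remove scenario above: counting only live entries reports an empty table, even though every unsuccessful probe still walks past the dead slots, which is why many open-addressing implementations count tombstones toward the rehash trigger.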

(it1352, 2020-11-22)

If I use a HashSet with an initial capacity of 10 and a load factor of 0.5,
is the HashSet enlarged after every 5 elements added, or is it first enlarged by 10 elements and then the capacity increased at 15, at 20, etc.?
Solution: The load factor is a measure of how full the HashSet is allowed to get before its capacity is automatically increased…
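Neither, in fact, because of an implementation detail: in OpenJDK, the backing table's capacity is always rounded up to a power of two, and the resize trigger is capacity * loadFactor. A small sketch of that arithmetic (the helper names are mine, not HashMap's API):

```java
public class Thresholds {
    // Round the requested capacity up to the next power of two,
    // as java.util.HashMap does internally.
    public static int tableSize(int requested) {
        int n = 1;
        while (n < requested) n <<= 1;
        return n;
    }

    // The table grows once the element count exceeds this threshold.
    public static int threshold(int requested, float loadFactor) {
        return (int) (tableSize(requested) * loadFactor);
    }

    public static void main(String[] args) {
        // new HashSet<>(10, 0.5f): capacity rounds up to 16,
        // threshold = 16 * 0.5 = 8, so the first resize happens
        // on the 9th insertion, not the 6th.
        System.out.println(tableSize(10));        // 16
        System.out.println(threshold(10, 0.5f));  // 8
    }
}
```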

(it1352, 2019-05-22)

I am to write a chained hash set class in Java.
I understand the load factor is M/capacity, where M is the number of elements currently in the table and capacity is the size of the table.
But how does the load factor help me determine whether I should resize the table and rehash or not?
Also, I couldn't find anywhere how to calculate the lower / upper…
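The load factor itself is the trigger: after each insertion, compare M/capacity against a chosen maximum, and if it is exceeded, double the table and rehash. A minimal sketch of a chained (separate-chaining) set along those lines — names and the 0.75 bound are illustrative choices, not requirements:

```java
import java.util.LinkedList;

// Minimal chained hash set that resizes when size / capacity
// exceeds maxLoad.
public class ChainedSet {
    private LinkedList<Integer>[] buckets;
    private int size;
    private final double maxLoad;

    @SuppressWarnings("unchecked")
    public ChainedSet(int capacity, double maxLoad) {
        this.buckets = new LinkedList[capacity];
        this.maxLoad = maxLoad;
    }

    public double loadFactor() { return (double) size / buckets.length; }

    public boolean add(int key) {
        int i = Math.floorMod(Integer.hashCode(key), buckets.length);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        if (buckets[i].contains(key)) return false;
        buckets[i].add(key);
        size++;
        if (loadFactor() > maxLoad) resize(); // the load factor IS the trigger
        return true;
    }

    @SuppressWarnings("unchecked")
    private void resize() {
        LinkedList<Integer>[] old = buckets;
        buckets = new LinkedList[old.length * 2];
        size = 0;
        for (LinkedList<Integer> chain : old)
            if (chain != null)
                for (int key : chain) add(key); // rehash into the larger table
    }

    public int capacity() { return buckets.length; }
    public int size() { return size; }

    public static void main(String[] args) {
        ChainedSet s = new ChainedSet(4, 0.75);
        for (int k = 0; k < 4; k++) s.add(k);
        // 4th insert pushes the load factor to 1.0 > 0.75, so the
        // table has doubled from 4 buckets to 8.
        System.out.println(s.capacity()); // 8
    }
}
```

With chaining, an upper bound near 1.0 keeps average chain length around one; open addressing needs a lower bound because probe sequences degrade sharply as the table fills.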

(it1352, 2020-11-22)

HashMap has two important properties: size and load factor. I went through the Java documentation, and it says 0.75f is the default load factor, but I can't find the actual use of it.
Can someone describe the different scenarios where we need to set the load factor, and what some ideal values are for different cases?
Solution: The docum…

(it1352, 2019-05-19)

In HashMap, why is the threshold value (the next size value at which to resize) capacity * load factor? Why is it not equal to the size or capacity of the map?
For example, initially the default capacity = 16 and the load factor = 0.75, hence threshold = (capacity * load factor) = (16 * 0.75) = 12.
The map resizes when we add the 13th element. Why is it so, and why did the author of the map…

(it1352, 2019-05-14)

What values should I pass to create an efficient HashMap / HashMap-based structure for N items?
In an ArrayList, the efficient number is N (N already accounts for future growth). What should the parameters be for a HashMap? ((int)(N * 0.75d), 0.75d)? More? Less? What is the effect of changing the load factor?
Solution Regarding the load factor, I'll si…
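One note on the direction of the arithmetic: (int)(N * 0.75) goes the wrong way, because the threshold of that capacity lies below N and the map still resizes while filling. To avoid any resize, the initial capacity should be at least N / loadFactor, rounded up. A sketch (helper name is mine; the division mirrors HashMap's threshold = capacity * loadFactor rule):

```java
public class Sizing {
    // Smallest initial capacity whose threshold (capacity * loadFactor)
    // is at least n, so n insertions never trigger a resize.
    public static int capacityFor(int n, float loadFactor) {
        return (int) Math.ceil(n / (double) loadFactor);
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(capacityFor(n, 0.75f)); // 1334
        // HashMap then rounds 1334 up to 2048 internally, giving a
        // threshold of 1536 >= 1000: no resize while inserting n items.
        java.util.HashMap<Integer, Integer> m =
                new java.util.HashMap<>(capacityFor(n, 0.75f), 0.75f);
        for (int i = 0; i < n; i++) m.put(i, i);
        System.out.println(m.size()); // 1000
    }
}
```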

(it1352, 2019-05-14)

If I have a key set of 1000, what is a suitable size for my hash table, and how is that determined?
Solution: It depends on the load factor (the "percent full" point at which the table will increase its size and redistribute its elements). If you know you have exactly 1000 entries, and that number will never change, you can just set the load factor…

(it1352, 2019-05-19)

When one creates a Map or a List in Java, they both have the same default initial capacity of 10. Their capacity grows as they get new elements. However, the List only grows when the 11th element is added, while the Map already grows when the 8th element is added. This happens because the Map has a loadFactor field, which regulates how "saturated" it ca…

(it1352, 2019-05-14)

As given by the Hadoop wiki, the ideal number of reducers is 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum).
But when should one choose 0.95 and when 1.75? What factor is considered when deciding this multiplier?
Solution: Let's say that you have 100 reduce slots available in your cluster.
With a load factor of 0.95, all the 95 reduce…
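The wiki's rule of thumb is simple arithmetic over the cluster's reduce slots; a sketch (method name is illustrative), assuming the trade-off described on the wiki: 0.95 launches all reducers at once in a single wave, while 1.75 lets faster nodes finish a first wave and start a second, which balances load better at the cost of extra startup overhead:

```java
public class Reducers {
    // Hadoop wiki rule of thumb:
    //   reducers = factor * (nodes * mapred.tasktracker.tasks.maximum)
    // factor 0.95: one wave, every reducer launches immediately;
    // factor 1.75: roughly two waves, better load balancing.
    public static int reducers(double factor, int totalReduceSlots) {
        return (int) (factor * totalReduceSlots);
    }

    public static void main(String[] args) {
        int slots = 100; // e.g. 25 nodes * 4 reduce slots each
        System.out.println(reducers(0.95, slots)); // 95
        System.out.println(reducers(1.75, slots)); // 175
    }
}
```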

(it1352, 2019-05-19)

We have an ASP.NET 4.0 MVC3 application running on F5 load-balanced servers.
We received the exception below. We do not do multi-threading in our web application, but we don't know whether the F5 load-balancing servers could be factoring into the equation. We see that the exception occurs on earlier versions of .NET (most of the other posts deal with .NE…

(it1352, 2019-05-06)