ConcurrentHashMap
ConcurrentHashMap (through Java 7) uses lock striping: the map's storage is divided into segments, and each segment gets its own lock. While one thread holds a segment's lock to access that segment's data, the data in the other segments remains accessible to other threads. By default there are 16 segments, so compared with Hashtable, which serializes every operation on a single lock, up to 16 writers can proceed in parallel (the often-quoted "16x faster than Hashtable" is a rough upper bound under ideal contention, not a guarantee). Note that since Java 8 the Segment design has been replaced by CAS plus per-bucket synchronization, though the external behavior is unchanged.
ConcurrentHashMap is a concurrent implementation of Map in which read operations are lock-free while write operations are synchronized, which makes it well suited to workloads where reads far outnumber writes. In the segmented implementation, all elements are partitioned into independent segments, and taking a lock on one segment does not affect operations on the other segments. Each segment is itself a hash table; for a <key, value> pair added to the ConcurrentHashMap, the high bits of the key's hash index the segment and the low bits index within the segment.
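As a quick illustration of the concurrency guarantee (a minimal sketch, not from the original text): several threads can write to the same ConcurrentHashMap with no external locking, and no updates are lost.

```scala
import java.util.concurrent.ConcurrentHashMap

val shared = new ConcurrentHashMap[String, Int]()

// 4 threads, each inserting 1000 distinct keys, with no external locking
val writers = (0 until 4).map { t =>
  new Thread(() => {
    for (i <- 0 until 1000) shared.put(s"t$t-k$i", i)
  })
}
writers.foreach(_.start())
writers.foreach(_.join())

println(shared.size()) // 4000: every insert is present, none lost
```

With a plain HashMap the same test can lose entries or corrupt the table; here the map's internal locking makes each put atomic.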
scala> import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.ConcurrentHashMap
scala> val m=new ConcurrentHashMap[String, String]()
val m: java.util.concurrent.ConcurrentHashMap[String,String] = {}
scala> m.isEmpty
val res2: Boolean = true
scala> m.<TAB>
clear()             compute(            computeIfAbsent(    computeIfPresent(
contains(           containsKey(        containsValue(      elements()
entrySet()          forEach(            forEachEntry(       forEachKey(
forEachValue(       get(                getOrDefault(       isEmpty()
keySet()            keys()              mappingCount()      merge(
put(                putAll(             putIfAbsent(        reduce(
reduceEntries(      reduceKeys(         reduceValues(       remove(
replace(            replaceAll(         search(             searchEntries(
searchKeys(         searchValues(       size()              values()
(plus the reduce*To{Double,Int,Long} variants and Scala's universal methods such as ==, hashCode, synchronized, toString)
Important methods
isEmpty
clear
get("key")
contains("key")
scala> m.contains("a")
val res18: Boolean = false
scala> m.contains("b")
val res19: Boolean = false
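A caveat worth making explicit (not stated above): on ConcurrentHashMap, the one-argument contains(obj) is a legacy method inherited from Hashtable that tests values, not keys; it behaves like containsValue. Use containsKey for key lookups. A minimal sketch:

```scala
import java.util.concurrent.ConcurrentHashMap

val cm = new ConcurrentHashMap[String, String]()
cm.put("a", "a-v")

println(cm.containsKey("a")) // true:  key lookup
println(cm.contains("a"))    // false: legacy method, tests values
println(cm.contains("a-v"))  // true:  same as containsValue("a-v")
```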
remove
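remove has two forms: remove(key) deletes the entry and returns the removed value (or null), while the atomic remove(key, value) deletes only if the current value matches, returning a Boolean. A minimal sketch:

```scala
import java.util.concurrent.ConcurrentHashMap

val rm = new ConcurrentHashMap[String, String]()
rm.put("a", "a-v")

println(rm.remove("a"))        // a-v:   removed, old value returned
println(rm.remove("a"))        // null:  key already gone
rm.put("b", "b-v")
println(rm.remove("b", "x"))   // false: value does not match, nothing removed
println(rm.remove("b", "b-v")) // true:  atomic check-and-remove succeeded
```

The two-argument form is useful in concurrent code: it removes the entry only if no other thread has changed the value in the meantime.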
compute
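compute atomically remaps the value for a key: the remapping function receives the old value (null if the key is absent), and returning null removes the entry. A minimal sketch:

```scala
import java.util.concurrent.ConcurrentHashMap

val cp = new ConcurrentHashMap[String, String]()
cp.put("a", "1")

// Atomically replace the value based on the old one
cp.compute("a", (_, v) => v + "0")
println(cp.get("a")) // 10

// Returning null from the function removes the entry
cp.compute("a", (_, _) => null)
println(cp.containsKey("a")) // false
```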
import java.util.Properties
import java.util.concurrent.ConcurrentHashMap
import org.apache.kafka.clients.producer.KafkaProducer

private val REGISTERED_PRODUCER_INSTANCES =
  new ConcurrentHashMap[String, KafkaProducer[String, String]]()

// computeIfAbsent runs the factory atomically, at most once per key, so the
// producer for a topic is created exactly once even under concurrent calls.
// (logger is assumed to be defined elsewhere in the enclosing class.)
def registerProducer(topic: String, properties: Properties): Unit = {
  REGISTERED_PRODUCER_INSTANCES.computeIfAbsent(topic, new java.util.function.Function[String, KafkaProducer[String, String]]() {
    override def apply(t: String): KafkaProducer[String, String] = {
      val client = new KafkaProducer[String, String](properties)
      logger.info(s"finish init kafka producer client: topic = $topic, properties = $properties")
      client
    }
  })
}
computeIfAbsent
computeIfPresent
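The difference between the two (a minimal sketch): computeIfAbsent runs its factory only when the key is missing, while computeIfPresent remaps only keys that already exist.

```scala
import java.util.concurrent.ConcurrentHashMap

val ci = new ConcurrentHashMap[String, String]()

ci.computeIfAbsent("a", _ => "v1")   // key absent: factory runs, "v1" stored
ci.computeIfAbsent("a", _ => "v99")  // key present: factory NOT run
println(ci.get("a")) // v1

ci.computeIfPresent("a", (_, v) => v + "!")  // key present: value remapped
println(ci.get("a")) // v1!

ci.computeIfPresent("missing", (_, v) => v + "!") // key absent: no-op
println(ci.containsKey("missing")) // false
```

Both calls are atomic per key, which is what makes the producer-registry pattern above safe without explicit locks.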
put(k,v)
scala> m.put("a","a-v")
val res6: String = null
scala> m.put("a","a-v1")
val res7: String = a-v
putIfAbsent
- If the key already has a value, the existing value is returned and not replaced. If the key is absent, the key/value pair is added and null is returned.
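That behavior looks like this (a minimal sketch):

```scala
import java.util.concurrent.ConcurrentHashMap

val pm = new ConcurrentHashMap[String, String]()

println(pm.putIfAbsent("a", "a-v"))  // null: key was absent, value inserted
println(pm.putIfAbsent("a", "a-v1")) // a-v:  key present, value NOT replaced
println(pm.get("a"))                 // a-v
```

Unlike plain put, putIfAbsent is an atomic check-then-insert, so two racing threads cannot both overwrite each other's value for the same key.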