HBase 1.4 With New Shaded Client
Written by Kay Ewbank |
Tuesday, 02 January 2018
Apache has released an updated version of HBase with a new shaded client intended to improve compatibility, along with improvements to the REST client, enhanced autorestart capabilities, and improvements to RegionServer metrics.

Apache HBase is Hadoop's open-source, distributed, versioned, non-relational database, modeled after Google's BigTable, which offers random, realtime read/write access to big data. Apache's goal for this project is for it to host very large tables -- billions of rows by millions of columns -- on top of clusters of commodity hardware. This is the first release in the new HBase 1.4 line, continuing the theme of earlier 1.x releases of bringing a stable, reliable database to the Apache Big Data ecosystem.

The new shaded client no longer contains a number of non-relocated third-party dependency classes that were mistakenly included. While this makes the client more generally compatible, it does mean that an app that relies on those classes being present will need to add a runtime dependency on an appropriate third-party artifact. The earlier shaded client packaged several third-party libraries without relocating them. In some cases these libraries have now been relocated; in other cases they are no longer included at all. The list includes:

* jaxb
* jetty
* jersey
* codahale metrics (HBase 1.4+ only)
* commons-crypto
* jets3t
* junit
* curator (HBase 1.4+)
* netty 3 (HBase 1.1)
* mockito-junit4 (HBase 1.1)

The practice of shading dependencies involves including and renaming a dependency to create a private copy that is bundled alongside the main package -- HBase in this case.

The REST client has also been improved to add support for binary row keys. RemoteHTable now supports row keys containing any character or byte by properly encoding request URLs. The developers say this is both a behavioral change from earlier versions and an important fix for protocol correctness.

Region metrics have been improved in two ways.
Firstly, there's a much faster locality cost function and candidate generator that uses caching and incremental computation. This allows the stochastic load balancer to consider around twenty times more cluster configurations for big clusters when identifying the most cost-effective one. The second improvement is a new RegionServer metric that counts all row actions, giving a value equal to the sum of the read request count and the write request count. The counts have also been improved so that multiple requests are no longer overcounted, resulting in more accurate monitoring of server loads.
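To see what the binary row key fix in the REST client is about, here is a minimal sketch of percent-encoding arbitrary key bytes so they survive an HTTP request path. This is not HBase's actual RemoteHTable code; the helper name and URL layout are assumptions for illustration only.

```python
from urllib.parse import quote

def rest_row_url(base: str, table: str, row_key: bytes) -> str:
    # Percent-encode every byte that is not URL-safe, so a row key
    # containing a NUL, a slash, or a high byte can appear in the path.
    return f"{base}/{table}/{quote(row_key, safe='')}"

# A row key containing a NUL byte and a non-ASCII byte:
print(rest_row_url("http://localhost:8080", "mytable", b"\x00row\xffkey"))
# http://localhost:8080/mytable/%00row%FFkey
```

Without this kind of encoding, a request for such a key would produce a malformed or ambiguous URL, which is why the developers describe the change as a protocol-correctness fix.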
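The relationship behind the new row-action metric is simply the sum of the two existing request counters. A sketch with made-up counter values (the variable names here are illustrative, not HBase's actual metric identifiers):

```python
# Hypothetical counter values sampled from a RegionServer's metrics:
read_request_count = 1200
write_request_count = 345

# The new metric counts all row actions, reads plus writes:
total_row_action_request_count = read_request_count + write_request_count
print(total_row_action_request_count)  # 1545
```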