Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin



Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
checking for Ruby version >= 1.8.5... yes
checking for gcc... yes
checking for Magick-config... no
Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:

Gem files will remain installed in /var/lib/gems/1.9.1/gems/rmagick-2.13.2 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/rmagick-2.13.2/ext/RMagick/gem_make.out

An error occurred while installing rmagick (2.13.2), and Bundler cannot continue.


Install ImageMagick and its development headers


sudo apt-get install imagemagick libmagickcore-dev libmagickwand-dev
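The "Can't find Magick-config in ..." message just means the gem's extconf.rb looked in every directory of PATH for the Magick-config script and did not find it. A minimal sketch of that lookup (find_in_path is a helper name invented here for illustration):

```shell
#!/bin/sh
# Search each directory in PATH for a named executable,
# mimicking what RMagick's extconf.rb does for Magick-config.
find_in_path() {
  name=$1
  old_ifs=$IFS
  IFS=:
  # Split PATH on ':' and test each directory for an executable.
  for dir in $PATH; do
    if [ -x "$dir/$name" ]; then
      IFS=$old_ifs
      echo "$dir/$name"
      return 0
    fi
  done
  IFS=$old_ifs
  echo "Can't find $name in $PATH" >&2
  return 1
}

find_in_path ls                      # prints something like /bin/ls
find_in_path Magick-config || true   # fails until the ImageMagick dev packages are installed
```

Once the libmagickcore-dev/libmagickwand-dev packages are installed, Magick-config appears in one of those directories and the gem builds.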

Error installing json: Failed to build gem native extension

jagat@nanak-P570WM:/var/www/redmine-2.4.2$ gem install json
Fetching: json-1.8.1.gem (100%)
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions into the /var/lib/gems/1.9.1 directory.
jagat@nanak-P570WM:/var/www/redmine-2.4.2$ sudo gem install json
Building native extensions.  This could take a while...
ERROR:  Error installing json:
    ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
    from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
    from extconf.rb:1:in `<main>'

Gem files will remain installed in /var/lib/gems/1.9.1/gems/json-1.8.1 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/json-1.8.1/ext/json/ext/generator/gem_make.out


Install the ruby-dev package as well; it provides mkmf and the Ruby headers needed to build native extensions.


sudo apt-get install ruby-dev
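Looking back at the two errors above, they are distinct: the first is a permissions problem (the system gem directory under /var/lib/gems is only writable by root), the second is the missing mkmf that ruby-dev provides. A small sketch of the permission check (gem_install_hint is a helper name invented here; `gem install --user-install` is a real RubyGems option that installs into your home directory instead of using sudo):

```shell
#!/bin/sh
# Decide how to install a gem based on whether the system
# gem directory is writable by the current user.
gem_install_hint() {
  gem_dir=$1
  if [ -w "$gem_dir" ]; then
    echo "writable: gem install <name>"
  else
    echo "not writable: sudo gem install <name> (or gem install --user-install <name>)"
  fi
}

gem_install_hint /var/lib/gems/1.9.1
```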

rgdal and ggmap R packages install

To install the rgdal and ggmap R packages we need additional system dependencies.

I spent a lot of time searching, so I am dumping everything here.

Versions and package names below are for Red Hat; look up the corresponding names if you are on Debian.

geos-devel     3.3.2-1.el6
geos           3.3.2-1.el6
gdal           1.7.3-15.el6
gdal-devel     1.7.3-15.el6
proj-devel     4.7.0-1.el6
proj-epsg      4.7.0-1.el6
proj-nad       4.7.0-1.el6
libpng         2:1.2.49-1.el6_2
libpng-devel   2:1.2.49-1.el6_2

Dumping some of the related errors

Error: proj/epsg not found



read.c:3:17: error: png.h: No such file or directory
configure: error: proj_api.h not found in standard or given locations.


ERROR: configuration failed for package ‘rgdal’

Error: gdal-config not found
Make sure gdal-config is in your PATH; try typing gdal-config in a terminal to check.

Install gdal-devel and the other -devel packages listed above.
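Each error above maps to one of the packages in the table: png.h comes from libpng-devel, proj_api.h from proj-devel, gdal-config from gdal-devel, and the epsg file from proj-epsg. A rough pre-flight check before retrying install.packages (check_dep is a helper invented here; the exact file paths can vary by distro):

```shell
#!/bin/sh
# Check for the files the rgdal/ggmap configure scripts look for,
# and suggest the Red Hat package that provides each one.
check_dep() {
  path=$1
  pkg=$2
  if [ -e "$path" ]; then
    echo "found $path"
  else
    echo "missing $path -> yum install $pkg"
  fi
}

check_dep /usr/include/proj_api.h proj-devel
check_dep /usr/include/png.h      libpng-devel
check_dep /usr/bin/gdal-config    gdal-devel
check_dep /usr/share/proj/epsg    proj-epsg
```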


How to help kernel developers help you

I have been seeing lots of kernel errors in syslog lately on my new machine.

This Clevo machine is not officially supported on Linux by Clevo, so I cannot ask the manufacturer.

I found an interesting link on how to find which function is throwing the error and how to contact the maintainer of that kernel code.

Download Kernel code

git clone git:// linux-git 

Depending on the error you get, search the code to find the file from which the error message is being thrown.

After that, use the Ubuntu bug reporting tool to file the bug or ask a question.
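The "search the code" step can be as simple as grepping the checkout for the literal message you copied out of syslog. A self-contained sketch using a throwaway directory in place of a real kernel tree (the driver file and message are made up for the demo); on a real checkout you would then run scripts/get_maintainer.pl -f on the file grep finds to see whom to contact:

```shell
#!/bin/sh
# Demonstrate locating the source file that emits a given log message
# by grepping a source tree for the literal string from syslog.
src=$(mktemp -d)
mkdir -p "$src/drivers/demo"
cat > "$src/drivers/demo/probe.c" <<'EOF'
/* hypothetical driver code */
pr_err("demo: firmware load failed\n");
EOF

# The string copied out of syslog (without timestamp/prefix):
msg="firmware load failed"

# List the files containing that message.
grep -rl "$msg" "$src"
```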

Install Hue from tar ball or source

To install Hue from a tar file or from source, follow these steps.

Get the code

Download the tar ball from GitHub or the Cloudera website.


Extract it to some location

If you are building from source, clone the code into some location instead.

Development tools

You need to install following additional packages to build from source

I am using Ubuntu, so I installed them with:

$ sudo apt-get install ant gcc g++ libkrb5-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev  libldap2-dev python-dev python-simplejson python-setuptools

Build the package

$ git clone
$ cd hue
$ make apps
$ build/env/bin/supervisor 
This will start the Hue server on the default port.
You can configure various properties of Hue:
go to the desktop/conf/ directory, edit the configuration file there, and restart the server.

Update Hive Metastore


Hive 0.12 includes the schema update tool, schematool:

$ bin/schematool -help
usage: schemaTool
-dbType <databaseType>             Metastore database type
-dryRun                            list SQL scripts (no execute)
-help                              print this message
-info                              Show config and schema details
-initSchema                        Schema initialization
-initSchemaTo <initTo>             Schema initialization to a version
-passWord <password>               Override config file password
-upgradeSchema                     Schema upgrade
-upgradeSchemaFrom <upgradeFrom>   Schema upgrade from a version
-userName <user>                   Override config file user name
-verbose                           only print SQL statements

Depending on which metastore you are using, dbType can be mysql, postgres, derby, or oracle.

Update MySQL Hive metastore from 0.10.0 to 0.12.0
For example

bin/schematool -dbType mysql -upgradeSchemaFrom 0.10.0

You can also do a dry run by passing the -dryRun option.

Hive schematool to create metastore

Hive 0.12 has a new tool called schematool
This tool is used for two purposes:
1) Creating a new metastore database when you install a new Hive

2) Upgrading an existing Hive metastore to a new version

To create a new metastore when you install a new Hive:

Create hive-site.xml

Inside it, specify the connection details for your MySQL (or other) database.
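For illustration only, a minimal hive-site.xml for a MySQL-backed metastore might look like this; the JDBC URL, user name, and password are placeholders you must replace (the javax.jdo.option.* property names are the standard Hive ones):

```xml
<configuration>
  <!-- JDBC connection to the metastore database (placeholder host/db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- Placeholder credentials -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>
```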

Use the following command to populate that database

$ bin/schematool -dbType mysql -initSchema

You can also see the schema details with:

$ bin/schematool -dbType mysql -info

Impala Failed to load metadata for table


Failed to load metadata for table: <tableName>
CAUSED BY: TableLoadingException: Failed to load metadata for table: <tableName>
CAUSED BY: TTransportException: Broken pipe
CAUSED BY: SocketException: Broken pipe


Things to try:
1) Execute refresh via the impala-shell
2) Restart the Impala service
3) Execute refreshschema via the ODBC driver

Debug Hive query and startup

Start Hive with the following command:
bin/hive --hiveconf hive.root.logger=DEBUG,console
The logs of the user queries and of the metastore can be analysed as follows:
User query logs: configure the relevant variable.
Hive metastore logs: configure the relevant variable.
There is a very good presentation on how to debug Hive errors.

Compare contents of two jar files

Download the jarcomp jar from the URL below.

Use the following command to compare

java -jar jarcomp_01.jar file1.jar file2.jar

HBase transfer to another cluster using distcp

We had to take a full copy of HBase from one cluster to another.

We decided to take the brute-force approach of copying via distcp.
Although it's not recommended, we chose it because time was short and we knew it works quickly.

This assumes that you don't have any tables on the destination side. If you do, you need to back them up first.

Stop HBase on both sides; this ensures all in-memory data is flushed to disk.

Start the distcp to copy data

Commands executed on clusterA side

# Create directory on destination side

sudo -u hdfs hadoop fs -mkdir hdfs://clusterB/hbase_copy_20130320

# Start distcp job
sudo -u hdfs hadoop distcp -update /hbase hdfs://clusterB/hbase_copy_20130320

Verify that the data size matches on both sides:

hadoop fs -du -h /hbase
hadoop fs -du -h /hbase_copy_20130320
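That comparison can be scripted rather than eyeballed. A sketch using local GNU du for illustration (same_size is a helper invented here; on the clusters you would substitute the byte totals from hadoop fs -du -s):

```shell
#!/bin/sh
# Compare the total byte size of two directory trees and report
# whether they match -- same idea as comparing 'hadoop fs -du' output.
same_size() {
  a=$(du -sb "$1" | cut -f1)
  b=$(du -sb "$2" | cut -f1)
  if [ "$a" = "$b" ]; then
    echo "sizes match: $a bytes"
  else
    echo "size mismatch: $a vs $b bytes"
  fi
}

# Demo with two throwaway directories holding identical data:
d1=$(mktemp -d); d2=$(mktemp -d)
printf 'row data' > "$d1/f"; printf 'row data' > "$d2/f"
same_size "$d1" "$d2"
```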

Commands executed on destination side clusterB

sudo -u hdfs hadoop fs -chown -R hbase:hbase /hbase_copy_20130320
sudo -u hdfs hadoop fs -mv /hbase /hbase_clusterB_backup
sudo -u hdfs hadoop fs -mv /hbase_copy_20130320 /hbase

Do meta repair

sudo -u hbase hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -base hdfs://clusterB_NamenodeService/hbase

This will take some time. Once it's done, restart HBase on the destination side and let region balancing happen.

You can verify the data by listing the tables from the HBase shell:

$ hbase shell
hbase> list

Lastly, the recommended approaches are snapshots and CopyTable, but we did not use those today. I will write another post on using snapshots and CopyTable.

Thanks for reading