Signing Android APK using JDK 7

I've stumbled upon a really annoying and hard-to-debug problem when publishing an APK to Google Play. Building the package went well as usual, and so did the upload to Google Play, but in the end users weren't able to download and install the application, getting the error message: "Package file was not signed correctly."

That was strange, as Google verifies packages right after upload - whether the package was built in release mode etc. - so I would expect it to show at least some warning. But it acted like everything was in perfect order.

The problem was that I was using JDK 7. The default digest algorithm in JDK 7 is SHA-256 instead of the SHA-1 used by JDK 6. As Android APKs have to use SHA-1 to compute checksums of the included files, the JDK 7 defaults made the resulting APK unusable. I think Google should check this in its post-upload process.

To resolve this issue, add the following lines to build.xml to force the digest algorithm back to SHA1.

<presetdef name="signjar">
    <signjar digestalg="SHA1" sigalg="MD5withRSA" />
</presetdef>
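If you sign the APK manually with jarsigner instead of through the Ant build, the same algorithms can be forced on the command line (the keystore path, APK name and key alias below are placeholders):

jarsigner -digestalg SHA1 -sigalg MD5withRSA -keystore my-release.keystore app-release-unsigned.apk mykeyalias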

Source: http://code.google.com/p/android/issues/detail?id=19567


Git commit message starting with hash (#)

If you need a commit message that starts with a hash (#), e.g. when referencing tickets from some issue trackers, use --cleanup=whitespace. This switches off Git's default behaviour of stripping lines that begin with #.
git commit --cleanup=whitespace

Alternatively, you can use the hash sign without problems when specifying the commit message directly on the command line.
git commit -m "#525 - ticket resolution"

Be aware! With this cleanup mode you have to remove by hand all # lines you don't want to appear in the commit log, e.g. the default Git summary describing the committed files.
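If you need this regularly, the cleanup mode can also be set once in your Git configuration instead of passing the flag on every commit (this uses the standard commit.cleanup option):

git config --global commit.cleanup whitespace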


Find all subdomains of a given domain with dig

To find out what subdomains a domain has, we can use the standard DNS lookup utility - dig.
First we need to know which nameservers are authoritative for the domain. Then we send an AXFR query ( http://en.wikipedia.org/wiki/DNS_zone_transfer ), i.e. a zone transfer request, to one of them.

# let's dig the server
dig example.com
# from the DNS answer we are interested in the authority section
#example.com.  79275 IN NS a.iana-servers.net.
#example.com.  79275 IN NS b.iana-servers.net.
# now we find out all subdomains
dig @a.iana-servers.net example.com axfr
# in this example we get "Transfer failed." but some NS could return something like
#dev.example.com. 1800 IN A
#dev2.example.com. 1800 IN A

Note that most DNS servers refuse AXFR queries from arbitrary clients and just return "; Transfer failed.".
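To try the transfer against every authoritative nameserver in one go, you can loop over the NS records (a quick sketch reusing the example.com zone from above):

# attempt a zone transfer against each nameserver of the domain
for ns in $(dig +short NS example.com); do dig @"$ns" example.com axfr; done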


Managing a large number of SSH keys

Having a larger number of SSH keys can cause problems when you connect with a plain ssh hostname. This is because of the MaxAuthTries setting in the server's /etc/ssh/sshd_config and the fact that your client tries to authenticate with every key stored in ~/.ssh.

When the server you want to connect to receives more than MaxAuthTries (default = 6) authentication attempts, it drops the connection with "Too many authentication failures" - often before you even get a chance to type a password. To prevent this, you can specify which key to use for which server in two ways.

As a command-line option:
ssh -i ~/.ssh/id_rsa user@hostname

By specifying a Host/IdentityFile pair in ~/.ssh/config:
Host hostname
IdentityFile /home/USER/.ssh/id_rsa

Host hostname2
IdentityFile /home/USER/.ssh/id_rsa2
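If the ssh client still offers every key it has loaded (for example from ssh-agent), adding IdentitiesOnly restricts it to the listed IdentityFile - this option isn't mentioned in the linked answer, but it's the usual companion setting:

Host hostname
IdentityFile /home/USER/.ssh/id_rsa
IdentitiesOnly yes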

Of course you can also increase MaxAuthTries value on the SSH server(s) in /etc/ssh/sshd_config, but this is not recommended!

This automatic mechanism is also annoying when you actually want to use password authentication. To force the ssh client to use a password only and not try any keys, use the PreferredAuthentications option:

ssh -o PreferredAuthentications=password hostname

(This post is just an extended answer from http://serverfault.com/questions/36291/how-to-recover-from-too-many-authentication-failures-for-user-root/256083#256083 )

Free unused memory - page cache, dentries and inodes cache

Linux (and any other OS) tries to cache disk operations to reduce the load on the disk itself. When a file is read from disk for the first time, it's cached in RAM. The next time the file is requested, it's served from RAM. This is really useful for frequently used files, as access times to RAM are significantly lower than to disk. This buffering system is called the page cache.

When operating on a large number of files (10,000+), the cache can grow to hundreds of MB. As the page cache has a lower priority than process memory, it gets freed from time to time anyway. Or you can free it on your own.

# flush file system buffers (cache > disk)
sudo sync
# free page cache
sudo sh -c "echo 1 > /proc/sys/vm/drop_caches"
# free dentries and inodes cache
sudo sh -c "echo 2 > /proc/sys/vm/drop_caches"
# free page cache, dentries and inodes cache
sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"

Note the sync command. The page cache also caches write operations. When you save a file that's already cached, the content goes into the cache and the real disk write happens later in a batch. Such a cached page is called dirty and can't be freed. To force a disk write of all dirty pages and make them freeable, run sync first.
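To see how much memory the page cache currently occupies, and to verify the effect of dropping it, you can compare the kernel's counters before and after (just a quick sanity check):

# overall memory usage including buffers/cache
free -m
# the kernel's own counters
grep -E '^(Cached|Buffers|Dirty):' /proc/meminfo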


Save cron output to file with a timestamp in its filename

It's simple, but you have to escape every % in date's parameter, because cron treats an unescaped % as a newline.

Run "crontab -e" to edit your cron jobs.

# do not forget to escape % to \% in date's parameter
* * * * * /usr/bin/php /path/script.php > /var/log/output-`date +\%Y\%m\%d\%H\%M`.log 2>&1

This will create "output-201110170929.log" in /var/log. If you want to change the filename format, see "man date".
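If one file per run is too many, a variant with >> appends everything from the same day into a single daily log (the hourly schedule below is just an example; the % signs still have to be escaped):

# append the whole day's output into one daily log file
0 * * * * /usr/bin/php /path/script.php >> /var/log/output-`date +\%Y\%m\%d`.log 2>&1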


Load jQuery via bookmarklet

A bookmarklet is JavaScript code stored in a bookmark so it can be launched with a single click.
It can be used e.g. to load any external script on any webpage! Just create a bookmark and fill its URL with: javascript:CODE;

To create a bookmarklet that loads the latest jQuery, which you can then use on any page from the browser console (Chrome Developer Tools, Firefox's Firebug), put the following code into the bookmark's URL:
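A minimal version (assuming jQuery's "latest" build hosted at code.jquery.com - any jQuery URL will do) simply injects a script element into the current page:

javascript:(function(){var s=document.createElement('script');s.src='https://code.jquery.com/jquery-latest.min.js';document.getElementsByTagName('head')[0].appendChild(s);})();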




How to reset MySQL root password

Just in case you can't remember it.

sudo /etc/init.d/mysql stop

# we have to use --skip-networking to prevent connections from outside localhost, because MySQL will now run absolutely unprotected
sudo /usr/sbin/mysqld --skip-grant-tables --skip-networking &
mysql -u root

# switch to the mysql system database that holds the grant tables
USE mysql;

# set your new password
UPDATE user SET Password = PASSWORD('new password') WHERE Host = 'localhost' AND User = 'root';

# reload the privileges so the new password takes effect
FLUSH PRIVILEGES;

# restart
sudo /etc/init.d/mysql stop
sudo /etc/init.d/mysql start
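Note that on MySQL 5.7 and newer the mysql.user table no longer has a Password column, so the UPDATE above won't work there. A rough equivalent (an assumption, not part of the original walkthrough) is to reload the grant tables first and then use ALTER USER:

FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'new password';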


Encrypt your data on Dropbox with EncFS

"EncFS is a Free (GPL) FUSE-based cryptographic filesystem that transparently encrypts files, using an arbitrary directory as storage for the encrypted files."

+ easy setup
+ corruption stays isolated, as individual files are encrypted rather than the whole directory tree
- file metadata (how many files there are, their sizes and permissions) is "public"

# install encfs and fuse
sudo apt-get install fuse-utils encfs
# load the fuse module into the kernel
sudo modprobe fuse
# add fuse to the end of /etc/modules so the module is loaded automatically on boot
sudo nano /etc/modules
# add yourself to the fuse group
sudo adduser <user> fuse
# both paths have to be absolute, alternatively ~ can be used;
# the first directory is the encrypted storage, the second the decrypted view
encfs ~/Dropbox/.encrypted/ ~/encfs/
# configure cipher algorithm, block size, filename encoding etc.
# or you can go with default configuration just by pressing [Enter]
# unmount
fusermount -u ~/encfs
# to mount again
encfs ~/Dropbox/.encrypted/ ~/encfs/
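If you tend to forget to unmount the decrypted view, encfs can unmount it for you after a period of inactivity (the 30 minutes below is just an example value):

# automatically unmount after 30 minutes of inactivity
encfs --idle=30 ~/Dropbox/.encrypted/ ~/encfs/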