Update dot files with git

For years I used a custom script to keep my dot files updated. I had a local repository in bazaar and a script which checked for differences between the dot files in my home directory and the files stored in the repository. This solution worked fine for years, but now I want to make some changes…

The first one is moving my dot files to git (and probably pushing them to github), and the second one is creating a git hook to update my dot files. I know that there are a lot of similar solutions, some more complex, others simpler, but this one is mine πŸ™‚

So, I created a post-commit hook script for git which performs the modifications that I need. Now I just follow these steps:

1. Create a new git repo:

mkdir my_dots_repo
cd my_dots_repo && git init

2. Put the hook:

wget -O .git/hooks/post-commit  http://2tu.us/2scm
chmod 755 .git/hooks/post-commit

Or just put this content into the post-commit hook:

#! /bin/bash
# (c) 2010 Andres J. Diaz <ajdiaz@connectical.com>
# A hook to git-commit(1) to update the home dot files links using this
# repository as base.
#
# To enable this hook, rename this file to "post-commit".

for dot in $PWD/*; do
    home_dot="$HOME/.${dot##*/}"

    if [ -L "${home_dot}" ]; then
        if [ "${home_dot}" -ef "$dot" ]; then
            echo "[skip] ${home_dot}: is already updated"
        else
            rm -f "${home_dot}" && \
                ln -s "$dot" "${home_dot}" && \
                echo "[done] updated link: ${home_dot}"
        fi
    else
        if [ -r "${home_dot}" ]; then
            echo "[keep] ${home_dot}: is regular file"
        else
            ln -s "$dot" "${home_dot}" && \
                echo "[done] updated link: ${home_dot}"
        fi
    fi
done
true

3. Copy old files:

cp ~/old/bzr_repo/* .
git add *
 

4. Commit and recreate links:

git commit -a -m'initial import'
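
After the commit, the post-commit hook replaces the matching dot files in $HOME with symlinks into the repository. A quick check (assuming, for example, that the repository contains a file named bashrc):

ls -l ~/.bashrc
# should now be a symlink pointing to .../my_dots_repo/bashrc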

And it works πŸ™‚

New version of dtools

Today I released a new version of dtools. Distributed tools, aka dtools, is a project written in bash that provides a suite of programs to run different UNIX commands in parallel on a list of tagged hosts.

Features

  • Fully written in bash, no third party software required (except ssh, obviously).
  • Based on a modular architecture, easy to extend.
  • Full integration with ssh.
  • Easy to group hosts by tags or search them by regular expression.
  • Management of ssh hosts.
  • Parseable, but still human-readable, output.
  • Designed with system admins in mind: no special development skills required to extend the software.

Short Example

$ dt tag:linux ssh date
okay::dt:ssh:myhostlinux1.domain:Mon Nov 16 23:54:04 CET 2009
okay::dt:ssh:myhostlinux3.domain:Mon Nov 16 23:54:04 CET 2009
okay::dt:ssh:myhostlinux2.domain:Mon Nov 16 23:54:04 CET 2009
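
Since the output is colon-delimited, it is easy to post-process. A quick sketch (assuming the field layout shown above) to list only the hosts that answered:

$ dt tag:linux ssh date | grep '^okay' | cut -d: -f5
myhostlinux1.domain
myhostlinux3.domain
myhostlinux2.domain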

As usual, you can download the code from the project page, or, if you prefer, clone it via git:

git clone git://git.connectical.com/ajdiaz/dtools

Enjoy!

New htop color scheme

For a couple of weeks now I have been using htop at work to get a quick view of the system status. htop is an interactive process viewer for Linux, similar to the classic UNIX top, but with some enhancements, for example a more configurable view, integration with the strace and lsof programs, and much more.

But (and it’s a big “but” for me) I really dislike the color scheme it uses by default. htop comes with five color schemes, but I could not find a beautiful one among them (from my personal point of view, of course), so I decided to make a new one. I called it the “blueweb” theme (don’t ask) ;). And here is the result:

htop with blueweb theme

You can download the patch file for the htop source code. And yes, unfortunately you need to patch the code.
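
A minimal build sketch, assuming the patch has been saved as blueweb.patch next to an unpacked htop source tree (the file and directory names are just examples, and the -p level may differ depending on how the patch was generated):

$ cd htop-source/
$ patch -p1 < ../blueweb.patch
$ ./configure && make
$ sudo make install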

Now my htop looks nice πŸ™‚

Enjoy!

Python module to handle runit services

Last month I needed to install runit on some servers to supervise a couple of services. Unfortunately my management interface could not handle those services anymore, so I decided to write a small python module to solve this handicap, and this is the result!

With this module you can handle a number of runit services from a python environment. I think this might work for daemontools too, but I have not tested it yet. Let’s see an example πŸ˜€

>>> import supervise
>>> s = supervise.Service("/var/service/httpd")
>>> print s.status()
{'action': None, 'status': 0, 'uptime': 300L, 'pid': None}
>>> if s.status()['status'] == supervise.STATUS_DOWN: print "service down"
service down
>>> s.start()
>>> if s.status()['status'] == supervise.STATUS_UP: print "service up"
service up

Personally I use this module together with the rpyc library to manage the services running on a remote host, but it is also easy to build a web interface, for example using bottle:

import supervise
import simplejson
from bottle import route, run

@route('/service/status/:name')
def service_status(name):
    """ Return a json with service status """
    return simplejson.dumps(supervise.Service("/var/service/" + name).status())

@route('/service/up/:name')
def service_up(name):
    """ Start the service and return OK """
    c = supervise.Service("/var/service/" + name)
    c.start()
    return "OK UP"

@route('/service/down/:name')
def service_down(name):
    """ Stop the service and return OK """
    c = supervise.Service("/var/service/" + name)
    c.down()
    return "OK DOWN"

from bottle import PasteServer
run(server=PasteServer)

Now you can stop a service by just pointing your browser to http://localhost/service/down/httpd (to bring down the http service in this case).
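
Or query it from the shell with curl, assuming bottle's default listening address of 127.0.0.1:8080:

$ curl http://localhost:8080/service/status/httpd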

Enjoy!

Distributed tools

For the last few months I have needed to maintain a number of heterogeneous servers for my work, and I need to perform some usual actions, like updating a config file, restarting a service, creating local users, etc.

For this purpose there are a lot of applications, like dsh (or the full csm), pysh, shmux and many others (you only need to search google for the phrase “distributed shell”). Unfortunately for me, I want an easy-to-parse solution, because I have a big (really big) number of servers and I want simple cut/awk-based parsing, and I also need to perform some actions as other users (like root, for example) via sudo. Although many of the existing solutions offer me a subset of these features, I could not find a complete one. So I decided to create my own πŸ˜€

You can find the code, and some packages, on the dtools development site. I have been using this solution in a production environment for months with excellent results, and you should feel free to use it too.

Of course, it’s free (as in freedom) software, distributed under the MIT license.

Enjoy and remember: feedback is welcome πŸ˜‰

tcptraceroute

tcptraceroute is another friend of the network administrator. You probably know the classical traceroute, which uses the TTL field in the IP header to determine the hops on the route to a specific destination. At each hop the TTL value is decremented (according to the internet protocol), and when the TTL reaches zero, an ICMP message is returned to the sender IP. So the classical traceroute technique sends a UDP packet with the TTL field set to 1, gets the IP address of the first hop from the returned ICMP message, and does likewise for the other hops.

Unfortunately, today many hosts are firewalled and ICMP is blocked. The classical traceroute design fails, and we only obtain a list of useless β€œ*”. tcptraceroute uses TCP packets instead of UDP packets and tries to connect to a usual port with the SYN flag enabled. If the port is closed, a RST is returned, and if the port is open a SYN/ACK is returned. So we no longer depend on an ICMP reply from the destination.
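
A short usage sketch (the host and port are only examples; by default tcptraceroute probes port 80, and it usually needs root because it crafts raw packets):

$ sudo tcptraceroute www.example.com 443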

ssh-keysend: a tool to distribute ssh keys

Update: This project has been deprecated in favour of the dtools project.

ssh-keysend is a tiny script written in bash which reads a number of ssh public keys from a file (according to a search pattern) and sends these keys to remote hosts (taken from another file, also filtered by a specified pattern). The remote host adds these keys to the authorized_keys file of the specified user. Here is an example of use:

$ ssh-keysend bill@gates 10.1.10.*

This example takes the key for the user bill@gates and sends it to any known host which matches the pattern 10.1.10.* (yes, it’s a regexp). The keys are taken from the *.pub files in the ~/.ssh/ directory.

You can get the code from launchpad ssh-keysend project page, or get the repository code with bzr:

$ bzr get lp:ssh-keysend

ssh socket

When working on a β€œcheap” wireless network (yes, I still have kind neighbours), each new ssh connection takes a lot of time, because a new authentication is required from the peer. But what happens if I am already connected? In theory no re-authentication is necessary: you can use the existing socket to send data over the same channel as the previous connection. To enable the socket manager, put the following lines in your ~/.ssh/config file:

Host *
  ControlMaster auto
  ControlPath ~/.ssh/socket-%r@%h:%p

Some situations may freeze your ssh connection, for example when the network goes down before the connection is closed and the timeout is reached; in this case the socket will also be frozen, and new connections to the same destination are not possible. You only need to remove the socket file in the ~/.ssh/ directory and kill the previous session.
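
A minimal cleanup sketch, assuming a stale master connection to a host called myhost as user ajdiaz on port 22 (all names are just examples):

$ rm -f ~/.ssh/socket-ajdiaz@myhost:22   # remove the stale control socket
$ pkill -f 'ssh.*myhost'                 # kill the hung ssh master process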