Saturday, 7 January 2017

061 in Galicia, equivalent phone number

I still don't understand this business of not publishing the equivalent phone numbers for the special numbers of any given service: that way, the people or the company providing the service are subject to whatever charges the operators apply, whereas if the equivalent numbers are published the calls are covered by the rates we already have contracted with our operators, normally at a lower cost or even for free.

An example of this kind is the 061 number, which in Galicia is managed by the Fundación Pública Urxencias Sanitarias de Galicia; its website lists a contact form (if you are not in a hurry) and the numbers 061 and 902 400 116 as ways to get in touch if you are. It was precisely this last number that led me to Danae's entry at no más 900, where 981953400 is listed as the number equivalent to 061 in Galicia; I tried it and it works perfectly.

Anyway... let's hope that little by little the organisations and companies still using 901/902 and other special numbers publish their equivalents, so that we can put an end to this special-numbers business.

Monday, 4 May 2015

ScreenLock on Jessie's systemd

Something I was used to, and which came as standard on wheezy if you installed acpi-support, was having the screen lock when suspending, hibernating, and so on.

This is something I still hadn't found on Jessie. Somebody had pointed me to solving it via /lib/systemd/system-sleep/whatever hacking, but that didn't seem quite right, so I gave it another look and this time I was able to add a config file under /etc/systemd plus a script which does what acpi-support used to do before.

Edit: Michael Biebl has suggested on my google+ post that this is an ugly hack and that one shouldn't use this solution; instead we should use solutions with direct support for logind, like desktops with built-in support or xss-lock. The reasons why this is ugly are laid out in this bug.

Edit (2): I've just done the recommended thing for LXDE, and it should be similar for any other desktop or window manager lacking logind integration: you just need to apt-get install xss-lock and then add @xss-lock -- xscreensaver-command --lock to .config/lxsession/LXDE/autostart, or do it through lxsession-default-apps on the autostart tab. Oh, BTW, you don't need acpid or the acpi-support* packages with this setup, so you can remove them safely and avoid weird things.
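For reference, the autostart file would end up looking something like this; the first two entries are just the stock Debian LXDE ones (yours may differ), only the xss-lock line is the addition, and xscreensaver-command assumes xscreensaver is the locker you use:

```
# ~/.config/lxsession/LXDE/autostart
@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xss-lock -- xscreensaver-command --lock
```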

The main thing here is this little config file: /etc/systemd/system/screenlock.service

[Unit]
Description=Lock X session
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/screenlock.sh

[Install]
WantedBy=sleep.target

This config file is activated by running: systemctl enable screenlock

As you can see, that config file calls /usr/local/sbin/screenlock.sh, which is this little script:

#!/bin/sh
# This depends on acpi-support being installed
# and on /etc/systemd/system/screenlock.service
# which is enabled with: systemctl enable screenlock
test -f /usr/share/acpi-support/state-funcs || exit 0
. /etc/default/acpi-support
. /usr/share/acpi-support/power-funcs
if [ "x$LOCK_SCREEN" = xtrue ]; then
    . /usr/share/acpi-support/screenblank
fi

The script of course needs execute permissions. I tend to combine this with my power button making the machine hibernate, which was also easier to do before and is now configured at /etc/systemd/logind.conf (doesn't the name already tell you?), where you have to set: HandlePowerKey=hibernate
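For completeness, the relevant bit of /etc/systemd/logind.conf would look like the fragment below; note the [Login] section header must be present (it is already there, commented, in the stock file), and you'll need to restart systemd-logind or reboot for the change to take effect:

```
[Login]
HandlePowerKey=hibernate
```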

And that's all.

Wednesday, 15 April 2015

Hello Debian Planet and Jessie's question

This was just meant to say hello to the Debian Planet readers, but I'll end it with a Jessie related question, so...


For those who don't know me, I was born in Betanzos, A Coruña, Galicia, in the north-west of Spain, and I currently live in A Coruña. I've been a Debian developer since the year 2000, when I was quite a bit more involved than I am now (life changes), but I always hope to be able to dedicate more time to the project; maybe that will happen when my two children grow up a little.

I had been wanting to send my blog's Debian-related posts to the planet but always failed to do so. Yesterday I found the planet wiki page and said to myself... it's so easy that I have no excuse not to do it, so here I am.

Oh, BTW... if I ever comment on Debian's anniversary (16th of August) that in Betanzos we launch a really huge paper balloon, it is not to commemorate Debian's date but in honour of San Roque, even though maybe we should talk to the Pita family to have Debian's logo on it for our 25th anniversary :-)

Jessie's question

In Jessie we no longer have update-notifier-common, which shipped the /etc/kernel/postinst.d/update-notifier script that allowed us to automatically reboot on a kernel update. I have apt-file searched for something similar but haven't found it, so... who is now responsible for echoing to /var/run/reboot-required.pkgs on a kernel upgrade, so that the system reboots itself if we have configured unattended-upgrades to do so?
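In the meantime, a stopgap would be to drop a hook of our own into /etc/kernel/postinst.d that writes the stamp files unattended-upgrades looks for. This is just my own sketch of what the old update-notifier script did, not the original; the STAMP_DIR variable is only there so the function can be tried outside /var/run:

```shell
#!/bin/sh
# Hypothetical stand-in for the old update-notifier kernel hook.
# Kernel postinst.d hooks receive the kernel version as their first argument.
mark_reboot_required() {
    stamp_dir="${STAMP_DIR:-/var/run}"
    # Record which package wants the reboot...
    echo "linux-image-$1" >> "$stamp_dir/reboot-required.pkgs"
    # ...and the flag file checked before an unattended reboot.
    echo "*** System restart required ***" > "$stamp_dir/reboot-required"
}
```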

I really miss this feature. I don't know whether it belongs in the kernel packaging, in unattended-upgrades or somewhere else, but now that we have whatmaps... we need this feature to round it all out.


Well, to finish I just want to say that I'm very happy to be a part of the Debian community and that I enjoy reading you guys on the planet. Thanks a lot to all the Debian folks for making Debian not only a great OS, but also a great community.

Sunday, 22 March 2015

Hard asterisk times (or how sip made me unhappy till I fixed this peer definition)

It must be that I no longer touch asterisk like I used to, because it took me a while to write a type=peer section these last few days.

At my first attempt, copying the entry from an old peer definition, I was getting a "404 Not Found" message, with the asterisk server not even trying to authenticate against the peer when I placed a call. I thought the problem was my asterisk not wanting to authenticate, but it wasn't. What should have happened is that asterisk got a "401 Unauthorized" answer, after which it would try to authenticate.

After reading a lot I came to the solution by myself, comparing asterisk's messages with those of a csipsimple client that I had running; my asterisk was saying something like...

From: "Anonymous" <sip:anonymous@anonymous.invalid>;tag=xxxx

While the csipsimple client had the server (IP or hostname) specified instead of anonymous.invalid. This first problem was solved with a fromdomain=whateveryourpeerexpects line in the peer definition.
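With that single line in place the anonymous.invalid part goes away, and the From header asterisk sends looks something like the line below; the exact user part depends on your fromuser/callerid settings, the names here are just placeholders:

```
From: "theuser" <sip:theuser@whateveryourpeerexpects>;tag=xxxx
```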

So I then got the 401 message and asterisk was trying to authenticate, but this server expected an authuser on the registration (also called the digest username, ...), and even though the asterisk sip.conf doc has examples of using authuser in "register" commands, there is none explaining how to do it in a peer definition. I found lots of docs explaining ways that people said weren't working, patches for asterisk, ... I tested and tested and nothing worked.

I was almost going to go to bed again without fixing this (and it's about time, 2AM already) when I started testing things by myself and found that defaultuser is where I should specify this authuser. In fact I had already tested it, but that was while I was still getting the 404 error, so it wouldn't work until that was fixed first.

If you read the sip.conf doc, you'll find that:
;defaultuser=yourusername ; Authentication user for outbound proxies
which is quite clear and is why I had tested it first, but the 404 error made it seem not to work, so in the end my peer looks like this:
[thepeer]
type=peer
host=thehostip
nat=no
disallow=all
allow=ulaw
allow=alaw
fromdomain=thehostip
defaultuser=theauthuser
fromuser=theotheruser
secret=thepassword

:-)
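And since this provider also wants the digest username on registrations, the matching register line would look something like this (same placeholder names as in the peer above; the user:secret:authuser order is the one documented in sip.conf.sample):

```
register => theotheruser:thepassword:theauthuser@thehostip
```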

Sunday, 28 December 2014

haproxy as a very very overloaded sslh

After using haproxy at work for some time I realized it can be configured to do a lot of things. For example, it knows about SNI (in ssl, the method we use to learn which host the client is trying to reach, so that we know which certificate to present and can thus multiplex several virtual hosts on the same ssl IP:port) and it also knows how to make transparent proxy connections (the connections go through haproxy, but the end server will think they arrive directly from the client, since it will see the client's IP as the source IP of the packets).

With these two little features, which are available in haproxy 1.5 (Jessie's version has them all), I thought I could try substituting haproxy for sslh, gaining a lot of possibilities that sslh cannot offer.

Having this in mind, I thought I could multiplex several ssl services, not only https but also openvpn or similar, on port 443, and also let these services arrive transparently at the final server. So what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed: similar to sslh, but with more power and a slightly different behaviour, because I liked it better that way.

There is, however, one caveat I don't like about this setup: to achieve the transparency one has to run haproxy as root, which is not really something one likes :-( So having transparency is great, but we'd be taking some risks here which I personally don't like; to me it isn't worth it.

Anyway, here is the setup. It basically consists of the haproxy configuration, but if we want transparency we'll also need a routing and iptables setup; I'll describe the whole thing here.

Here is what you need to define on /etc/haproxy/haproxy.cfg:

frontend ft_ssl
    bind
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sslvpn req_ssl_sni -i vpn.example.net
    use_backend bk_sslvpn if sslvpn
    use_backend bk_web if { req_ssl_sni -m found }
    default_backend bk_ssh

backend bk_sslvpn
    mode tcp
    source usesrc clientip
    server srvvpn vpnserver:1194

backend bk_web
    mode tcp
    source usesrc clientip
    server srvhttps webserver:443

backend bk_ssh
    mode tcp
    source usesrc clientip
    server srvssh sshserver:22

An example of a transparent setup can be found here, but it lacks some details; for example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY, and there is a better doc for that at squid's wiki. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem.

The problem comes when you use transparency (source usesrc clientip), because then packets coming out of haproxy towards the backend carry the IP of the real client, and thus the backend's answers will go straight to that client (but with different ports and other tcp data), so it will not work. We have to get those packets back to haproxy: we mark them with iptables and then route them to the loopback interface using advanced routing. This is where all the examples tell you to use iptables' mangle table with marking rules on PREROUTING, but that won't work if you have the whole setup (frontend and backends) in just one box; instead you'll have to write those rules on the OUTPUT chain of the mangle table, ending up with something like this:

*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT

Take that just as an example; better suggestions on how to select the traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can mark on PREROUTING, but if you are sending the service to the very same box running haproxy you'll have to mark the packets on the OUTPUT chain.

Once we have the packets marked we just need to route them, something like this will work out perfectly:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source usesrc clientip" lines from the backends and forget about transparency (connections to the backends will come from your local IP); then you'll be able to run haproxy with dropped privileges, and you'll only need the plain haproxy.cfg setup, not the weird iptables and advanced-routing setup.

Hope you like the article. BTW, I'd like to point out the main difference between this setup and sslh: I only send the packets to the ssl backends if the client sends SNI info, otherwise I send them to the ssh server, while sslh sends ssl clients without SNI to the ssl backend as well. If your setup mimics sslh and you want to comment on it, feel free to do so.
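For those who do prefer the sslh behaviour, a sketch of the change would be to route every TLS ClientHello (with or without SNI) to the ssl backend and keep ssh only as the fallback; untested here, with the backend names from the config above:

```
    use_backend bk_web if { req_ssl_hello_type 1 }
    default_backend bk_ssh
```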

Thursday, 11 December 2014

Add a Debian source repository to your ubuntu

Debian does all the work that Ubuntu bases its system on, so... if you want to add the latest Debian source package repositories to Ubuntu, so that you can compile and use the packages under Ubuntu, it should be as easy as adding these lines to your /etc/apt/sources.list file:

deb-src http://ftp.debian.org/debian/ testing main
deb-src http://ftp.debian.org/debian/ unstable main
deb-src http://ftp.debian.org/debian/ ../project/experimental main

Well, just the line you want: if you want the future version of Debian (currently Jessie), use the testing one; if you want current development, use unstable; and sometimes you even get an experimental version if you want to test really bleeding-edge stuff.

The problem here is that after you add the lines you want to your sources.list file and you run your

apt-get update

You will end up with a GPG error because Ubuntu's apt-key keyring doesn't know about Debian's keys, so... we'll have to run a couple of commands to get rid of it, but first we must locate the needed key, for example here:

wget https://ftp-master.debian.org/keys/archive-key-7.0.asc -O - | apt-key add -
apt-get update

Now hopefully your Ubuntu will recognize your Debian sources and you'll be able to build your favourite Debian source package on your good old Ubuntu by doing something like this as a normal user (we'll use fakeroot):

apt-get install fakeroot
apt-get build-dep your_favourite_package
apt-get source your_favourite_package
cd your_favourite_package_source_dir
dpkg-buildpackage -rfakeroot

Squid proxy being transparent also for ssl and other tcp connections by using ssl bump

A long time ago I tried to set up a transparent proxy using squid, but squid traditionally only knows about http and ftp, and about https only in explicit proxy mode. There is no way to handle non-http traffic (for example https) transparently in a traditional setup, so that setup was not what I was looking for.

After looking into TLS SNI and other things, even trying to implement something like that in tools like socat that would proxify things for squid, I discovered ssl bump on squid3, which does all the magic I was looking for.

Squid traditionally has several ways of listening for requests: one is the explicit proxy port (http_port 3128), and the other typical ones are for transparent proxying, indicated by flags:

  • transparent: used to intercept client queries and parse the http Host headers to forward them through squid to the servers
  • tproxy: used to spoof the outgoing address to that of the client, so that squid is really transparent

Well, none of these allow us to forward https or other tcp requests (say, for example, ssh, imap, ...) for a client that doesn't have explicit proxy support. Unluckily this means that a transparent proxy using only this technology is nowadays of little use.

This is where ssl bump comes to the rescue. The old transparent mode of squid, now called intercept, has on squid3 an extra flag, ssl-bump, which can intercept ssl traffic and the like, allowing squid to cache https sites; but to do this one has to create a Certificate Authority, and the clients must trust this CA, which squid uses to issue certificates for the web sites we want to visit.

However, ssl-bump can work without issuing those certificates; in that case squid won't mess with https requests, but it will still allow us to do a pretty neat thing: forward all tcp connections from any client (which doesn't even have to know what a proxy is) transparently. In this case what squid does is ask netfilter (iptables) where the connection it is handling was supposed to go, and squid makes that connection on behalf of the client, so the client starts talking to the other end with all the traffic going through squid.

One might ask why anyone would want this traffic on squid; well, you get all of squid's features: you can control the speed, you get all the typical logs and acls, ...

Of course, if you don't want this you can go with iptables and a traffic shaper, and that's fine as well.

So... you like this idea? Well, if your distro ships squid compiled with ssl support you can skip to the config section, but if you are (like me) using Debian, you must rebuild squid3 with ssl support. Debian doesn't compile squid3 with ssl support because of problems between the openssl license and squid3's (squid developers are looking forward to somebody porting the code to gnutls :-)

Rebuilding squid3 with ssl-bump

Well, to rebuild the squid3 package with ssl support you must install the needed packages:

apt-get install fakeroot libssl-dev
apt-get build-dep squid3

There you may find that your distro doesn't have all the packages needed to build, for example libecap2-dev; in that case you'll have to apt-get source those packages too, then build and install them as we'll do with squid3.

Then do a few things as a normal user (using the fakeroot we have just installed). I've tested this using squid3 from Debian testing (the next version, which will be Jessie).

Start with:

apt-get source squid3

Then we'll edit a couple of files in the source: cd into the source dir; in the debian/control file add libssl-dev to the Build-Depends, and in the debian/rules file add these configure options:

--enable-ssl \
--enable-ssl-crtd \

One can then run debchange -i and add something like this to the changelog:

Build with --enable-ssl and --enable-ssl-crtd.

Now the source is ready to build using your favourite command, like for example:

dpkg-buildpackage -rfakeroot

and at last install the resulting packages using dpkg -i


Configuring squid3

The main configuration file is /etc/squid3/squid.conf, even though it can be split into separate files. In these files we must declare as SSL_ports all the ports for the protocols we want to allow through squid, using an acl like this:

acl SSL_ports port 1935 # rtmp
acl SSL_ports port 5222 # xmpp
acl SSL_ports port 5223 # xmpp over ssl
acl SSL_ports port 5228 # googletalk
acl SSL_ports port 5242 # viber
acl SSL_ports port 4244 # viber

And we'd typically want to define several proxy ports: one for explicit http, another for http interception (the classic transparent proxy) and our ssl-bump port, like this:

http_port 3128
http_port 80 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=off cert=/etc/squid3/squid.pem
acl ssl-bump_port myportname 3127
always_direct allow ssl-bump_port

The always_direct rule forces the squid server to send ssl-bump requests directly and not through other caches, as that wouldn't work at all. The certificate is needed even though we won't be using it; just generate one using make-ssl-cert from the ssl-cert package or plain openssl x509 power.
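For example, a throwaway self-signed certificate for the cert= option can be generated with plain openssl like this. This is my own sketch: CERT_DIR defaults to a temporary directory so it can be tried safely, point it at /etc/squid3 on the real box.

```shell
#!/bin/sh
# Generate a dummy key+certificate pair and concatenate them into squid.pem.
# Squid only needs the file to exist here; with generate-host-certificates=off
# it won't actually issue certificates from it.
CERT_DIR="${CERT_DIR:-$(mktemp -d)}"
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
    -subj "/CN=squid" \
    -keyout "$CERT_DIR/squid.key" -out "$CERT_DIR/squid.crt"
cat "$CERT_DIR/squid.key" "$CERT_DIR/squid.crt" > "$CERT_DIR/squid.pem"
```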

Make sure all our client networks are listed in the localnet acl and that they are allowed to use the proxy:

acl localnet src
http_access allow localnet

A few misc settings: make sure that ssl-bump doesn't generate certificates for any host at all (just in case), set the language for the error pages, and allow a good number of file descriptors so that we don't run out of them:

ssl_bump none all
error_default_language es
max_filedescriptors 8192

And that's pretty much all that's needed in the squid configuration; of course you can do this and more, written in different ways. I have also found that the directory for the ssl certs is not created by default, so we must run:

/usr/lib/squid3/ssl_crtd -c -s /var/lib/ssl_db

Routing it all through squid

You may be wondering how all the traffic we want through squid gets to it. The answer is easy if you have ever configured some kind of transparent proxy: we do it through iptables. In the PREROUTING chain of the nat table we send things to the ssl-bump port or to our transparent proxy port as we like, and then we open the ports on which we serve all this in the INPUT chain of the filter table, with something like the following, which can be loaded with iptables-restore myiptables.cfg:

*nat
:PREROUTING ACCEPT
:POSTROUTING ACCEPT
:OUTPUT ACCEPT
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 1935 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 3389 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 5222 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 5223 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 5228 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 5242 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 4244 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s ! -d -p tcp --dport 80 -j REDIRECT --to-ports 80
# This is not for squid but I like these too (local ntp and dns cache)
-A PREROUTING -i eth0 -s ! -d -p udp --dport ntp -j REDIRECT --to-ports 123
-A PREROUTING -i eth0 -s ! -d -p tcp --dport domain -j REDIRECT --to-port 53
-A PREROUTING -i eth0 -s ! -d -p udp --dport domain -j REDIRECT --to-port 53
COMMIT
*filter
:INPUT DROP
:FORWARD DROP
:OUTPUT ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -i eth0 -p tcp --dport http -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 3127:3128 -j ACCEPT
-A INPUT -i eth0 -p udp --dport bootps -j ACCEPT
-A INPUT -i eth0 -p udp --dport ntp -j ACCEPT
-A INPUT -i eth0 -p udp --dport domain -j ACCEPT
-A INPUT -i eth0 -p tcp --dport domain -j ACCEPT
COMMIT

I believe that's all. Of course, this is all generally speaking and you'll have to adapt it to fit your needs, but there it is. Just add a dhcp server for convenience, and ntp and dns servers so that you don't need to forward those protocols, saving more bandwidth thanks to the dns cache and the local ntp answers, and you're done.

Note that these iptables rules drop all forwarding, since in this example the kernel doesn't need to forward anything: squid does it, and for dns and ntp we use local servers. Of course, if you want to forward some udp traffic, you'll need to add forwarding rules for it.