tag:blogger.com,1999:blog-76927473698420583782024-03-14T04:38:31.002+01:00manty's blogSantiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.comBlogger12125tag:blogger.com,1999:blog-7692747369842058378.post-35399687697049251942022-03-12T00:20:00.000+01:002022-03-12T00:20:04.665+01:00tcpping-nmap a substitute for tcpping based on nmap<p>I was about to set up tcpping-based monitoring on smokeping, but then I discovered it was based on tcptraceroute, which on Debian comes setuid root, the alternative being sudo, so, any way you put it... this runs with root privileges.</p><p>I didn't like what I saw, so I said... couldn't we do this with nmap without needing root?</p><p>And so I started to write a little script that could mimic what tcpping and tcptraceroute were outputting, but using nmap.</p><p>The result is <a href="https://github.com/mantinan/tcpping-nmap">tcpping-nmap</a>, which does just that. The only caveat is that nmap only reports milliseconds, while tcpping gets down to microseconds.</p><p>Hope you enjoy it :-)<br /></p>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0tag:blogger.com,1999:blog-7692747369842058378.post-21346899741071608652021-05-03T00:23:00.000+02:002021-05-03T00:51:41.570+02:00Windows and Linux software Raid dual boot BIOS machine<p>One could think that nowadays having a machine with software raid doing dual boot should be easy, but... my experience showed that it is not that easy.</p><p>Having a Windows machine do software raid is easy (I still don't understand why it doesn't really work like it should, but that is because I'm used to Linux software raid), and having software raid on Linux is also really easy. But doing both on a BIOS-booted machine, on MBR disks (as Windows doesn't allow GPT on BIOS), is quite a pain.</p><p>The problem is how Windows does all this, with its dynamic disks. 
What happens is that you go from a partitioning like this:</p><p><listing>/dev/sda1 * 2048 206847 204800 100M 7 HPFS/NTFS/exFAT
/dev/sda2 206848 312580095 312373248 149G 7 HPFS/NTFS/exFAT
/dev/sda3 312580096 313165823 585728 286M 83 Linux
/dev/sda4 313165824 957698047 644532224 307,3G fd Linux raid autodetect
</listing></p><p>To something like this:</p><p><listing>/dev/sda1 63 2047 1985 992,5K 42 SFS
/dev/sda2 * 2048 206847 204800 100M 42 SFS
/dev/sda3 206848 312580095 312373248 149G 42 SFS
/dev/sda4 312580096 976769006 664188911 316,7G 42 SFS
</listing></p><p>These are the physical partitions as seen by fdisk; logical partitions are still like before, of course, so there is no problem in accessing them under Linux or Windows. What happens here is that Windows is using the first sectors for its dynamic disks stuff, so... you cannot use those to write grub info there :-(</p><p>So... the solution I found was to install Debian's mbr and make it boot grub, but then... where do I store grub's info? Well, to do this I'm using a btrfs /boot which is on partition 3, as btrfs has room for embedding grub's info, and I set up the software raid with ext4 on partition 4, like you can see on my first partition dump. Of course, you can have just btrfs with its own software raid; then you don't need the fourth partition or anything.</p><p>There are however some caveats in doing all this: I found that I had to install grub manually using grub-install --no-floppy on /dev/sda3 and /dev/sdb3, as Debian's grub refused to give me the option to install there, and... several warnings came as a result, but things work OK anyway.</p><p>One more warning: I did all this on Buster, but it looks like with GRUB 2.04, which is included in Bullseye, things have gotten a bit bigger, so at least on my partitions there was no room for it, and I had to leave the old Buster grub around for now; if anybody has any ideas on how to solve this... they are welcome.<br /></p>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com1tag:blogger.com,1999:blog-7692747369842058378.post-27185372686621068112015-05-04T17:25:00.000+02:002017-01-02T22:07:52.415+01:00ScreenLock on Jessie's systemd<p>Something I was used to, and which came as standard on wheezy if you installed acpi-support, was screen locking when you were suspending, hibernating, ...</p>
<p>This is something that I still haven't found on Jessie and which somebody had pointed me to solve via /lib/systemd/system-sleep/whatever hacking, but that didn't seem quite right, so I gave it a look again and this time I was able to add some config files at /etc/systemd plus a script which does what acpi-support used to do before.</p>
<p><b>Edit:</b> Michael Biebl has suggested on my google+ post that this is an ugly hack and that one shouldn't use this solution; instead we should use solutions with direct support for logind, like desktops with built-in support or xss-lock. The reasons for this being ugly are pointed out at <a href="https://bugs.debian.org/755888">this bug</a></p>
<p><b>Edit (2):</b> I've just done the recommended thing for LXDE but it should be similar for any other desktop or window manager lacking logind integration, you just need to apt-get install xss-lock and then add @xss-lock -- xscreensaver-command --lock to .config/lxsession/LXDE/autostart or do it through lxsession-default-apps on the autostart tab. Oh, btw, you don't need acpid or the acpi-support* packages with this setup, so you can remove them safely and avoid weird things.</p>
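<p>For reference, this is what my LXDE autostart file ends up looking like after the change (the first two lines are LXDE's usual defaults, only the last one is added; any locker command works after the <b>--</b>):</p>

```
@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xss-lock -- xscreensaver-command --lock
```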
<p>The main thing here is this little config file: <b>/etc/systemd/system/screenlock.service</b></p>
<listing>
[Unit]
Description=Lock X session
Before=sleep.target
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/screenlock.sh
[Install]
WantedBy=sleep.target
</listing>
<p>This config file is activated by running: <b>systemctl enable screenlock</b></p>
<p>As you can see that config file calls <b>/usr/local/sbin/screenlock.sh</b> which is this little script:</p>
<listing>
#!/bin/sh
# This depends on acpi-support being installed
# and on /etc/systemd/system/screenlock.service
# which is enabled with: systemctl enable screenlock
test -f /usr/share/acpi-support/state-funcs || exit 0
. /etc/default/acpi-support
. /usr/share/acpi-support/power-funcs
if [ "x$LOCK_SCREEN" = xtrue ]; then
    . /usr/share/acpi-support/screenblank
fi
</listing>
<p>The script of course needs execution permissions. I tend to combine this with my power button making the machine hibernate, which was also easier to do before and which is now done at <b>/etc/systemd/logind.conf</b> (doesn't the name already tell you?) where you have to set: <b>HandlePowerKey=hibernate</b></p>
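<p>In case it helps, the relevant fragment of /etc/systemd/logind.conf ends up looking like this (the key lives in the [Login] section and ships commented out by default):</p>

```ini
[Login]
HandlePowerKey=hibernate
```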
<p>And that's all.</p>
Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com4tag:blogger.com,1999:blog-7692747369842058378.post-26807489595700121642015-04-15T00:15:00.001+02:002015-04-15T00:15:27.770+02:00Hello Debian Planet and Jessie's question<p>This was just meant to say hello to the Debian Planet readers, but I'll end it with a Jessie related question, so...</p>
<h4>Intro</h4>
<p>For those who don't know me, I was born in Betanzos, A Coruña, Galicia, in the north-west of Spain, and I currently live in A Coruña. I've been a Debian developer since the year 2000, when I was quite a bit more involved than I am now (life changes), but I'm always hoping to be able to dedicate more time to the project; I expect this will happen when my two children grow up a little bit.</p>
<p>I had been wanting to send my blog's Debian-related posts to the planet but had always failed to do so; yesterday I found the planet wiki page and said... it's so easy that I have no excuse not to do it, so here I am.</p>
<p>Oh, BTW... if I ever comment on Debian's anniversary (16th of August) that at Betanzos we are launching <a href="http://www.tesourosdegalicia.com/en/el-globo-de-san-roque-de-betanzos/">a really huge paper balloon</a>, it is not to commemorate Debian's date but in honour of San Roque, even though maybe we should talk to the Pita family to have Debian's logo on it for our 25th anniversary :-)</p>
<h4>Jessie's question</h4>
<p>In Jessie we no longer have update-notifier-common, which had the /etc/kernel/postinst.d/update-notifier script that allowed us to automatically reboot on a kernel update. I have apt-file searched for something similar but haven't found it, so... who is now responsible for echoing to /var/run/reboot-required.pkgs on a kernel upgrade, so that the system reboots itself if we have configured unattended-upgrades to do so?</p>
<p>I really miss this stuff; I don't know if it should be in the kernel, in unattended-upgrades or elsewhere, but now that we have whatmaps... we need this feature to round it all off.</p>
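<p>Meanwhile, as a stopgap, one could drop a little hook into /etc/kernel/postinst.d/ mimicking what the update-notifier script used to do; this is just my own sketch of the old semantics, not the packaged code:</p>

```shell
#!/bin/sh
# Stopgap sketch: record that a reboot is needed after a kernel install,
# the way update-notifier-common's hook used to do it.
# dpkg runs /etc/kernel/postinst.d hooks with the kernel version as $1.
STAMPDIR="${STAMPDIR:-/var/run}"   # overridable for testing outside /var/run
echo "linux-image-${1:-$(uname -r)}" >> "$STAMPDIR/reboot-required.pkgs"
touch "$STAMPDIR/reboot-required"
```

<p>If I read it right, unattended-upgrades only checks for /var/run/reboot-required when its automatic reboot option is enabled, so this should be enough to trigger it.</p>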
<h4>End</h4>
<p>Well, to finish I just want to say that I'm very happy to be a part of the Debian community and that I enjoy reading you guys on the planet. Thanks a lot to all the Debian folks for making Debian not only a great OS, but also a great community.</p>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com5tag:blogger.com,1999:blog-7692747369842058378.post-20017640559259976112014-12-28T01:44:00.002+01:002015-04-13T20:38:54.598+02:00haproxy as a very very overloaded sslh <p>After using haproxy at work for some time I realized that it can be configured for a lot of things, for example: it knows about SNI (on ssl this is the method used to know which host the client is trying to reach, so that we know which certificate to present and can thus multiplex several virtual hosts on the same ssl IP:port) and it also knows how to make transparent proxy connections (the connections go through haproxy, but the end server will think they are arriving directly from the client, as it will see the client's IP as the source IP of the packets).</p>
<p>With these two little features, which are available on haproxy 1.5 (Jessie's version has them all), I thought I'd give substituting sslh with haproxy a try, giving me a lot of possibilities that sslh cannot offer.</p>
<p>Having this in mind, I thought I could multiplex several ssl services, not only https but also openvpn or similar, on the 443 port, and also allow these services to arrive transparently at the final server. Thus what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed, similar to sslh but more powerful and with slightly different behaviour, because I liked it that way.</p>
<p>There is however one caveat I don't like about this setup: to achieve the transparency one has to run haproxy as root, which is not really something one likes :-( So, having transparency is great, but we'd be taking some risks here which I personally don't like; to me it isn't worth it.</p>
<p>Anyway, here is the setup. It basically consists of the haproxy configuration, but if we want transparency we'll have to add a routing and iptables setup to it; I'll describe the whole thing here.</p>
<p>Here is what you need to define on /etc/haproxy/haproxy.cfg:</p>
<listing>
frontend ft_ssl
    bind 192.168.0.1:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sslvpn req_ssl_sni -i vpn.example.net
    use_backend bk_sslvpn if sslvpn
    use_backend bk_web if { req_ssl_sni -m found }
    default_backend bk_ssh

backend bk_sslvpn
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvvpn vpnserver:1194

backend bk_web
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvhttps webserver:443

backend bk_ssh
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvssh sshserver:22
</listing>
<p>An example of a transparent setup can be found <a href="http://blog.haproxy.com/2013/09/16/howto-transparent-proxying-and-binding-with-haproxy-and-aloha-load-balancer/">here</a>, but it lacks some details; for example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY, and there is a better doc for that at <a href="http://wiki.squid-cache.org/Features/Tproxy4">squid's wiki</a>. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem. The problem comes if you are using transparency (source 0.0.0.0 usesrc clientip), because then packets coming out of haproxy will carry the IP of the real client, and thus the answers of the backend will go to that client (but with different ports and other tcp data), so it will not work. We'll have to get those packets back to haproxy; for that we'll mark the packets with iptables and then route them to the loopback interface using advanced routing. This is where all the examples will tell you to use iptables' mangle table with rules marking on PREROUTING, but that won't work if you have the whole setup (frontend and backends) on just one box; instead you'll have to write those rules on the OUTPUT chain of the mangle table, ending up with something like this:</p>
<listing>
*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT
</listing>
<p>Take that just as an example; better suggestions on how to decide what traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can do it on PREROUTING, but if you are sending the service to the very same box as haproxy you'll have to mark the packets on the OUTPUT chain.</p>
<p>Once we have the packets marked we just need to route them, something like this will work out perfectly:</p>
<listing>
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
</listing>
<p>And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source 0.0.0.0 usesrc clientip" lines on the backends and forget about transparency (connections to the backend will come from your local IP), but you'll be able to run haproxy with dropped privileges and you'll just need the plain haproxy.cfg setup and not the weird iptables and advanced routing setup.</p>
<p>Hope you like the article. BTW, I'd like to point out the main difference of this setup vs sslh: I'm only sending the packets to the ssl backends if the client is sending SNI info, otherwise I'm sending them to the ssh server, while sslh will send ssl clients without SNI to the ssl backend as well. If your setup mimics sslh and you want to comment on it, feel free to do so.</p>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com3tag:blogger.com,1999:blog-7692747369842058378.post-20300959436635638122014-12-11T13:49:00.001+01:002014-12-11T13:49:50.829+01:00Add a Debian source repository to your Ubuntu<p>Debian does all the work that Ubuntu bases its system on, so... if you want to add our latest Debian source package repository to Ubuntu, so that you can compile and use packages from it under Ubuntu, that should be as easy as adding this to your /etc/apt/sources.list file:</p>
<listing>
deb-src http://ftp.debian.org/debian/ testing main
deb-src http://ftp.debian.org/debian/ unstable main
deb-src http://ftp.debian.org/debian/ ../project/experimental main
</listing>
<p>Well, just the line you want: if you want the future version of Debian (currently Jessie) use the testing one, if you want current development use unstable, and sometimes there is even an experimental version if you want to test really bleeding-edge stuff.</p>
<p>The problem here is that after you add the lines you want to your sources.list file and you run your</p>
<listing>
apt-get update
</listing>
<p>You will end up with a GPG error, because Ubuntu's apt-key keyring doesn't know about Debian's keys, so... we'll have to run a couple of commands to fix this, but first we must locate the needed key, for example <a href="https://ftp-master.debian.org/keys.html">here</a></p>
<listing>
wget https://ftp-master.debian.org/keys/archive-key-7.0.asc -O -|apt-key add -
apt-get update
</listing>
<p>Now hopefully your Ubuntu will recognize your Debian sources and you'll be able to get your favourite Debian source into your good old Ubuntu by doing something like this as a user (we'll use fakeroot):</p>
<listing>
apt-get install fakeroot
apt-get build-dep your_favourite_package
apt-get source your_favourite_package
cd your_favourite_package_source_dir
dpkg-buildpackage -rfakeroot
</listing>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0tag:blogger.com,1999:blog-7692747369842058378.post-74801871117542904222014-12-11T00:02:00.000+01:002014-12-12T14:46:23.189+01:00Squid proxy being transparent also for ssl and other tcp connections by using ssl bump<p>A long time ago I was trying to have a transparent proxy setup by using squid, but squid traditionally only knows about http, ftp and https in explicit proxy mode. There is no way to handle non-http traffic (for example https) transparently on a traditional setup, so that setup was not what I was looking for.</p>
<p>After looking into TLS SNI and other things, trying even to implement things like that on some tools like socat that would proxify things for squid, I discovered ssl bump on squid3, which just does all the magic I was looking for.</p>
<p>Squid traditionally has several ways of listening to requests, one of which is the explicit proxy port (http_port 3128) and the other typical ones are for transparent proxy, this is indicated by flags:</p>
<ul>
<li>transparent: used to intercept server queries and parse http Host headers so as to forward them through squid to the servers</li>
<li>tproxy: used to spoof the outgoing address to that of the client, so that squid is really transparent</li>
</ul>
<p>Well, none of this allows us to forward https or other tcp requests (say, for example, ssh, imap, ...) for a client that doesn't have explicit proxy support. Unfortunately this means that a transparent proxy using this technology nowadays is of no use.</p>
<p>This is where ssl bump comes to the rescue: the old transparent mode of squid, now called intercept, gains on squid3 an extra flag, ssl-bump, which is able to intercept ssl traffic and the like, allowing squid to cache https webs; but to do this one has to create a Certificate Authority, and the clients must trust this CA, which squid uses to issue certificates for the web sites we want to visit.</p>
<p>However, ssl-bump can work without issuing these certificates, and in that case squid won't mess with https requests, but it will still allow us to do a pretty neat thing, which is to forward all tcp connections from any client (it doesn't even have to know what a proxy is) transparently. In this case squid asks netfilter (iptables) where the connection it is handling was supposed to go, and squid makes that connection for the client, so that the client starts talking to the other end with all the traffic going through squid.</p>
<p>One might ask why one would want this traffic going through squid; well, you get all of squid's features: you can control the speed, you get all the typical logs and acls, ...</p>
<p>Of course, if you don't want this you can go with iptables and a traffic shaper, and that's fine as well.</p>
<p>So... you like this idea? Well, if your distro has squid compiled with ssl support you can skip to the config section, but if you are (like me) using Debian, you must recompile your squid3 with ssl support. Debian doesn't compile squid3 with ssl support as there are problems between the openssl license and squid3's (squid developers are looking forward to somebody porting the code to gnutls :-)</p>
<br />
<b>Rebuilding squid3 with ssl-bump</b>
<p>Well, to rebuild the squid3 package with ssl support you must install the needed packages:</p>
<listing>
apt-get install fakeroot libssl-dev
apt-get build-dep squid3
</listing>
<p>There you may find that your distro doesn't have all the packages needed to compile, like for example libecap2-dev; in this case you'll have to apt-get source these packages and compile and install them like we'll do with squid3.</p>
<p>And then do a few things as a user (we'll use fakeroot, which we have just installed). I've tested this using squid3 from Debian testing (the next version, which will be Jessie).</p>
<p>Start with:</p>
<listing>
apt-get source squid3
</listing>
<p>And then we'll edit a couple of files in the source, so cd into the source dir; in the debian/control file you must add libssl-dev to the build-depends, and in the debian/rules file you must add these configure options:</p>
<listing>
--enable-ssl \
--enable-ssl-crtd \
</listing>
<p>One can then run debchange -i and add something like this on the changelog:</p>
<listing>
Build with --enable-ssl and --enable-ssl-crtd.
</listing>
<p>Now the source is ready to build using your favourite command, like for example:</p>
<listing>
dpkg-buildpackage -rfakeroot
</listing>
<p>And finally install the resulting packages using dpkg -i.</p>
<br />
<b>Configuration</b>
<p>The main configuration file is stored at /etc/squid3/squid.conf, even though it can be split into separate files. On these files we must set as SSL_ports all the ports for the protocols that we want to allow through squid using an acl like this:</p>
<listing>
acl SSL_ports port 1935 # rtmp
acl SSL_ports port 5222 # xmpp
acl SSL_ports port 5223 # xmpp over ssl
acl SSL_ports port 5228 # googletalk
acl SSL_ports port 5242 # viber
acl SSL_ports port 4244 # viber
</listing>
<p>And we'd typically want to define several proxy ports, one for explicit http, another one for http interception (classic transparent proxy) and our ssl-bump port, like this:</p>
<listing>
http_port 3128
http_port 80 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=off cert=/etc/squid3/squid.pem
acl ssl-bump_port myportname 3127
always_direct allow ssl-bump_port
</listing>
<p>The always_direct rule forces the squid server to send ssl-bump requests directly and not through other caches, as that wouldn't work at all. The certificate is needed even though we won't be using it; just generate one using make-ssl-cert from the ssl-cert package or plain openssl x509 power.</p>
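<p>For example, something along these lines generates a usable dummy certificate (file names and subject here are just mine, adjust at will):</p>

```shell
# Generate a throwaway self-signed key and certificate; with
# generate-host-certificates=off squid just needs the file to exist.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=squid-dummy" -keyout squid.key -out squid.crt
# squid wants key and certificate together in one file
cat squid.key squid.crt > squid.pem
```

<p>Then move squid.pem to /etc/squid3/squid.pem and restrict its permissions.</p>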
<p>Make sure we have all our client networks listed on our localnet acl and that they are allowed to use the proxy:</p>
<listing>
acl localnet src 127.0.0.1/32 192.168.0.0/16
http_access allow localnet
</listing>
<p>Several misc settings: make sure that ssl-bump doesn't generate certificates for any host at all (just in case), set the language for the errors, and allow a good number of file descriptors so that we don't run out of them:</p>
<listing>
ssl_bump none all
error_default_language es
max_filedescriptors 8192
</listing>
<p>And that is pretty much what is needed in the squid configuration; of course you can do this and more, writing it all in different ways. I have found that the directory for the ssl certs is not created by default; we must run: /usr/lib/squid3/ssl_crtd -c -s /var/lib/ssl_db</p>
<br />
<b>Routing it all through squid</b>
<p>You may be wondering how all the traffic that we are going to allow through squid gets to it. Well, the answer is easy if you have ever configured some kind of transparent proxy: we do it through iptables. In the PREROUTING chain of the nat table we send things to the ssl-bump port or to our transparent proxy port as we like, and then we open the ports on which we are serving all this in the INPUT chain of the filter table, with something like this, which can be loaded with iptables-restore myiptables.cfg:</p>
<listing>
*nat
:PREROUTING ACCEPT
:POSTROUTING ACCEPT
:OUTPUT ACCEPT
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 1935 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 3389 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 5222 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 5223 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 5228 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 5242 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 4244 -j REDIRECT --to-ports 3127
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport 80 -j REDIRECT --to-ports 80
#This is not for squid but I like these too (local ntp and dns cache)
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p udp --dport ntp -j REDIRECT --to-ports 123
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p tcp --dport domain -j REDIRECT --to-port 53
-A PREROUTING -i eth0 -s 192.168.0.0/16 ! -d 192.168.0.0/16 -p udp --dport domain -j REDIRECT --to-port 53
COMMIT
*filter
:INPUT DROP
:FORWARD DROP
:OUTPUT ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -i eth0 -p tcp --dport http -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 3127:3128 -j ACCEPT
-A INPUT -i eth0 -p udp --dport bootps -j ACCEPT
-A INPUT -i eth0 -p udp --dport ntp -j ACCEPT
-A INPUT -i eth0 -p udp --dport domain -j ACCEPT
-A INPUT -i eth0 -p tcp --dport domain -j ACCEPT
COMMIT
</listing>
<p>I believe that's all. Of course this is generally speaking and you'll have to adapt it to fit your needs, but there it is. Just add a dhcp server for convenience, plus the ntp and dns servers, so that you don't need to forward those protocols and you save more bandwidth thanks to the dns cache and local ntp answers, and you're done.</p>
<p>Note that these iptables rules drop all forwarding, as in this example the kernel doesn't need to forward anything: squid does it, and for dns and ntp we use local servers. Of course, if you want to forward some udp traffic, you'll need to add forwarding rules for that.</p>
Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com3tag:blogger.com,1999:blog-7692747369842058378.post-38445580185930982022014-12-09T00:49:00.000+01:002014-12-11T00:07:54.926+01:00Transitioning from 0xF6A32A8E to 0xD876D5A3 (moving to stronger cryptography)<listing>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA256
Hi!
As Debian is retirying good old 1024 bit keys I'm transitioning to a new
4096 bits key.
I still don't have many signatures on my new key, but it is signed with the
old one wich to date isn't known to be compromised.
My old key was:
pub 1024D/F6A32A8E 2000-09-16 Santiago Garcia Mantinan (manty) <manty@debian.org>
Primary key fingerprint: 3F0A 12FC 0B55 A917 D791 82D3 72FD C205 F6A3 2A8E
My new key is:
pub 4096R/D876D5A3 2014-10-06 Santiago Garcia Mantinan (manty) <manty@debian.org>
Primary key fingerprint: 06A3 E576 0F61 1B4B B1A9 0E68 B868 8CA3 D876 D5A3
I hope to get this new key on Debian's keyring before the end of the year
and hopefully contribute to a stronger keyring.
Regards.
Santiago García Mantiñán (manty)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iEYEARECAAYFAlSGNrIACgkQcv3CBfajKo5yRACguwN5wahVJrIli3FGde9KS83W
TbYAn2NAf7JZuG9WiiKAX/7DKfc/1J33iQIcBAEBCAAGBQJUhjayAAoJELhojKPY
dtWjjE8QAM87bl3z00YgykCGCH7FElqivOesJ3Op16tDU2/o0AP249iA0rRToH9q
SR0Ik1oiziWjh7ccUDHmVeIgV2wpso8wcKmJuZbOqQJ4sVvIzRd0IN2G0kBfyFDn
+ff/J6aGcDCFLp0nIStEJiycKd4UWcqAoA+RB+wpBwIqH/yXw5l8mEyv7XHwfeHT
3D2r9ocpVdRu9QzElxs8sO7cXOtJ6+wKUurGaDobAIZC/1GIF6UlcAfaV5y0uxYa
ZguAye8ff4ggrFxH/dxzTrC/ushIXn7MkjQSIphbkHcpbUwRjXaEBRL49WLnbHpZ
lmKfmnUeW8a3zkuTQEfJV3i97k5WHLV/AQpfIbAbTFivXleDBIbf6XIvV3EOVPrr
nGN1S625Qj9lrcHpIK1PB8xElqJ31bUss62nlFghaEzEq8eLalJRDMRfOddj/NtN
4Ig7DdJSXUbCjxYVkBPzaRVoTT92sQHJ/c8siH8F5I2+YcwTNTQaOXf3LCfvvKd2
a+x1wc9FlJgi7hrRFiV63MQN68WIDzY2g+irWQSgzLFsA4k4RYGgC+6Ap5R3umln
EnBCR6vRmrONS92bb0SuMis1D9WOz62z34OSMNh68Mg1BaWlM2lIczcdTyoIcKLM
c0PWnsWTRYjQn+QbAH35YnmB395Z8bNoL2k/XhkuEEVCqXK/hWsm
=Tm5k
-----END PGP SIGNATURE-----
</listing>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0tag:blogger.com,1999:blog-7692747369842058378.post-8323926391306705062014-07-06T01:37:00.000+02:002014-12-11T00:08:25.536+01:00Hibernating on power button with DebianIt's been years since I started hibernating my machine by pressing the power button instead of halting it. This started at work: my mate Ramón wanted to get all machines automatically hibernated through the night and was doing some tests on the old Windows XP; I didn't know how to hibernate Linux at that time, so I took a look at it, saw it was easy and worked flawlessly, and I've been doing it ever since (how the Windows thing ended is another story with another end).<br />
<br />
Lately what I had done was install the acpi-support package and, in the /etc/acpi/events/powerbtn-acpi-support file, change its action to: <listing>action=/etc/acpi/sleep_suspend.sh suspend</listing>
Which basically means: when the power button is pressed, hibernate (suspend to disk).<br />
<br />
This used to work OK on most of my devices; however, an old AMD socket 939 board did two hibernations each time the power button was pressed, and you could see the machine go to hibernate as you were waking it up. It seems I had even blogged about this <a href="http://manty.net/2011/11/executing-with-keybutton-how-to.html">here</a>.<br />
<br />
My latest solution for this problem turned out to be as simple as this little change to /etc/acpi/sleep_suspend.sh:
<listing>
29c29
< pm-hibernate
---
> (sleep 1;pm-hibernate) &
</listing>
I don't remember how I ended up with this solution instead of my first locking solution, but it did work for my desktop until it left sysvinit in favour of upstart but that's another story.<br />
<br />
The thing is that ever since kernel 3.14 started to hit Debian I found the same sleep-twice problem on my old laptop as well, and it still happens on 3.15.3. I have applied my solution to sleep_suspend.sh and it still works, but... I think I'll have to pin down the details and file a bug; the question is... against the Debian kernel? Or try posting again to the linux-acpi mailing list <a href="http://www.spinics.net/lists/linux-acpi/msg33317.html">like I did in 2011</a> and see if we get better results this time?<br />
<br />
We'll see tomorrow, now it is time to go to bed, night.
Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0tag:blogger.com,1999:blog-7692747369842058378.post-10675272726216060092014-04-20T02:16:00.000+02:002014-04-20T02:18:17.357+02:00Wifi repeater (AP and STA with one radio) using DebianIt has been a long time since I last "repeated" a radio using my laptop, as I typically use a small OpenWRT device, so... I had to look it all up again; hey, we are in the nl80211 days now.<br />
<br />
So... I tried to look this up starting from one of my working setups, an OpenWRT device, and what did I find there? I found they are using a patched wpa_supplicant which says:
<br />
<br />
<div style="text-align: center;">
<b>-H = connect to a hostapd instance to manage state changes
</b></div>
<div style="text-align: center;">
<br /></div>
However this patch doesn't seem to have reached upstream, so... is it needed? Well, I don't think it is; at least one can make a setup that works without it. BTW, if somebody can clarify this option and why it hasn't reached upstream, that would be great.
<br />
<br />
Well, here is my setup which seems to work OK on my Debian Jessie.<br />
<br />
I'll be using hostapd and dnsmasq, what I do is disable them so that they are not started on boot and I start them whenever I need them (use update-rc.d for this or any other method you like).<br />
<br />
I have defined an interface (ap0) which is not automatic or hotplug and which I ifup manually when I want to repeat a wifi:<br />
<listing>iface ap0 inet static
hwaddress XX:XX:XX:XX:XX:XX
address XX.XX.XX.XX
netmask 255.255.255.0
pre-up iw phy phy0 interface add ap0 type __ap || true
up cp /etc/hostapd/hostapd.conf.nochannel /etc/hostapd/hostapd.conf
up iw dev ath0 info|sed -n "s/.*channel \([^ ]*\) .*/channel=\1/p" >> /etc/hostapd/hostapd.conf
up /etc/init.d/hostapd start
up /etc/init.d/dnsmasq start
up iptables-restore /etc/iptables.masq
up echo 1 > /proc/sys/net/ipv4/conf/ap0/forwarding;echo 1 > /proc/sys/net/ipv4/conf/ath0/forwarding
down echo 0 > /proc/sys/net/ipv4/conf/ap0/forwarding;echo 0 > /proc/sys/net/ipv4/conf/ath0/forwarding
down /etc/init.d/dnsmasq stop
down /etc/init.d/hostapd stop
post-down iw dev ap0 del || true
</listing>In the interfaces file what I do is: create the new AP interface, build a hostapd.conf file adding the current channel of my client interface (ath0), start hostapd and dnsmasq, and set up masquerading and forwarding.<div>
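The sed in the interfaces stanza above pulls the current channel out of the `iw dev ... info` output. As a quick illustration, here it is wrapped in a function (the sample line in the comment is a hypothetical `iw` output of the kind seen around Jessie; the exact format may vary between iw versions):

```shell
# Channel extraction as used in the interfaces file above; reads
# "iw dev <if> info" output on stdin. Given a line such as
#   channel 11 (2462 MHz), width: 20 MHz
# it prints "channel=11", ready to append to hostapd.conf.
extract_channel() {
    sed -n "s/.*channel \([^ ]*\) .*/channel=\1/p"
}

# e.g.: iw dev ath0 info | extract_channel >> /etc/hostapd/hostapd.conf
```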
<br />
The /etc/hostapd/hostapd.conf.nochannel file is a simple config file; something like this works:
<br />
<listing>interface=ap0
ctrl_interface=/run/hostapd-phy0
driver=nl80211
ssid=Whatever
hw_mode=g
wpa=2
wpa_pairwise=CCMP
wpa_passphrase=BlaBlaBla
country_code=ES
ignore_broadcast_ssid=0
</listing>
And of course you can add all the parameters you want, for example, for my 802.11N radio I use:
<listing>wmm_enabled=1
ieee80211n=1
ht_capab=[HT40+][SHORT-GI-40][DSSS_CCK-40]
</listing>
I won't go into dnsmasq details; I don't use it much, though I think I should know it better. I only added this couple of lines to the default config:
<listing>interface=ap0
dhcp-range=StartingIP,EndingIP,12h
</listing>
Well, I guess that is pretty much it. As for the iptables rules... you know, allow forwarding from your AP to your client wifi, add a POSTROUTING rule with -j MASQUERADE for the traffic going out, and that's it.
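I never showed what /etc/iptables.masq (the file loaded with iptables-restore in the interfaces stanza) might contain. Roughly, in iptables-restore format, something like this should do; this is a sketch that assumes ap0 is the AP side and ath0 the wifi client (uplink) side, as in the rest of the post:

```
# Hypothetical /etc/iptables.masq, iptables-restore format.
*filter
-A FORWARD -i ap0 -o ath0 -j ACCEPT
-A FORWARD -i ath0 -o ap0 -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
*nat
-A POSTROUTING -o ath0 -j MASQUERADE
COMMIT
```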
<br />
<br />
Hope you find this useful, and if you can shed some light on the -H parameter's history, feel free to comment.
<br />
<br />
What I think after reading the commit (https://dev.openwrt.org/browser/trunk/package/network/services/hostapd/patches/453-ap_sta_support.patch?rev=37738) is that they are having wpa_supplicant reload any time the client reconnects or whatever, but this can also be done with wpa_cli, so that must be why it hasn't reached upstream (but that's just my guess, any light out there?).
<br />
<br />
It feels nice to write after such a long time :-) Regards.</div>Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0tag:blogger.com,1999:blog-7692747369842058378.post-55655294296579651122013-04-26T00:22:00.001+02:002013-05-01T10:14:48.007+02:00Migrating from 32 to 64 bits (going from Debian i386 to amd64)The new version of Debian, due in a few days (Debian 7.0, Wheezy), comes with multiarch support. Multiarch doesn't officially support migrating from 32 to 64 bits (at least I think so), but I have managed to migrate from i386 to amd64 with just one reboot to change kernels.<br />
<br />
This is something that is not officially supported and that shouldn't be attempted without a good knowledge of the Debian system, as you can run into problems that need manual intervention. I'll try to explain what I did, but this is not a step-by-step guide and it is prone to fail in many cases, leaving you to install things by hand or do similar fixes.<br />
<br />
One more warning... you should be on Wheezy before trying any of this; going from Squeeze i386 to Wheezy amd64 gave me some serious problems. I managed to do that too, but with far too much manual intervention.<br />
<br />
<b>As always, I'm not liable for any problems you may have trying to do any of this.</b><br />
<br />
The first thing I did, as this was a pure i386 machine with an i386 kernel, was to install linux-image-amd64 in order to have a 64-bit kernel; this kernel allows us to run both i386 and amd64 apps. Then I rebooted in order to run this kernel. It may be a good idea to install some extra packages like deborphan and binutils, just in case we need them later.
To allow us to install the amd64 packages we must enable the arch by using:
<br />
<listing>dpkg --add-architecture amd64
apt-get update</listing>
After that we can start installing the amd64 packages, what I did was:
<listing>apt-get install libc6:amd64</listing>
And then
<listing>apt-get install apt:amd64</listing>
There I needed to answer (Yes, do as I say!), as installing apt:amd64 first removes the i386 apt, which is something you wouldn't usually want, but here it is exactly what we are after :-)<br />
<br />
Then I went for a
<br />
<listing>apt-get install dpkg:amd64</listing>
although in some other tests dpkg was already amd64 after the apt installation, so maybe this is not needed.
Anyway, after that it looks like doing a
<listing>apt-get -f install</listing>
installs all the basic amd64 dependencies. But it seems this command can sometimes break, or at other times remove some packages; what I did was note down the removed packages in order to reinstall them afterwards (remember that this only removes the binaries, the config files stay there, so if you reinstall those packages later they'll inherit that config, which is what we want). I had to run this command several times until I got it to finish.<br />
<br />
Getting this to finish doesn't mean that we are done. Doing a
<listing>dpkg -l|grep :i386</listing> will show us which packages still remain on the i386 arch. You can install them again with apt and they should be reinstalled as amd64.
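That reinstall step can be scripted. Here is a sketch, with the assumption (which does not hold for every package) that each remaining :i386 package actually exists for amd64 in the archive:

```shell
# Rewrite installed ":i386" package names as ":amd64" install targets.
# Reads "dpkg -l" style output on stdin; only "ii" (installed) lines
# whose package name ends in ":i386" are rewritten and printed.
to_amd64() {
    awk '$1 == "ii" && $2 ~ /:i386$/ { sub(/:i386$/, ":amd64", $2); print $2 }'
}

# e.g.: apt-get install $(dpkg -l | to_amd64)
```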
After that you can reinstall all the packages that had previously been removed, and afterwards do some cleaning with
<listing>apt-get autoremove --purge </listing>
If you have finished cleaning up using deborphan or whatever and you no longer see any i386 packages in dpkg... you are done :-)<br />
<br />
<br />
<br />
<b>TROUBLESHOOTING:</b><br />
<br />
I had a problem with findutils, as it seems the Debian dpkg system depends heavily on find when it tries to install findutils itself. To solve this I went to /var/cache/apt/archives/ and did
<listing>dpkg -x findutils... /tmp/findutils</listing> and then had to cp the find binary to its place and then <listing>dpkg -i findutils...</listing>
I had a similar problem with bash and dash, which go together; again I had to apply the same solution as in the findutils case.<br />
<br />
With some of the heavier problems you may end up with dpkg not working at all; you can then use
<listing>ar x package.deb</listing> to extract the data.tar.?z and then use tar to extract the binaries of the package out of there.<br />
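That ar plus tar recovery trick can be wrapped up like this; a sketch, assuming GNU tar (which autodetects the gz/xz compression behind the data.tar.?z name) and the ar tool from binutils:

```shell
# Unpack a .deb's payload without a working dpkg.
#   deb_unpack /absolute/path/to/package.deb DESTDIR
# extracts the data.tar.* member with ar and unpacks it into DESTDIR
# (you would use / as DESTDIR to actually restore the package's files).
deb_unpack() {
    deb=$1
    dest=$2
    tmp=$(mktemp -d) || return 1
    ( cd "$tmp" && ar x "$deb" )          # pull the deb's members apart
    tar -C "$dest" -xf "$tmp"/data.tar.*  # GNU tar autodetects compression
    rm -rf "$tmp"
}
```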
<br />
A useful command is
<listing>dpkg -l|grep ^rc</listing> to see the packages that have been removed but still have configs around.<br />
<br />
This is something I used to try to identify targets to install:
<listing>deborphan -a -n|sed -n "s/[^ ]* *\(.*\):i386/\1:amd64/p"</listing>
Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com1tag:blogger.com,1999:blog-7692747369842058378.post-41103557110614455332011-01-03T10:33:00.004+01:002013-05-01T10:17:36.209+02:00DFS access under Debian GNU/LinuxIf you try to access a DFS resource using cifs you may get this error:<br /><br />mount error(11): Resource temporarily unavailable<br />Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)<br /><br />I used to think that DFS access was still not mature enough until today, when I googled for this error and found that if I installed keyutils this error would disappear and everything would work OK. I thought this would not apply to me but it looks like it did.<br /><br />So, this may apply to you as well if you are running a recent enough Debian and/or kernel; I'm currently running Squeeze with its 2.6.32 kernel.<br /><br />If it doesn't: check for CONFIG_KEYS=y and CONFIG_CIFS_DFS_UPCALL=y in your kernel config and for recent cifs-utils and keyutils packages (the Squeeze versions work OK)<br /><br />Hope this helps.Santiago García Mantiñánhttp://www.blogger.com/profile/04766337312954063775noreply@blogger.com0