Showing posts with the label Linux Admin.

Tuesday, July 1, 2025

openwrt-reinstall revisited, now with more power :-)

It has been almost four years since I wrote about openwrt-reinstall. That was the first public version of this script, which tries to bring OpenWrt updates a little closer to the big distros, aiming for something similar to what we get when we do an apt dist-upgrade or the like.

I'm talking about a script that will reinstall all your packages when you do a sysupgrade of your OpenWrt to jump to the newest version, allowing you to make the jump without losing your functionality or your configs. Personally this has been the biggest problem with OpenWrt for me since I started playing with it back on the Linksys WRT, the Foneras and all those routers that now belong to our beloved museum :-) and now with openwrt-reinstall I really love to update. Please read the original article to know more about it.

Anyway, I'm writing today to announce that I have added two new parameters:

  • check compares the package list we have specified in our reinstall.conf with the current package status and tells you if you have installed any new packages, so that you can add them to reinstall.conf and have them automatically reinstalled
  • chen calls check and then calls enable, so that you get the info from check and you also enable the service

So the idea is that before you do a sysupgrade you run /etc/init.d/reinstall chen; you get the info on the packages you have added to your system since your last reinstall, and reinstall gets enabled. After you add those new packages to /etc/reinstall.conf and run sysupgrade /tmp/openwrt-whatever-sysupgrade.bin, you end up with your updated system and everything configured like it was before.

If this is your first time running reinstall and you do a "check", it will list all the packages you have manually installed, and if you do a "chen" it will also enable itself to do the reinstall of the packages you want. The list of the packages you want should go in /etc/reinstall.conf. So, after you reboot into your updated OpenWrt (24.10.2 was released last week) it will install those packages and reboot again into your updated system, which will be just like you wanted. Just make sure you update reinstall to the latest version and configure it to fit your needs ;-)
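As a rough illustration of the comparison that check makes, here is a plain-shell sketch. The package names and the sorted text files are made up for the example; the real logic lives in the reinstall script itself:

```shell
#!/bin/sh
# Sketch of the "check" idea: diff the wanted package list against what
# is actually installed. File contents are illustrative; on a real router
# the lists would come from /etc/reinstall.conf and "opkg list-installed".
wanted=$(mktemp)
installed=$(mktemp)

printf 'luci\nwireguard-tools\n' | sort > "$wanted"
printf 'luci\ntcpdump\nwireguard-tools\n' | sort > "$installed"

# comm -13 keeps only lines unique to the second file: packages that are
# installed but missing from the wanted list.
new_pkgs=$(comm -13 "$wanted" "$installed")
echo "$new_pkgs"

rm -f "$wanted" "$installed"
```

Anything this prints is a candidate to be added to /etc/reinstall.conf before the sysupgrade.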

As always, I hope you enjoy it, and if you have any suggestions you can write them in the comments or, even better, through bugs or pull requests or whatever on GitHub.

Saturday, March 12, 2022

tcpping-nmap: a substitute for tcpping based on nmap

I was about to set up tcpping-based monitoring on smokeping, but then I discovered this was based on tcptraceroute, which on Debian comes setuid root; the alternative is to use sudo, so, any way you put it... this runs with root privileges.

I didn't like what I saw, so, I said... couldn't we do this with nmap without needing root?

And so I started to write a little script that could mimic what tcpping and tcptraceroute were outputting, but using nmap.

The result is tcpping-nmap, which does exactly this. The only little thing is that nmap only outputs milliseconds, while tcpping gets down to microseconds.
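The actual script is on GitHub, but as a taste of the kind of text massaging involved, this is roughly how the latency could be pulled out of nmap's report and converted to milliseconds (the sample line below is made up for the example, not captured output):

```shell
#!/bin/sh
# nmap reports round-trip time in seconds with millisecond resolution,
# e.g. "Host is up (0.0042s latency).". A tcpping-style output wants ms.
# The line below is an illustrative sample, not real captured output.
line="Host is up (0.0042s latency)."

# Extract the seconds figure, then convert to milliseconds with awk
# (POSIX shell arithmetic has no floating point).
secs=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9.]*\)s latency).*/\1/p')
ms=$(printf '%s\n' "$secs" | awk '{ printf "%.3f", $1 * 1000 }')
echo "${ms} ms"
```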

Hope you enjoy it :-)

Monday, May 3, 2021

Windows and Linux software RAID dual boot on a BIOS machine

One might think that nowadays having a machine with software RAID doing dual boot would be easy, but... my experience showed that it is not that easy.

Having a Windows machine do software RAID is easy (I still don't understand why it doesn't really work like it should, but that is because I'm used to Linux software RAID), and having software RAID on Linux is also really easy. But doing so on a BIOS-booted machine, on MBR disks (as Windows doesn't allow GPT on BIOS), is quite a pain.

The problem is how Windows does all this, with its dynamic disks. What happens is that you go from a partitioning like this:

/dev/sda1 *        2048    206847    204800   100M  7  HPFS/NTFS/exFAT
/dev/sda2        206848 312580095 312373248   149G  7  HPFS/NTFS/exFAT
/dev/sda3     312580096 313165823    585728   286M 83  Linux
/dev/sda4     313165824 957698047 644532224 307,3G fd  Linux raid autodetect

To something like this:

/dev/sda1            63      2047      1985 992,5K 42  SFS
/dev/sda2 *        2048    206847    204800   100M 42  SFS
/dev/sda3        206848 312580095 312373248   149G 42  SFS
/dev/sda4     312580096 976769006 664188911 316,7G 42  SFS

These are the physical partitions as seen by fdisk; the logical partitions are still like before, of course, so there is no problem in accessing them under Linux or Windows. But what happens here is that Windows is using the first sectors for its dynamic disks stuff, so... you cannot use those to write grub info there :-(

So... the solution I found was to install Debian's mbr and make it boot grub. But then... where do I store grub's info? To do this I'm using a btrfs /boot which is on partition 3, as btrfs has room for embedding grub's info, and I set up the software RAID with ext4 on partition 4, like you can see in my first partition dump. Of course, you can have just btrfs with its own software RAID; then you don't need the fourth partition or anything.

There are however some caveats to doing all this. What I found was that I had to install grub manually using grub-install --no-floppy on /dev/sda3 and /dev/sdb3, as Debian's grub refused to give me the option to install there; also... several warnings came as a result, but things work OK anyway.
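For reference, the manual step boiled down to something like the sketch below. The echo acts as a dry-run guard (remove it to really run the commands, as root), and the partition names are the ones from my layout above, so adjust them to yours:

```shell
#!/bin/sh
# Dry-run sketch of installing grub into the btrfs /boot partition of each
# RAID member. "echo" only prints the commands; drop it to run them for
# real, as root. Partition names match the layout above and will differ
# on other machines.
cmds=$(for part in /dev/sda3 /dev/sdb3; do
    echo grub-install --no-floppy "$part"
done)
echo "$cmds"
```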

One more warning: I did all this on Buster, but it looks like Grub 2.04, which is included in Bullseye, has gotten a bit bigger, so at least on my partitions there was no room for it and I had to leave the old Buster grub around for now. If anybody has any ideas on how to solve this... they are welcome.

Wednesday, May 20, 2020

Working remotely on Debian

The current situation has pushed many of us into working remotely, something for which there are plenty of options; let's talk about some of them.

SSH

It is the remote access system par excellence; we have all been using it forever, or at least since security on the network started to matter. It wasn't always so: the village elders among us used telnet, but better not to talk about such recklessness, right? ;-)

The secure shell, as its name suggests, is designed to access a shell, that is, for remote access to a text-mode environment, but as you probably know it can tunnel all kinds of things, from ports to forwarding of X clients.

While SSH lets us access our X clients and bring them to the local server of our home computer, it turns out that X was not designed for client and server to be separated by the latencies of a WAN in between, so even if we have a gigabit of bandwidth our remote X applications will be very slow. So... let's see what we can use for the graphical part...

x2go

It is surely the most elaborate and most complex remote access system, and it supports more than just Linux, but... if you try it you will see that it does a lot of things on its own without telling us about them, so you can't help wondering... what is all this for? It is supported in Debian with packages both for the server and for the client, although I must say it seemed too complex and not transparent enough to me.

However, if we look under the hood we see that it uses NX technology, which I find simpler and easier to understand.

NX

This is a technology that lets us use native X applications as they are; it is based on proxying the X clients so that events suffer less latency and everything therefore runs much more smoothly. On top of that we add an X server on the remote side, which gives us better responsiveness and persistent sessions.

Using it is simple; let's see an example. The idea is that we access the remote system via ssh -L 4008:localhost:4008 host.remoto (we forward port 4008, which we will use for the NX proxy that we will run with :8, i.e. using port 4008), and there we run a proxy to which we connect from localhost through this forwarded port. That is the proxying part, but we will also add the agent, which provides a local X server on which we will launch the X applications and which also gives us a persistent session allowing us to disconnect and reconnect whenever we want. Let's see this:

Remote: nxproxy -C :8 &
Local:  nxproxy -S localhost:8 &
Remote: nxagent -display nx/:8 -geometry 1276x976 :9 &
        DISPLAY=:9 startlxqt

In the example we start lxqt, but you can start whatever you like. This session will be persistent since, as we said, we are starting an X server on the remote machine, in this case on DISPLAY :9, which the X applications will talk to. Even if we shut down the local machine, communications get cut, or whatever, we will be able to reconnect. To do so we just restart the remote and local proxy parts and then tell nxagent that we want it to send us the display again, for example with:

killall -HUP nxagent

VNC

What can one say about VNC; it has been around for many, many years.

We have the Windows-style server in the x11vnc package, which we can start like this:

x11vnc -rfbport 5900 -bg -o %HOME/.vnc/x11vnc.log.%VNCDISPLAY -rfbauth ~/.vnc/passwd -display :0

or the tigervnc-scraping-server, which like the previous one lets us access an already running X display via a VNC client, although now we also have the X extension tigervnc-xorg-extension, which gives us the same functionality in a much more efficient way. These are fine for seeing what is running on a machine's screen and, for example, offering remote assistance.

We also have tigervnc-standalone-server and tightvncserver, which let us have as many X sessions as we want (since they are not tied to our graphics card or anything) and access them remotely via VNC, and of course several VNC-specific clients such as tigervnc-viewer and xtightvncviewer, as well as others that support VNC among other protocols.

VNC's perennial handicap is that everything travels in the clear; nothing is encrypted, so it absolutely needs SSH or something similar to encrypt the data.
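A minimal sketch of that SSH wrapping, with the commands only echoed as a dry run; the host name is the illustrative host.remoto from the NX example, and display 1 on the viewer side means TCP port 5901:

```shell
#!/bin/sh
# Dry-run sketch: tunnel the VNC port through SSH and point the viewer at
# the local end of the tunnel. host.remoto is an illustrative name; the
# commands are echoed here, not executed.
tunnel="ssh -f -N -L 5901:localhost:5900 host.remoto"
viewer="xtightvncviewer localhost:1"   # display 1 = TCP port 5901
echo "$tunnel"
echo "$viewer"
```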

RDP

This protocol, designed by Microsoft, now has both servers and clients for Linux; it needs much less bandwidth than VNC and supports several kinds of encryption, both its own and even a TLS layer.

As with VNC, we also have servers for accessing an already existing X server, such as freerdp-shadow-cli in the freerdp2-shadow-x11 package. We can launch it with this command to access the X server running on :0:

DISPLAY=:0 freerdp-shadow-cli /port:12345

As you know, just like with VNC, this is very useful for remote assistance. It is worth keeping this bug in mind, though, since it makes authentication not work until we fix it, so we either recompile or add the -auth parameter, but then anyone with access to the port will be able to take control of the X session.

We also have clients such as the classic rdesktop or xfreerdp from the freerdp2-x11 package, and other clients that support VNC, RDP and more, such as GNOME's vinagre, or remmina.

But if what we want is remote access to a persistent Linux working environment, we should look at xrdp, a full RDP server giving access to as many Linux desktop sessions as we want. These sessions are persistent and we can connect to and disconnect from them whenever we like. It also supports sound, although that requires building the modules following these instructions. Sound playback is standard, so it will work with any client, but if we want to send our microphone to the server we will have to use, for example, the rdesktop package from buster (I have tried it and it works) or some other compatible client, since they made a non-standard implementation :-(

I'm not going to talk about more protocols (there are more), but I did want to mention something I find very interesting: a powerful web client for all these protocols and more...

Guacamole

This one we do not have in Debian, although there was an old packaging attempt and you can probably still find the old packages around; I don't recommend them because they have several security bugs. This client is fairly complex, with several parts; it is based on Java and needs at least a Tomcat to run the server... but in exchange we get access to RDP, VNC, SSH servers and more, with two-factor security in several styles and colors, while the client needs nothing more than a web browser, which can lighten the requirements for workers who need remote access.

Well, that's all that comes to mind right now; feel free to suggest other ideas in the comments.

Regards.

Tuesday, September 10, 2019

mdadm: how to add a disk to do a replacement

When playing with two-disk RAID 1 devices, it is typical that when one drive fails you just remove it, replace it with a new one, and that's it.
But... what if the drive that is left finds an error while you are reconstructing the array? Then... you have a problem.
So when replacing a disk that is starting to fail... it is better to use the two disks instead of just discarding one.
Luckily the Linux software RAID system allows you to add a third disk and sync the array to this disk from the other two. This way you have two disks to feed the new one, and if you are lucky enough you get a good copy of each of the sectors of the RAID 1 array to finish the job.
The commands to do this would be... first to add the new disk:
mdadm --add /dev/md1 /dev/sdc1
which just adds it as a spare, but then you tell Linux you want it to use the three disks as active disks like this:
mdadm --grow /dev/md1 -f -n 3
When this finishes you should have all three disks in the array active, or maybe you get failed drives along the way, but hopefully you end up with the new drive holding a full copy of the data, so... all you have to do is get back to the two-disk setup. For this, if you don't already have the failing drive marked as failed... you fail it:
mdadm --fail /dev/md1 /dev/sda1
and then you remove it from the array like this:
mdadm --remove /dev/md1 /dev/sda1
and put the array in two-disk mode like it was before:
mdadm --grow /dev/md1 -f -n 2
And that's it :-)
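While the grow is resyncing you can watch the progress in /proc/mdstat. As an illustration, this is how the percentage could be extracted from it; the mdstat text below is a made-up sample, not taken from a real array:

```shell
#!/bin/sh
# Made-up sample of /proc/mdstat during a rebuild; on a real system you
# would read /proc/mdstat itself instead of this sample string.
mdstat='md1 : active raid1 sdc1[2] sdb1[1] sda1[0]
      976630336 blocks [3/2] [UU_]
      [===>.............]  recovery = 17.5% (171234567/976630336) finish=93.1min'

# Pull out the percentage figure from the recovery line.
progress=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "rebuild at ${progress}%"
```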
All these commands were tested on a Debian 10 (Buster) setup; hope they help you.
Regards.

domingo, 22 de marzo de 2015

Hard asterisk times (or how SIP made me unhappy till I fixed this peer definition)

It must be that I no longer touch asterisk like I used to, since it took me a while to write a type=peer section these last few days.

On my first attempt, copying the entry from an old peer definition, I was getting a "404 Not Found" message, with the asterisk server not even trying to authenticate to the peer when I tried to make a phone call. I thought it was a problem with my asterisk not wanting to authenticate, but it wasn't. What should have happened is that asterisk should have gotten a "401 Unauthorized" message, and then it would have tried to authenticate.

After reading a lot I came to the solution by myself, comparing asterisk's messages with those of a csipsimple client that I had running; my asterisk was saying something like...

From: "Anonymous" <sip:anonymous@anonymous.invalid>;tag=xxxx

While the csipsimple client had the server (IP or hostname) specified instead of anonymous.invalid. This first problem was solved with a fromdomain=whateveryourpeerexpects line in the peer definition.

So I then got the 401 message and asterisk was trying to authenticate, but this server was expecting an authuser on the registration (what is also called the digest username). Even though the asterisk sip.conf doc has examples of how to use authuser in "register" commands, there is none explaining how to do that in a peer definition. I found a lot of docs explaining how to do it in ways that people said weren't working, patches for asterisk... I tested and tested and nothing worked.

I was almost going to go to bed again without fixing this (and it was about time, 2AM already) when I started to test things by myself and found that it was defaultuser where I should specify this authuser. In fact I had tested this already, but that was while I was getting the 404 error, so it wouldn't work until I fixed that.

If you read the sip.conf doc, you'll find that:
;defaultuser=yourusername ; Authentication user for outbound proxies
which is quite clear and is why I had tested it at first, but getting the 404 message made it not work, so in the end my peer looks like this:
[thepeer]
type=peer
host=thehostip
nat=no
disallow=all
allow=ulaw
allow=alaw
fromdomain=thehostip
defaultuser=theauthuser
fromuser=theotheruser
secret=thepassword

:-)

Sunday, April 21, 2013

Getting a file out of your squid cache

I suppose there may be some tools out there to do this, maybe even squidclient can do it, but hey... doing it with a script is always funnier, and learning how squid does things is great, so...

The first thing we must do is identify our target. If I wanted to get the Android Quadrant Standard apk file out of my squid cache, I should first look through my /var/log/squid/store.log* (I'm assuming Debian paths, as always) and look for it. As a hint, if you look for Android apks you should be looking for application/vnd.android.package-archive (which is the MIME type they use). You should find something like this:

1366393264.443 SWAPOUT 00 00009545 29D0FF2BA5A2F31424F3C49102C91657 200 1366364657 1339281848 2147368447 application/vnd.android.package-archive 1452291/1452291 GET http://r14---sn-4g57lne7.c.android.clients.google.com/market/GetBinary/com.aurorasoftworks.quadrant.ui.standard/2010100?ms=au&...

Here the fourth entry is the file name under which squid stored the content it has cached, so the path for this file would be /var/spool/squid/second_byte_of_name/third_byte_of_name, in this case... /var/spool/squid/00/95/00009545. But you won't even need to compute that: just take that fourth entry (yes, that weird number) and type it into this little script...

read filename
file="/var/spool/squid/${filename:2:2}/${filename:4:2}/$filename"
file_length=$(ls -l "$file" | cut -d " " -f5)
content_length=$(head "$file" | sed -n "s/Content-Length: \(.*\)\r$/\1/p")
dd if="$file" bs=$((file_length - content_length)) skip=1 of=/tmp/wanted.apk

and you'll get at /tmp/wanted.apk the file you wanted. The script can be written on one line if you want to have it as an alias or whatever; I just split the lines to make it more readable on the web.
The logic in the script builds the full pathname of the file from the filename you type (followed by return) on its input, then gets the length of that squid file and the content length (which is the length of the apk) from the header of the squid file, then uses these two values to compute the length of the squid header and skips it with dd, copying the rest to /tmp/wanted.apk. BTW... you may be wondering why the \r$ in the sed expression... yes, there seems to be a \r at the end of the Content-Length line. I don't know why, but it is there, at least on my system. That's it!
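The header-skipping trick is easy to verify on a synthetic file. Here is a small self-contained demo with a fake cached object; the header is made up, and I use wc -c for the file size, which is a bit sturdier than parsing ls -l:

```shell
#!/bin/sh
# Build a fake squid object: header lines ending in \r, a Content-Length,
# a blank line, then the body. Then recover the body the same way the
# script above does: header size = total size minus Content-Length.
obj=$(mktemp)
body="THIS-IS-THE-BODY"
printf 'HTTP/1.1 200 OK\r\nContent-Length: %s\r\n\r\n' "${#body}" > "$obj"
printf '%s' "$body" >> "$obj"

file_length=$(wc -c < "$obj")
content_length=$(head "$obj" | sed -n "s/Content-Length: \(.*\)\r$/\1/p")
dd if="$obj" bs=$((file_length - content_length)) skip=1 of=/tmp/recovered 2>/dev/null

cat /tmp/recovered
rm -f "$obj"
```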