CentOS 8 End of Life: upgrade to CentOS Stream

CentOS 8 reached End of Life on December 31, 2021, and the official mirrors no longer provide any packages. Here is how to upgrade to the latest release of CentOS 8 and then switch to CentOS Stream.

As always, prior to any system change, you should ensure you have a working recent backup.

Upgrade to latest CentOS 8

# mirrors no longer serve CentOS 8 packages: point yum at the vault archive instead
sed -i -e 's/mirrorlist/#mirrorlist/g' -e 's|#baseurl=http://mirror.centos.org|baseurl=https://vault.centos.org|g' /etc/yum.repos.d/*.repo

yum update

reboot

cat /etc/centos-release
CentOS Linux release 8.5.2111

Make sure everything is working as expected.

Switch to CentOS Stream

# repoint any repo files still referencing mirror.centos.org (a no-op if you just did the step above)
sed -i -e 's/mirrorlist/#mirrorlist/g' -e 's|#baseurl=http://mirror.centos.org|baseurl=https://vault.centos.org|g' /etc/yum.repos.d/*.repo

dnf install centos-release-stream

dnf swap centos-linux-repos centos-stream-repos

dnf distro-sync

reboot

cat /etc/centos-release
CentOS Stream release 8

Make sure everything is working as expected.

You’re done!


Elasticsearch in Docker: threat intelligence with filebeat

Goals:

  • collect observables from supported feeds
  • collect observables from unsupported feeds with elastic-tip

Setup elasticsearch and kibana for filebeat

We could use the elastic superuser to set up filebeat, but we are going to use a dedicated user with just the minimum permissions.

Open Kibana and go to Stack Management > Security > Roles. Click Create role and enter the following settings:

  • Role name: filebeat_threatintel_setup
  • Cluster privileges: monitor, manage_ilm, manage_ml
  • Index privileges:
      • Indices: filebeat-*
      • Privileges: manage, write, read

Click Create role.

Go to Stack Management > Security > Users. Click Create user and enter the following settings:

  • Username: filebeat_threatintel_setup
  • Roles: filebeat_threatintel_setup, kibana_admin, ingest_admin, machine_learning_admin

Click Create user.

Now let’s set up the index, index templates, dashboards and so on. We do that by running filebeat setup once. We attach it to the elastic network and pass it the root CA, the username and password of the user we just created, and the index name and policy.

⚠️ One important thing to know: when you run the filebeat setup command, it imports ALL available dashboards, even those you do not care about, and even if you specify --modules on the command line. You can find several posts and issues on the subject.

If you just want to load the threat intel dashboards, you need to make all the other dashboards unavailable to filebeat setup. You can either download the dashboards from github and save them in a directory named dashboards, or copy them out of the filebeat image with the following commands:

# in a first terminal
docker run -it --rm --name ti docker.elastic.co/beats/filebeat:7.16.3 bash

# in a second terminal
mkdir dashboards

for i in \
    Filebeat-threatintel-abuse-url.json \
    Filebeat-threatintel-abuse-malware.json \
    Filebeat-threatintel-alienvault-otx.json \
    Filebeat-threatintel-anomali.json \
    Filebeat-threatintel-malwarebazaar.json \
    Filebeat-threatintel-misp.json \
    Filebeat-threatintel-overview.json \
    Filebeat-threatintel-recordedfuture.json
do
  docker cp ti:/usr/share/filebeat/kibana/7/dashboard/$i dashboards/
done

When the files have been copied, you can exit the first container.

Now run the filebeat setup command:

docker run --rm \
-v elastic_certs:/certs:ro \
--network elastic_default \
docker.elastic.co/beats/filebeat:7.16.3 setup \
-E 'output.elasticsearch.hosts=["https://es:9200"]' \
-E output.elasticsearch.ssl.certificate_authorities=/certs/ca.crt \
-E output.elasticsearch.username=filebeat_threatintel_setup \
-E output.elasticsearch.password=password \
-E setup.kibana.host=kibana:5601 \
-E setup.kibana.protocol=https \
-E setup.kibana.ssl.certificate_authorities=/certs/ca.crt \
-E setup.kibana.username=filebeat_threatintel_setup \
-E setup.kibana.password=password \
-E setup.ilm.policy_name="7-days-default"
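
The command above still sees the full dashboard set shipped in the image. To make filebeat setup import only the dashboards you copied earlier, one option (my assumption, not shown in the original command) is to bind-mount the dashboards directory over the container's default dashboard directory:

# same command, with the copied dashboards mounted over the full set
docker run --rm \
-v "$(pwd)/dashboards:/usr/share/filebeat/kibana/7/dashboard:ro" \
-v elastic_certs:/certs:ro \
--network elastic_default \
docker.elastic.co/beats/filebeat:7.16.3 setup \
-E 'output.elasticsearch.hosts=["https://es:9200"]' \
-E output.elasticsearch.ssl.certificate_authorities=/certs/ca.crt \
-E output.elasticsearch.username=filebeat_threatintel_setup \
-E output.elasticsearch.password=password \
-E setup.kibana.host=kibana:5601 \
-E setup.kibana.protocol=https \
-E setup.kibana.ssl.certificate_authorities=/certs/ca.crt \
-E setup.kibana.username=filebeat_threatintel_setup \
-E setup.kibana.password=password \
-E setup.ilm.policy_name="7-days-default"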

Send data to Elasticsearch

Create another user with just enough permissions to send data to Elasticsearch. Open Kibana and go to Stack Management > Security > Roles. Click Create role and enter the following settings:

  • Role name: filebeat_threatintel_writer
  • Cluster privileges: monitor, read_ilm, read_pipeline, manage_ingest_pipelines, manage_index_templates
  • Index privileges:
      • Indices: filebeat-*
      • Privileges: create_doc, view_index_metadata, create_index

Click Create role.

⚠️ The documentation says to only grant the cluster privileges monitor, read_ilm and read_pipeline. However, if you do not also grant manage_ingest_pipelines and manage_index_templates, you will encounter connection issues.

Go to Stack Management > Security > Users. Click Create user and enter the following settings:

  • Username: filebeat_threatintel_writer
  • Roles: filebeat_threatintel_writer

Click Create user.

Create a file named filebeat.docker.yml with the following content.

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

output.elasticsearch:
  hosts: ${ELASTICSEARCH_HOSTS}
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  ssl:
    certificate_authorities: ["/certs/ca.crt"]

setup.ilm.check_exists: false

monitoring:
  enabled: true
  elasticsearch:
    username: '${MONITORING_USERNAME}'
    password: '${MONITORING_PASSWORD}'

You can find the base file for Docker on github. The complete filebeat.yml reference is available on the Elastic website.

Retrieve default threat intel configuration file

docker run --rm docker.elastic.co/beats/filebeat:7.16.3 cat modules.d/threatintel.yml.disabled > threatintel.yml

Edit the newly created file to enable, disable and customize the supported feeds.
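
For example, to keep only the abuse.ch URL feed, the abuseurl block in threatintel.yml looks roughly like this (values recalled from the 7.16 module defaults; double-check against the file you just extracted):

- module: threatintel
  abuseurl:
    enabled: true
    var.input: httpjson
    var.url: https://urlhaus-api.abuse.ch/v1/urls/recent/
    var.interval: 10m
  # disable the feeds you do not want, e.g.:
  abusemalware:
    enabled: false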

Create docker-compose.yml file with the following content:

version: '3'

services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.16.3
    restart: always
    env_file:
      - ./.env
    networks:
      - elastic_default
    volumes:
      - ./filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./threatintel.yml:/usr/share/filebeat/modules.d/threatintel.yml:ro
      - elastic_certs:/certs:ro
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1000M
    memswap_limit: 1000M

networks:
  elastic_default:
    external: true

volumes:
  elastic_certs:
    external: true

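The compose file loads a .env file whose contents are not shown above; at minimum it must define the variables referenced in filebeat.docker.yml. A sketch, assuming beats_system (the stack's built-in monitoring user) for monitoring and placeholder passwords:

ELASTICSEARCH_HOSTS=https://es:9200
ELASTICSEARCH_USERNAME=filebeat_threatintel_writer
ELASTICSEARCH_PASSWORD=password
MONITORING_USERNAME=beats_system
MONITORING_PASSWORD=password
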
Start the container: docker compose up

You should not see any errors, and if you go to Kibana, you should see documents in the filebeat index as well as data on the various threat intel dashboards.


Elasticsearch in Docker: quick notes

Goals:

  • single node elasticsearch
  • single node kibana
  • password for all accounts
  • https between all components
  • behind traefik
  • future post: collect network logs (routers)
  • future post: collect application logs (web servers, dns servers, docker)
  • future post: collect application metrics
  • future post: correlate with threat intelligence

Create compose file

version: '3'

services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    container_name: elastic_es
    restart: always
    env_file:
      - ./.env
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      node.name: "es"
      discovery.type: "single-node"
      bootstrap.memory_lock: "true"
      # minimal security
      xpack.security.enabled: "true"
      # no encryption on internode communication
      xpack.security.transport.ssl.enabled: "false"
      # https traffic
      xpack.security.http.ssl.enabled: "true"
      xpack.security.http.ssl.key: "${CERTS_DIR}/es.key"
      xpack.security.http.ssl.certificate: "${CERTS_DIR}/es_chain.crt"
      xpack.security.http.ssl.certificate_authorities: "${CERTS_DIR}/ca.crt"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - reverseproxy
      - default
    volumes:
      - data:/usr/share/elasticsearch/data
      - certs:${CERTS_DIR}:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elastic.rule=Host(`elasticsearch.foobar.com`)"
      - "traefik.http.routers.elastic.service=elastic"
      - "traefik.http.routers.elastic.tls=true"
      - "traefik.http.routers.elastic.tls.certresolver=le"
      - "traefik.http.routers.elastic.entrypoints=websecure"
      - "traefik.http.services.elastic.loadbalancer.server.port=9200"
      - "traefik.http.services.elastic.loadbalancer.server.scheme=https"
      - "traefik.http.services.elastic.loadbalancer.serversTransport=elastic"
      - "traefik.http.serversTransports.elastic.serverName=es"
      - "traefik.http.serversTransports.elastic.insecureSkipVerify=true"
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 4000M
    memswap_limit: 4000M

  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.3
    container_name: elastic_kibana
    restart: always
    depends_on:
      - es
    env_file:
      - ./.env
      - ./.env.kibana
    environment:
      - ELASTICSEARCH_URL="https://es:9200"
      - ELASTICSEARCH_HOSTS=["https://es:9200"]
      # minimal security: defined in environment files
      # kibana has to trust elasticsearch certificate
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES="${CERTS_DIR}/ca.crt"
      # https traffic between other components and kibana
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_KEY=${CERTS_DIR}/kibana.key
      - SERVER_SSL_CERTIFICATE=${CERTS_DIR}/kibana_chain.crt
      - SERVER_PUBLICBASEURL=https://kibana.foobar.com
    networks:
      - reverseproxy
      - default
    volumes:
      - certs:${CERTS_DIR}:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana.rule=Host(`kibana.foobar.com`)"
      - "traefik.http.routers.kibana.service=kibana"
      - "traefik.http.routers.kibana.tls=true"
      - "traefik.http.routers.kibana.tls.certresolver=le"
      - "traefik.http.routers.kibana.entrypoints=websecure"
      - "traefik.http.services.kibana.loadbalancer.server.port=5601"
      - "traefik.http.services.kibana.loadbalancer.server.scheme=https"
      - "traefik.http.services.kibana.loadbalancer.serversTransport=kibana"
      - "traefik.http.serversTransports.kibana.serverName=kibana"
      - "traefik.http.serversTransports.kibana.insecureSkipVerify=true"
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 4000M
    memswap_limit: 4000M

volumes:
  data:
  certs:
    name: elastic_certs
    external: true

networks:
  reverseproxy:
    external: true

Create a file named .env with the following content:

COMPOSE_PROJECT_NAME=elastic
CERTS_DIR=/usr/share/elasticsearch/config/certificates

Create SSL CA and certificates for Elasticsearch and Kibana

Create a volume named elastic_certs, create the certificates using a temporary container and change the ownership:

docker volume create elastic_certs
docker run -it --rm -v elastic_certs:/certs alpine:3.15.0 sh

apk update && apk add openssl && cd /certs

# create the CA
openssl req -x509 -nodes -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -keyout ca.key -out ca.crt -days 3652 -subj "/C=LU/ST=Luxembourg/L=Luxembourg/O=Xentoo/CN=elastic_ca" -extensions v3_ca

# create csr for elastic node
openssl req -nodes -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -keyout es.key -out es.csr -subj "/C=LU/ST=Luxembourg/L=Luxembourg/O=Xentoo/CN=es" -addext "subjectAltName=DNS:es" -extensions v3_req

# sign the certificate
openssl x509 -req -days 3652 -CA ca.crt -CAkey ca.key -CAcreateserial -extfile <(printf "subjectAltName=DNS:es") -in es.csr -out es.crt

# repeat for kibana
openssl req -nodes -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -keyout kibana.key -out kibana.csr -subj "/C=LU/ST=Luxembourg/L=Luxembourg/O=Xentoo/CN=kibana" -addext "subjectAltName=DNS:kibana" -extensions v3_req
openssl x509 -req -days 3652 -CA ca.crt -CAkey ca.key -CAcreateserial -extfile <(printf "subjectAltName=DNS:kibana") -in kibana.csr -out kibana.crt

# create certificate chains
cat es.crt ca.crt > es_chain.crt
cat kibana.crt ca.crt > kibana_chain.crt

# change ownership of certificates for elasticsearch & kibana
chown 1000:1000 es* kibana*

exit

In a production environment, you should sign the certificates with your existing CA, but for testing, this is enough.
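
If you want to sanity-check the certificates, you can do so from the same temporary container before exiting (plain openssl commands, not part of the original walkthrough):

# verify the leaf certificates against the CA
openssl verify -CAfile ca.crt es.crt kibana.crt
# confirm the SAN made it into the certificate
openssl x509 -in es.crt -noout -text | grep -A1 "Subject Alternative Name"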

Define passwords for default users

Start elasticsearch node: docker compose up es -d

Set the passwords for the default users:

docker exec -it elastic_es elasticsearch-setup-passwords auto --url https://es:9200 --batch

Save the passwords in a safe location; you will need them later to connect the various components to Elasticsearch.

Configure Kibana

Create a file named .env.kibana with the following content, using the kibana_system password from the previous step:

ELASTICSEARCH_USERNAME=kibana_system
ELASTICSEARCH_PASSWORD=password

Start kibana container: docker compose up kibana -d

At this point, you should be able to connect to Kibana with username elastic.
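
As a quick end-to-end check through Traefik (assuming DNS for elasticsearch.foobar.com points at Traefik and the Let's Encrypt certificate has been issued):

curl -u elastic https://elasticsearch.foobar.com
# expect the JSON banner with cluster name and version after entering the elastic password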


Traefik reverse-proxy with ModSecurity

Traefik itself does not include WAF capabilities. If you want to add them, you can replace Traefik with Apache httpd or nginx coupled with ModSecurity; however, you then lose Traefik's autoconfiguration.

Fortunately, Alexis Couvreur has developed a ModSecurity plugin for Traefik that forwards requests received by Traefik to another webserver (running ModSecurity) before actually forwarding them to the application server. If the ModSecurity webserver returns a status code of 400 or above, Traefik rejects the request; otherwise it forwards it to the application server.

The suggested setup uses the owasp/modsecurity-crs image for ModSecurity, and since that image acts as a reverse proxy, it needs a backend: the well-known containous/whoami image, which is lightweight and always returns a 200 status code.

The setup I decided to use is identical, with the addition of SSL between the components and multiple WAF containers depending on their intended use (paranoia level, detection only, different rules, etc.).

SSL certificate

Let’s first create the SSL certificate. Since this is a test environment, a self-signed certificate is fine. For production use, I recommend signing the certificate with your existing CA. The common name matches the value of the environment variable SERVER_NAME. The v3_req extensions are included to generate a server certificate instead of a CA certificate.

openssl req -x509 -nodes -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -keyout server.key -out server.crt -days 3650 -subj "/C=US/ST=California/L=Los Angeles/O=Foobar/CN=waf" -addext "subjectAltName=DNS:*" -extensions v3_req

Traefik needs to trust this certificate, so we need to create a custom image. Create a Dockerfile in traefik directory with the following content:

FROM traefik:2.5.7

ADD server.crt /usr/local/share/ca-certificates/server.crt

RUN update-ca-certificates

In your docker-compose.yml file, replace image: traefik:2.5.7 with build: traefik and build the container image with docker compose build traefik.
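
The relevant fragment of docker-compose.yml then looks like this (a sketch; the rest of your traefik service definition stays unchanged):

services:
  traefik:
    build: traefik
    # ...command, labels, ports, etc. as before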

ModSecurity plugin

Since this relies on a Traefik plugin, you will need a Traefik Pilot token. Once you have one, pass it to your Traefik container using the environment variable PILOT_TOKEN. I use a .env file with the following content:

PILOT_TOKEN=token

Then, add the following items to the command of your Traefik container:

- "--pilot.token=${PILOT_TOKEN}"
- "--experimental.plugins.traefik-modsecurity-plugin.modulename=github.com/acouvreur/traefik-modsecurity-plugin"
- "--experimental.plugins.traefik-modsecurity-plugin.version=v1.0.3"

Then add the following label to your Traefik container:

- "traefik.http.middlewares.waf.plugin.traefik-modsecurity-plugin.modSecurityUrl=https://waf:443"

WAF service

Let’s add the WAF service:

  waf:
    image: owasp/modsecurity-crs:apache
    environment:
      - PARANOIA=1
      - ANOMALY_INBOUND=10
      - ANOMALY_OUTBOUND=5
      - BACKEND=https://dummy
      - SERVER_NAME=waf
    volumes:
      - ./server.key:/usr/local/apache2/conf/server.key:ro
      - ./server.crt:/usr/local/apache2/conf/server.crt:ro

Notice the BACKEND variable matches the dummy container name.

Dummy service

The suggested configuration uses containous/whoami, but I have decided to use nginx instead. The main reason is stability: I have had some issues with containous/whoami; sometimes it crashed for no apparent reason. We are going to replace the nginx default configuration file with our own and pass it the SSL certificate:

  dummy:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./server.key:/certs/server.key:ro
      - ./server.crt:/certs/server.crt:ro

Paste the following in a file named nginx.conf:

user  nginx;
worker_processes  auto;

error_log  stderr notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        return 200 'OK\n';
        access_log off;

        ssl_certificate /certs/server.crt;
        ssl_certificate_key /certs/server.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:MozSSL:10m;
        ssl_session_tickets off;
        ssl_protocols TLSv1.3;
        ssl_prefer_server_ciphers off;
        add_header Strict-Transport-Security "max-age=63072000" always;
    }
}

This configuration enables SSL and makes nginx reply with status code 200 regardless of the request.

Restart the Traefik stack using docker compose up -d.

Add WAF to an app

Edit docker-compose.yml for an app you want to protect and add the following label:

- "traefik.http.routers.myapp.middlewares=waf@docker"

Restart the app stack.

Validate ModSecurity rules

Call your app normally first; you should not experience any errors.

Then add ?test=../ to the URI, and you should receive a status code 403 Forbidden.
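
For example, with curl (myapp.example.com is a placeholder for your app's hostname):

# normal request: should go through to the app
curl -i https://myapp.example.com/

# path traversal attempt: should be blocked by the WAF
curl -i 'https://myapp.example.com/?test=../'
# expect: HTTP/2 403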

Adding more WAF services

We have now seen how to run a single WAF container. To add another one, you need to:

  1. in docker-compose.yml, copy/paste the waf service, rename it to your liking (e.g. wafparanoia4) and adapt the environment variables (e.g. PARANOIA=4)
  2. in docker-compose.yml, add to Traefik a new HTTP middleware matching your new WAF service and pointing to the new WAF container (see the sketch after this list)
  3. in the docker-compose.yml of the app to be protected by this new WAF service, add a label to use the middleware you just created
  4. restart your Traefik stack
  5. restart your app stack
  6. profit!
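
A sketch of steps 2 and 3, using the hypothetical wafparanoia4 service name (myapp is a placeholder router name):

# label on the Traefik container: a second middleware pointing at the new WAF service
- "traefik.http.middlewares.wafparanoia4.plugin.traefik-modsecurity-plugin.modSecurityUrl=https://wafparanoia4:443"

# label on the app container: use the new middleware
- "traefik.http.routers.myapp.middlewares=wafparanoia4@docker"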

Some thoughts

Since the request is forwarded to a dummy container, only the request is actually analyzed. If the request passes the WAF checks, it goes to your app server. But if the response from your app contains something ModSecurity would normally block, it will not be blocked here.

If you need to analyze the response as well, then I think you should not use this plugin, but instead add an owasp/modsecurity-crs container in the app stack and use it as the backend for Traefik.

Another solution could be to use a single owasp/modsecurity-crs container and overwrite the config file conf/extra/httpd-vhosts.conf to specify your own backends.

That’s it folks.


Backup gitea container

Gitea is great when you want fast, light and yet user-friendly git repositories. Alternatives include Gogs, Gitlab or even Github.

Gitea documentation tells you to use docker exec to perform a backup. However, this prevents you from using an additional volume to dump the backup into.

Instead, I prefer a similar command based on docker run. Assuming the following:

  • the container network is called gitea_default; you only need this if you use an external database such as MySQL
  • the container is called gitea
  • the backup directory is in the current directory and named backups

docker run --rm -it --network gitea_default --volumes-from gitea --volume $(pwd)/backups:/backups --user git --workdir /backups --entrypoint '/app/gitea/gitea' gitea/gitea:1.15.10 dump -c /data/gitea/conf/app.ini
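
For regular backups, you can schedule the same command with cron; drop the -it flags since there is no terminal, and adapt the (hypothetical) /srv/gitea path:

# run nightly at 03:00
0 3 * * * cd /srv/gitea && docker run --rm --network gitea_default --volumes-from gitea --volume $(pwd)/backups:/backups --user git --workdir /backups --entrypoint '/app/gitea/gitea' gitea/gitea:1.15.10 dump -c /data/gitea/conf/app.ini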


Applying Audit Policies

If, like me, you are trying to enable Audit Policies on Windows computers in a domain using Local Policies > Audit Policy and it does not work, then you have come to the right place.

Legacy Audit Policy: audit object access settings in Local Security Policy

The reason is: that is the legacy way to configure Audit Policies. Like Windows XP legacy.

You will find plenty of resources out there telling you this is because Advanced Audit Policy is enabled, and that you need to disable it by setting Local Policies > Security Options > Audit: Force audit policy subcategory settings to override audit policy category settings to Disabled. While it is true that disabling the Advanced Audit Policy will make it work, doing so reverts to the old, non-granular way of configuring Audit Policies.

You are now supposed to use Advanced Audit Policy Configuration. And by now, I mean since Vista.

Instead of setting Audit Object Access to Success and/or Failure, you can now granularly choose which object types you want to audit: file shares, file system, registry, and so on.

In your GPO or Local Security Policy, scroll to the bottom of the list and you will see a dedicated folder called Advanced Audit Policy Configuration, with many categories and, in each of them, many settings you can control independently.

Advanced Audit Policy: items in the Object Access category

Now, if you apply the policy using gpupdate /force and check it using auditpol /get /category:*, you should see the individual items change.
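
For a quick local experiment, you can also flip a single subcategory directly with auditpol and read it back (a local change only; the next policy refresh may overwrite it):

auditpol /set /subcategory:"File System" /success:enable /failure:enable
auditpol /get /subcategory:"File System"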

As a reminder, you can check which GPO applies which setting using gpresult /h report.html. You need to be an Administrator to view the Computer configuration.


A Raspberry Pi, a UPS and a couple of ESXi servers walk into a bar

If you have multiple servers connected to a UPS, you probably need to shut them down when the power goes out, and before the UPS runs out of juice. Unless your UPS can be connected to the network, you can usually only connect a single device to it, using good old serial or brand new USB. That single host then knows about the UPS status, but what about all the other systems? That's where Network UPS Tools, aka NUT, comes into play.

NUT comes with a server and a client. You install the server on the device connected to the UPS using serial or USB (or even the network). You install the client on all the other devices.

We will install the server on the Raspberry Pi and the client on the ESXi servers.

Raspberry Pi

I will assume the connection is USB. On the Raspberry Pi, run the following as root:

apt-get install nut nut-client nut-server
nut-scanner -q -N -U > /etc/nut/ups.conf
# assumption: on Debian-based systems NUT also needs its mode set before the server starts
echo "MODE=netserver" > /etc/nut/nut.conf
echo "LISTEN 0.0.0.0 3493" > /etc/nut/upsd.conf
# the MONITOR directive belongs in upsmon.conf, not on the command line
echo "MONITOR nutdev1@localhost 1 master s3cr3tp4ssw0rd master" >> /etc/nut/upsmon.conf

Write the following into /etc/nut/upsd.users:

[master]
    password = s3cr3tp4ssw0rd
    actions = SET
    instcmds = ALL
    upsmon master

[esxi]
    password = s3cr3tp4ssw0rd
    # assumption: remote upsmon clients need the slave role to log in
    upsmon slave

Restart all services:

systemctl restart nut-driver
systemctl restart nut-server
systemctl restart nut-client
systemctl restart nut-monitor
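
At this point, you can already query the UPS locally with upsc, which ships with nut-client:

upsc nutdev1@localhost
# expect a list of UPS variables such as battery.charge and ups.status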

ESXi hosts

Download the binaries from rene.margar.fr/2012/05/client-nut-pour-esxi-5-0/ and copy them to your ESXi host(s).

Configure the host to accept community packages: esxcli software acceptance set --level=CommunitySupported

Extract the file: tar -xzvf NutClient-ESXi-<version>.tar.gz

Install the package: ./upsmon-install.sh

Edit advanced system settings and set the following variables (at least):

  • /UserVars/NutUpsName : nutdev1@raspberrypi-ip-address
  • /UserVars/NutUser : esxi
  • /UserVars/NutPassword : s3cr3tp4ssw0rd

You also need to specify how long the ESXi host will wait before it shuts itself down with the following variable:

  • /UserVars/NutFinalDelay : 5 (default value)

If you want email alerts, the package also provides additional /UserVars variables for that; check its documentation for the exact names.

Then, go to the services in the Web UI, edit the startup policy to “start and stop with the host”, and start the service immediately.

Validate the setup

On the Raspberry Pi, use tcpdump to capture packets on port 3493; you should see your ESXi hosts talking to the NUT server, asking for the UPS status, and the Raspberry Pi answering.
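
For example (replace eth0 with the actual interface name):

tcpdump -ni eth0 port 3493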

In addition, you should perform a real test by unplugging the power supply of the UPS and checking that the ESXi hosts shut themselves down. You will probably want to tune the NutFinalDelay variable based on your UPS capacity and load.


Running a PKI using Smallstep certificates with Docker

Recently, I had to set up a new PKI. I was going to go with good old OpenSSL, but it's 2021: there must be a more user-friendly and, more importantly, more automated approach.

There are many open-source options: EJBCA, cfssl, Hashicorp Vault, Smallstep Certificates. I chose Smallstep certificates because it has all the features I need, and they are not behind a paywall:

  • lightweight: small Go binary, you can run it with a file-based database (similar to SQLite)
  • user friendly CLI: compared to openssl commands
  • ACME protocol: useful for Traefik reverse proxy
  • OIDC authentication
  • support: the guys are super friendly and available on their Discord channel
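To give an idea of how lightweight it is, here is a minimal sketch of bootstrapping step-ca with the official Docker image (names are placeholders; the DOCKER_STEPCA_INIT_* variables are documented by Smallstep):

docker volume create step
docker run -d --name step-ca \
  -v step:/home/step \
  -p 9000:9000 \
  -e "DOCKER_STEPCA_INIT_NAME=Example CA" \
  -e "DOCKER_STEPCA_INIT_DNS_NAMES=localhost,ca.example.com" \
  smallstep/step-ca
# the generated admin password is printed in the logs on first start
docker logs step-ca 2>&1 | grep -i password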

Computer case: Antec NX800 mounting tips

If you plan to buy the Antec NX800 for your new build, you should be aware of a couple of things.

First, it is one of the cheap-ish cases that support a 280mm radiator at the top. This is the primary reason I bought this case.

Second, if you mount a radiator at the top, mount it last. In particular, mount it after you have screwed in the motherboard and plugged in all the cables (especially CPU power and fans). Accessing them with the radiator mounted will be difficult or even impossible.

Finally, while you can turn the RGB LEDs on the fans on and off with the push of a button, you cannot do the same with fan speed. Fans connected to the controller will run at max speed, and some may find that quite loud.

Apart from that, the case seems solid and will most likely survive many builds. Enjoy.


Tango Luxembourg using private IP addresses for Fiber internet access

When I moved to Luxembourg, I subscribed to Tango Luxembourg Fiber internet access. Back then, I got the usual dynamic public IP address “for free”. It changed every 36 hours, but at least it was a public one.

Recently, I changed my subscription to the 1 gigabit/s offer, and soon after, I realized my VPNs and 6to4 tunnel were not working anymore.

After a brief troubleshooting session, I found out I was receiving a private IP address instead of the usual public 94.252.x.x.

A bit of googling later, I found out I was not the only one complaining about it.

Before I switched, I had read their service descriptions and did not find any mention of it in any document. Their offer page does not explicitly mention it; they even go as far as to say:

No hidden conditions. Once you have chosen your connection speed, surf and download without limit.

Their service description, however, mentions that “dynamic public IP address” is optional, but you have to look for it.

Honestly, I have to say I am disappointed by such a poor customer service. I guess that is the world we live in now.

Anyway, new customers beware: if you want/need a public IP address, you will have to pay for it.
