Delta provides language syntax-highlighting, within-line insertion/deletion detection, and restructured diff output for git on the command line.
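A minimal way to try it, assuming delta is installed and on your PATH, is to make it git's pager in ~/.gitconfig (both settings come from delta's documentation):

    [core]
        pager = delta
    [interactive]
        diffFilter = delta --color-only

With that in place, git diff, git show, and git log -p are all rendered through delta, and git add -p picks it up via the diffFilter setting.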
A cat(1) clone with syntax highlighting and Git integration.
bat supports syntax highlighting for a large number of programming and markup languages. Other features include:
- Git integration
- Show non-printable characters
- Automatic paging
- File concatenation
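A quick feel for those features, assuming bat is installed (on Debian/Ubuntu the binary may be named batcat):

    $ bat src/main.rs            # syntax highlighting plus git modification markers in the gutter
    $ bat -A config.txt          # --show-all: make tabs, spaces, and line endings visible
    $ bat header.md body.md      # concatenate files, just like cat(1)

Output is paged automatically when it does not fit on one screen, and bat falls back to plain cat behaviour when its output is piped or redirected.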
That the Tyrolean authorities did not react immediately when they were informed by the Ministry of Health at the end of February about the events in Ischgl proves, in the view of FPÖ state party leader Markus Abwerzger, the total failure of the Tyrolean state government. "When will Bernhard Tilg and state director of public health Franz Katzgraber finally vacate their posts, after everything they have caused in recent months? They should finally go and take responsibility; there is no need for commission reports from the Tyrolean state parliament," said Abwerzger.
In the night leading into Tuesday, the upper chamber then also lodged its objection to the amendments to the Epidemics Act (Epidemiegesetz) that had been passed by the National Council. Both the SPÖ and the FPÖ feared a restriction of fundamental rights and civil liberties.
Health Minister Rudolf Anschober (Greens) explained at a press conference today that the app would remain a "voluntary option" for contact tracing. Interior Minister Karl Nehammer (ÖVP) likewise said that "voluntariness" was "the imperative here".
Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality.
Kelsey Hightower, cloud native advocate and evangelist at Google
The proc filesystem is an important feature of Linux that you can't ignore. proc is a pseudo or virtual filesystem that provides an interface to kernel data structures. In other words, proc isn't an actual filesystem in the real-world sense; rather, it resides only in memory and not on a disk. It is automatically mounted by the system.
Most of its contents are regular files and directories, so you can use most regular Linux tools to navigate the proc filesystem. The examples in this article should run the same on any Linux distribution.
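A few one-liners show the idea, using nothing beyond standard tools (all paths are standard proc entries):

    $ ls /proc                   # numbered directories correspond to running process IDs
    $ cat /proc/cpuinfo          # the kernel's view of your CPUs
    $ cat /proc/meminfo          # current memory statistics
    $ cat /proc/self/status      # status of the process reading the file, i.e. cat itself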
Did you ever want to match a regex, but all you had was a fat32 driver? Ever wanted to serialize your regex DFAs into one of the most widely supported formats used by over 3 billion devices? Are directory loops your thing?
Worry no more, with regex2fat this has become easier than ever before! With just a little regex2fat '[YOUR] F{4}VOUR{1,7}E (R[^E]G)*EX HERE.' /dev/whatever, you will have a fat32 regex DFA of your favourite regex. For example, to see whether the string 'Y FFFFVOURRE EX HEREM' would match, just mount it and check if '/Y/SPACE/F/F/F/F/V/O/U/R/R/E/SPACE/E/X/SPACE/H/E/R/E/M/MATCH' exists.
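That check, spelled out as a sketch for Linux (device and mount point names follow the example above):

    $ regex2fat '[YOUR] F{4}VOUR{1,7}E (R[^E]G)*EX HERE.' /dev/whatever
    $ sudo mount -o ro /dev/whatever /mnt
    $ test -e '/mnt/Y/SPACE/F/F/F/F/V/O/U/R/R/E/SPACE/E/X/SPACE/H/E/R/E/M/MATCH' \
        && echo match || echo no match
    $ sudo umount /mnt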
If you’re trying to learn Docker you will first have to master its various terminal commands. This guide aims to help you get started with basic docker commands.
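The usual starting set looks something like this (image and container names are just examples):

    $ docker pull nginx                            # fetch an image from the registry
    $ docker run -d -p 8080:80 --name web nginx    # start a container in the background
    $ docker ps                                    # list running containers
    $ docker logs web                              # read a container's output
    $ docker exec -it web sh                       # open a shell inside it
    $ docker stop web && docker rm web             # stop and remove it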
systemd has become a mainstay for the Linux world, but one of the things that still seems to stick around is cron jobs. It’s understandable, as cron is a tool that we have been using for a long time. Change is hard, but I think systemd Timers make the change well worth it. Here are a few reasons why…
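For comparison, a cron line like '0 2 * * * /usr/local/bin/backup.sh' becomes a pair of units; the names and schedule here are illustrative:

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Run the nightly backup

    [Timer]
    OnCalendar=*-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with 'systemctl enable --now backup.timer' and inspect it with 'systemctl list-timers'. Persistent=true alone justifies the switch for many people: a run missed while the machine was off is executed at the next boot, which plain cron cannot do.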
A large majority of computer systems have some state and are likely to depend on a storage system. My knowledge of databases accumulated over time, but along the way our design mistakes caused data loss and outages. In data-heavy systems, databases are at the core of system design goals and trade-offs. Even though it is impossible to ignore how databases work, the problems that application developers foresee and experience will often be just the tip of the iceberg. In this series, I'm sharing a few insights I found especially useful for developers who are not specialized in this domain.
I've now learned that grep can, halfway through grepping in a file, think the file is suddenly binary and stop returning results.
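GNU grep's binary heuristic can be overridden when this bites (both flags are long-standing grep options):

    $ grep -a pattern file                     # -a: process a binary file as if it were text
    $ grep --binary-files=text pattern file    # the equivalent long form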
xsv is a command line program for indexing, slicing, analyzing, splitting and joining CSV files. Commands should be simple, fast and composable:
Simple tasks should be easy.
Performance trade-offs should be exposed in the CLI interface.
Composition should not come at the expense of performance.

On any given day, we handle around 15% of daily retail trading volume across all stock exchanges in India. Billions of requests generated in the process are handled by a suite of systems we have built in-house. We are also very particular about self-hosting as many dependencies as possible, everything from CRMs to large databases, Kafka clusters, mail servers, etc.
To aid these primary systems, a large number of ancillary workloads run alongside them, covering everything from real-time trades, document processing, KYC and account opening, legal and compliance, and complex, large-scale P&L number crunching, to a wide range of back-office workloads. The systems are spread across a hybrid setup: physical racks across two different data centres (where the exchange leased lines terminate) and AWS. All of this means that we have a lot of dynamic workloads and dissimilar systems and environments, from bare metal to Kubernetes clusters, each of which has to be monitored independently.
The first and second open source migration waves were periods of rapid expansion for companies that rose up to provide commercial assurances for Linux and the open source databases, like Red Hat, MongoDB, and Cloudera, or for platforms that made it easier to host open source workloads in a reliable, consistent, and flexible manner via the cloud, like Amazon Web Services, Google Cloud, and Microsoft Azure.
This trend will continue in the third wave of open source migration, as organizations interested in reducing cost without sacrificing development speed will look to migrate more of their applications to open source. They’ll need a new breed of vendor—akin to Red Hat or AWS—to provide the commercial assurances they need to do it safely.
I’ve been writing about running Docker on Raspberry Pi for five years now and things have got a lot easier than when I started back in the day. There’s now no need to patch the kernel, use a bespoke OS, or even build Go and Docker from scratch.
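Today the whole setup fits in a few commands, assuming a current Raspberry Pi OS image; the install script is Docker's official convenience script, and 'pi' stands in for whatever your user is called:

    $ curl -sSL https://get.docker.com | sh
    $ sudo usermod -aG docker pi       # run docker without sudo (takes effect on next login)
    $ docker run --rm hello-world      # verify with a multi-arch test image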
Ncdu is a command line tool to view and analyse disk space usage on Linux. It can drill down into directories and report the space used by individual directories, which makes it very easy to track down space-consuming files and directories, and much faster than a GUI file manager. On a server, of course, GUI tools are usually not present anyway.
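Typical invocations (-x is a standard ncdu flag):

    $ ncdu /home          # interactively browse usage under /home
    $ sudo ncdu -x /      # scan the root filesystem without crossing into other mounts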
The decision in 2017 to move back to a monolith considered all the trade-offs, including being comfortable with losing the benefits of microservices. The resulting architecture, named Centrifuge, is able to handle billions of messages per day sent to dozens of public APIs. There is now a single code repository, and all destination workers use the same version of the shared library. The larger worker is better able to handle spikes in load. Adding new destinations no longer adds operational overhead, and deployments only take minutes. Most important for the business, they were able to start building new products again. The team felt all these benefits were worth the reduced modularity, environmental isolation, and visibility that came for free with microservices.
SSHHeatmap
Generates a heatmap of IPs that made failed SSH login attempts on Linux systems, using /var/log/auth.log to get the failed attempts. Uses the ipinfo.io service to fetch the IP address coordinates, and folium to generate the heatmap.
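The raw input it works from can be previewed with standard tools; a rough sketch of that first step (the awk field position assumes the default OpenSSH log format):

    $ grep 'Failed password' /var/log/auth.log \
        | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn

This lists the offending IP addresses with a count of failed attempts per address, which is essentially what the script then geolocates and plots.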
The xpipe command reads input from stdin and splits it by a given number of bytes or lines, or wherever the input matches a given pattern. It then invokes the given utility repeatedly, feeding it the generated data chunks as input.
You can think of it as a Unix love-child of the split(1), tee(1), and xargs(1) commands.
Its usefulness might best be illustrated by an example. Suppose you have a file 'certs.pem' containing a number of x509 certificates in PEM format, and you wish to extract e.g. the subject and validity dates from each.
The openssl x509(1) utility can only accept a single certificate at a time, so you'll have to first split the input into individual files each containing exactly one cert, then repeatedly run the x509(1) command against each file.
And, let's be honest, you probably have to google how to use sed(1) or awk(1) to extract subsequent blocks from a flip-flop pattern.
xpipe(1) can do the job for you in a single command:
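A sketch of that single command, assuming xpipe's -p option takes the pattern to split on as its manual describes (verify the flags against your installed version):

    $ <certs.pem xpipe -p '^-----END CERTIFICATE-----$' \
        openssl x509 -noout -subject -dates

Each chunk ends at the PEM END line, so every invocation of openssl x509(1) sees exactly one certificate.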