Spectator-mode Notepad
Today I want to discuss one of the largest problems facing the world of security. Not an external threat, a malicious actor or some new, ground-breaking malware. Today I want to discuss an internal threat, a problem that comes from within:
Sensationalist reporting in Infosec.
Over the past few years, reporting in Infosec has started to follow the same general path as reporting in general. Factual information is hidden behind problematic communication. Professionalism has vanished, replaced by a sort of frantic nervousness that explodes with every new notification.
The core of the problem is this: Infosec reporting is driven by the same click-based metrics as every other kind of reporting. Funnel traffic to the site, get clicks, get ad revenue. News sites are operated as businesses, and businesses tend towards whatever practices pay, not whatever informs. The general air of concern about information security amplifies the effect, and we get situations like the two I am going to discuss today.
Earlier this week, another vulnerability was discovered in a commonly used Apache package: Apache Commons Text. The internet froze and collectively shat itself. After all, the last time we experienced this was Spring4Shell. Before that, Log4Shell.
There’s one difference. Neither of the subsequent [word]4Shell exploits was as widespread or dangerous as Log4Shell. Log4Shell carried a severity of 10.0, and it hit a huge number of systems. Though not all of them, by a long shot: only systems running Java could be vulnerable. As someone whose tech stack involves no Java, I ducked it completely. As I did with Spring4Shell.
The latest issue, at 9.8 severity, hardly hit anyone at all. Apache Commons Text is nowhere near as widely used as Log4j2.
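If you want a quick sanity check of whether you’re even in scope, a dependency listing is usually enough. A rough example, assuming a Maven project (Gradle has an equivalent):

mvn dependency:tree | grep commons-text
# any org.apache.commons:commons-text below 1.10.0 is in scope for CVE-2022-42889;
# no output means the library isn't in your dependency tree at all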
This highlights the issue with the current severity metrics in InfoSec reporting. Severity is not a measure of widespread impact; it is a measure of how bad things are on an affected system. Log4Shell at 10.0 was extremely dangerous on vulnerable machines. Text4Shell at 9.8 is similarly dangerous. But these metrics ignore the simple fact that Log4Shell’s impact dwarfed Text4Shell’s, even though the only measurable metric we have would suggest they are nearly equivalent.
Cybersecurity reporting needs an update, to include metrics such as potential impact on the internet as a whole, alongside the current severity.
On November 1st, 2022, OpenSSL 3.0.7 will be released to patch an issue that exists in 3.0.0 through 3.0.6.
On the surface, this is a completely innocuous notification, exactly what OpenSSL should be doing in this situation.
And then ZDNet stepped in with this blog post.
The tagline, directly below the title, is as follows:
We don’t have the details yet, but we can safely say that come Nov. 1, everyone – and I mean everyone – will need to patch OpenSSL 3.x.
Everyone. And I mean everyone. That’s a bold claim. Every single person will need to patch OpenSSL 3.x?
Let’s dive into the post. He covers how bad the issue is, citing its “Critical” severity. But as we saw before, critical is only critical on systems where OpenSSL 3.x is installed. And based on Steven Vaughan-Nichols’ claim that everyone needs to patch, that should be everywhere, right?
He then goes on to cite various other exploits, including Heartbleed from 2014 (severity 7.5, yet somehow the worst issue OpenSSL has ever experienced; call for new metrics, much?).
The fact is, most of this post is bait. Intended to drive traffic, not to be impartial information. It is business. Not news.
The post does, eventually, get to the point that only OpenSSL 3.0.0 through 3.0.6 are affected, which is the most important factor to consider.
Ubuntu, the most popular distribution of the Linux operating system, only switched to OpenSSL 3.x in the 22.04 LTS release. LTS releases are intended for a five-year lifecycle and are the most commonly used for business purposes. 18.04 and 20.04, the other LTS versions currently available (which won’t be out of support until 2023 and 2025 respectively), run OpenSSL 1.1.1 by default and are therefore unaffected. Ubuntu 22.04 was released, as the number may suggest, in April of 2022.
Red Hat Enterprise Linux tells a similar story. RHEL 9.0, the first release to ship OpenSSL 3.x by default, came out in May of this year.
What this means is that unless you upgrade your production systems within the first six months of a new operating system’s release, you are going to be fine. Assuming you haven’t manually installed 3.x for whatever reason, this “everyone must patch” vulnerability will completely pass you by.
Do you know anyone who moves Long Term Support systems to a new release within the first six months, when they have literal years to do so? Those on 18.04 will likely be pushing first to 20.04, or upgrading next year, and those on 20.04 have three years before they need to consider upgrades.
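If you’re not sure which side of the line you sit on, the check is a single command; the packaging varies by distro, but the version string is what matters:

openssl version
# anything in the 1.1.1 series is untouched by this advisory;
# 3.0.0 through 3.0.6 will need the 3.0.7 update on November 1st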
All Steven Vaughan-Nichols has achieved is to degrade infosec reporting further. He hooks people in with a frightening warning, repeats dire news from the past and, almost as an afterthought, points out that the issue is likely to be less severe than he himself suggested at the opening of the post.
Keep Calm and Carry On has been replaced with Incite Panic and Retract.
I got bored and decided to have a quick browse of my web server’s access logs.
GET /remote/fgt_lang?lang=/../../../..//////////dev/cmdb/sslvpn_websession HTTP/1.1
This one is a directory traversal attempt. From what I can see it is intended for FortiOS SSL VPNs. Which my server does not run. More info:
https://gist.github.com/code-machina/bae5555a771062f2a8225fd4731ae3f7
Then we have another directory traversal attack:
/cgi-bin/.%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/etc/hosts HTTP/1.1
This one is directed at Apache. Which my server does not run. Details:
https://blogs.juniper.net/en-us/threat-research/apache-http-server-cve-2021-42013-and-cve-2021-41773-exploited
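If you fancy hunting for these in your own logs, a rough grep catches both of the patterns above. This assumes nginx with its default access log location; adjust the path for your setup:

grep -E 'fgt_lang|%2e%2e|\.\./' /var/log/nginx/access.log
# fgt_lang        -> the FortiOS SSL VPN probe
# %2e%2e and ../  -> encoded and plain directory traversal attempts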
The larger batch of log entries below, binary junk included, seems to be part of a zgrab scan. zgrab is a tool for fast, large-scale scanning of web infrastructure.
144.126.214.96 - - [30/Sep/2022:01:50:29 +0000] "\x05\x01\xF4GY\xB3\xBF\x169\x88\xD0\x92$\xD5<\x89" 400 157 "-" "-"
144.126.214.96 - - [30/Sep/2022:01:50:30 +0000] "GET /ab2g HTTP/1.1" 404 125 "-" "Mozilla/5.0 zgrab/0.x"
144.126.214.96 - - [30/Sep/2022:01:50:31 +0000] "GET /ab2h HTTP/1.1" 404 125 "-" "Mozilla/5.0 zgrab/0.x"
144.126.214.96 - - [30/Sep/2022:01:50:35 +0000] "GET / HTTP/1.1" 200 725 "-" "Mozilla/5.0 zgrab/0.x"
144.126.214.96 - - [30/Sep/2022:01:50:35 +0000] "GET / HTTP/1.1" 400 255 "-" "Mozilla/5.0 zgrab/0.x"
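Scanners like this are easiest to spot by user agent. A quick-and-dirty one-liner, assuming the stock combined log format where the user agent is the sixth quoted field:

awk -F'"' '{print $6}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
# counts requests per user agent; zgrab, l9tcpid and friends float straight to the top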
Another interesting set of attack attempts. Some obfuscated directory traversal again, alongside probes targeting PHP, which has never been on my box.
161.35.188.242 - - [30/Sep/2022:00:26:30 +0000] "GET / HTTP/1.1" 400 157 "-" "-"
161.35.188.242 - - [30/Sep/2022:00:26:58 +0000] "GET / HTTP/1.1" 200 1329 "-" "l9tcpid/v1.1.0"
161.35.188.242 - - [30/Sep/2022:00:27:00 +0000] "PUT /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1" 404 125 "-" "Go-http-client/1.1"
161.35.188.242 - - [30/Sep/2022:00:27:02 +0000] "GET /cgi-bin/.%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/etc/hosts HTTP/1.1" 400 157 "-" "-"
161.35.188.242 - - [30/Sep/2022:00:27:02 +0000] "GET /cgi-bin/.%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/etc/hosts HTTP/1.1" 400 157 "-" "-"
161.35.188.242 - - [30/Sep/2022:00:27:03 +0000] "GET /.DS_Store HTTP/1.1" 404 125 "-" "Go-http-client/1.1"
https://www.cvedetails.com/cve/CVE-2017-9841/
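For what it’s worth, checking whether that PHPUnit file is even present on a box is a one-liner; this assumes a webroot somewhere under /var/www:

find /var/www -path '*phpunit*' -name 'eval-stdin.php' 2>/dev/null
# no output means the CVE-2017-9841 target simply doesn't exist here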
All of these attacks are shots in the dark: random probes fired at a random system (or more likely hundreds, thousands, a hundred thousand random systems) in the hope that something might stick.
We’ve exited the age of reconnaissance and entered the age of high volume, the age of randomly (or methodically) targeted attacks.
It has become more effective to start by throwing the kitchen sink.
After the disappointment of the first day, I came to the table bright-eyed and bushy-tailed for the second day, which was much the same. Though it’s not a particularly bad thing that the group is filled with Finns. This is Finland, and it makes sense that Finns would comprise the majority of the group. It’s just not what I was expecting.
When I dropped my expectations, things seemed better. The group is made up of generally sensible people, myself excluded. There’s a wide range of both security knowledge and general experience, which makes for a more diverse group than 35 supernerds sitting in the same room.
There are downsides, too. The first class is about security management systems. Which I maintain professionally. I was given the option to have the course recognized as prior learning, though I don’t think I’ll take it. I don’t know everything about management systems, and I’m a little concerned that it’ll end up like my Bachelor’s Degree, where I didn’t learn anything new, just had things I already knew reinforced.
But the positives outweigh the downsides. Not only the schooling, the idea of getting a slightly higher education, but also the monthly excuse to travel. To spend a couple of nights in a city instead of my small town. To ride a train. I like trains.
It’s fine to be a student again.
I better get started on my thesis.
I tend to prefer to study in English. This is because I work in IT, and English is the generally accepted language used worldwide for work in this sector. Regardless of the country, the company, or the area they serve, people speak English. Except when they’re German, for whatever reason.
In my Bachelor’s Studies, my class was mostly foreign students who had come to study in English. There were a few native Finnish speakers, but the class was largely made up of students from various Asian and African countries. Chinese, Nepalese, Nigerian, and Ethiopian students were common, and the environment was rich and multicultural.
Now I’ve started my Master’s Studies, another curriculum that is intended to be taught in English. Yet somehow, the class is 90% Finnish men.
This is not the kind of environment I was expecting. But we’ll see how things go as we progress.
I did it!
I managed to excise that damn Hacker theme, finally. Now I have a new theme, albeit with a weird bar at the top left over from the original theme’s download section. But it’s movement.
Edit: I managed to remove the bar by adding some override CSS.
Now I’m happy.
Though I was still stupid, committing all these changes to Master. Again. After realising that was a mistake the first time. And it worked this time, so it only encourages me not to be sensible in the future, too.
But that’s a problem for future me.
Present me is pleased with my new layout.
After breaking everything, I figured out a simple way to force the git history back to the last known good commit.
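For the record, the recovery looked roughly like this. The commit hash is whatever your last good state was, and force-pushing rewrites history for anyone else using the repository, so treat it as a last resort:

git log --oneline                 # find the last known good commit
git reset --hard <good-commit>    # move the local branch back to it
git push --force origin master    # overwrite the remote history to match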
It is now functioning again. Remind me not to try changing the theme.
Running locally turned out to be a bit more trouble than expected, too. Lots of problems installing themes and getting pages to display in even a vaguely sensible fashion.
I think this whole thing is a problem with my misuse of Git. See, this sort of thing would have made sense to roll out to a staging branch first. But not me, I’m the Rambo of computers. I push straight to Master!
Because I’m dumb.
If I attempt this again (and despite my warnings from not 12 seconds ago, I probably will, and soon), I should definitely create a separate branch to work on.
I learned something about Jekyll and about Gemfiles that might help. If I’m right, replacing the theme in both the Gemfile and the _config.yml file should work.
And if I’m wrong, assuming I use branching properly, it won’t matter.
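A sketch of what that safer attempt could look like, using the stock minima theme purely as an example:

git checkout -b theme-test        # experiment on a branch instead of Master
# Gemfile:      gem "minima", "~> 2.5"
# _config.yml:  theme: minima
bundle install                    # pull in the new theme gem
bundle exec jekyll serve          # preview locally before merging anything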
So the reset proved more complicated than expected, as these things always seem to be with my favourite distro.
I run Debian. I like it because I consider it secure. Safe.
And yet, every time I attempt… well, pretty much anything, I run into issues with versions of assorted software being far out of date.
This time it was Ruby, which sits at 2.5.5 on Debian, while the latest stable 2.x is 2.7.6 and the 3.x line has reached 3.1.2.
Last time it was Python, with Python 3.9 unavailable on Debian.
And to be honest, this happens every time.
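For reference, seeing the gap takes two commands on an apt-based system:

ruby --version          # what is actually installed
apt-cache policy ruby   # the newest version Debian's repositories will offer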
I’ll spin up an Ubuntu VM on my Proxmox server and start again from there.
The instructions for running Jekyll on GitHub Pages are somehow quite… bad. Following the instructions exactly gives you a barely functioning website, and attempting to change the theme through GitHub’s theme chooser breaks functionality completely.
I’m wondering why I started with GitHub Pages in the first place. I used to host my blog on my server.
I could do that again.
I could also host it locally and just reverse proxy from my server through to the local website.
Ehh, that didn’t work. The commit history is too convoluted to tell exactly where I messed things up and made changing the theme impossible.
I’m going to try to fully rebuild.
When I put my blog together, I attempted to screw around with the theme. I don’t know much about Ruby, or Jekyll, or themes. It did not go well. Not even slightly. But after some wrestling, I finally got it working.
Today I felt like changing the theme, which, thanks to being hosted on GitHub Pages, should have been as easy as a single click. It was not.
In my earlier tinkering I’d somehow embedded the theme deep in the site, and every attempt to change it… well… failed pretty badly.
The first few failed to yield any results.
The last removed any kind of theme from the page completely.
So now I get to try to fix everything.
I could try to revert to a sensible theme and then re-push the blogs.
Which I might try.