Why am I doing this - or - Content is king

I have been a VR enthusiast since the early days, when VR was nothing more than two tiny cathode ray tubes firing their β-rays directly at your eyes.
Well, they weren't actually firing directly at your eyes. There was an equally tiny screen involved that offered a resolution of no more than 320x240 on most devices.

But what was true for those unquestionably experimental first devices is still true today:
The acceptance of these devices does not depend so much on the quality of the hardware, but on the availability of content that really makes a difference.
If software, and especially games, do not deliver an experience that you just can't have at a desktop, then VR is still nothing more than a high-priced gimmick.
Today's VR headsets already deliver a stunning resolution of above 1600x1600 pixels per eye, in which you can get perfectly lost in what you're seeing - provided that what you are experiencing is something a flat 2D screen (or even a high-priced 3D monitor that feels more or less like an aquarium) just can't provide.

OK, so much for my enthusiasm. Good VR content that entertains you for longer than five minutes per game is rare, and I have the feeling that the big content makers are waiting for almost everybody to have a VR headset at home before they really start working for these devices - totally ignoring the fact that almost no one will buy a headset if there is nothing to do with it.

We really need to get started in creating VR content.
So I finally started to try rendering VR content in Blender.

While 360° pictures are nice to look at, especially if you want the feeling of really standing in some interesting place and being able to look around freely, 360° vision is not that great if you want to visualize something like a 3D model.
You'd always have to have something to look at behind you for the rare occasion that the user turns fully around.
Providing real 360° vision in 3D is even harder, because you can't just take, say, four 3D shots of an environment and stitch them together. You'll get weird 3D stitching artifacts that break the immersion. This is at least true for real videos shot with real lenses.
There are ways to create correct 360° 3D images and videos in Blender's Cycles renderer, but I'm not going to cover that here, for the one reason that motivated me to create VR180 content with Blender's Eevee:

Blender's Cycles is horribly slow!

The problem with rendering video content for VR is that you don't prepare exactly what the user will see; instead you need to prepare everything that the user may be looking at.
The VR180 video format puts some constraints on the user's freedom to move their head that have proven to be acceptable.

You still get the immersion of being able to direct your vision by slightly moving your head, but you're forced to stay within the 180° in front of the camera.

This enables content creators to really focus on one specific setting, and on the other hand creators can now record such scenes with a physical camera without having to apply weird hacks to remove themselves and everything else behind the camera from the scene.

In particular, you can record such a scene with just two physical or virtual lenses.
The fact that you can't really turn your head 180° while watching such a video quite nicely masks the errors you get in the 3D projection because the two lenses always point to the front.

This is also true for your eyes. If you could somehow look 90° to the left without turning your head, for example, your eyes would line up, with the left eye in front of the right eye, providing no 3D vision at all.

The angle you're free to move your head is effectively 180° minus the field of view of your headset, which leaves something around 60° for most devices.

So much for the basics - let's get back to Blender

The reason why we probably don't want to use Cycles is that the resolution we're going to need is laughably high. 4K is already the lowest resolution you'll probably want to use, because it needs to hold all 180° of vision per eye. Projecting both eyes into one frame effectively halves that resolution in one direction of your choice, which will typically be the horizontal one.
So we'd chop our field of vision out of the already halved resolution and then reproject it for the eyes.

In the end 4k will leave us with a resolution that is already lower than what could possibly be displayed per eye on today’s devices.

That's why 5.7K (5760x2880) became the de facto standard for today's VR180 video, as it leaves some room to crop out your field of vision and reproject it for the eyes.
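To put rough numbers on that, here is a back-of-the-envelope sketch of mine (not from any spec; the ~110° headset field of view is just an assumed value):

    # Rough per-eye pixel budget for side-by-side VR180 footage.
    # The headset FOV of 110 degrees is an assumption, not a spec.
    def visible_pixels(frame_width, headset_fov_deg=110):
        per_eye = frame_width / 2             # side by side: half the width per eye
        per_degree = per_eye / 180            # that half has to cover 180 degrees
        return per_degree * headset_fov_deg   # pixels that actually fill the headset

    print(visible_pixels(3840))   # 4K   -> ~1173 px across the visible field
    print(visible_pixels(5760))   # 5.7K -> ~1760 px across the visible field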

Rendering a resolution that high with a moderate quality setup in Blender's Cycles typically takes something around a minute per frame on today's hardware using the GPU.

Trying to evade that, I came up with three setups, all of which had their pros and cons. Two of them are closely related.

  1. Use Eevee's default camera with a 160° FOV and reproject that using Cycles
  2. 180° mirrors
    2.1. Orthographic projection
    2.2. Classic camera with a 90° FOV

I will give every setup its own headline to dive into the details. You should be familiar with normal 3D rendering, as I am not going to cover that here.

Use Eevee's default camera with a 160° FOV and reproject that using Cycles

What sounds rather odd at first actually works quite well.
You just set up your scene as you normally would. If you want to render a 2D preview of what you're going to get as a result, set your camera to a 90° FOV and test.

If you have a player that supports normal (non-VR) side-by-side video, it's a good idea to first render one of those classic SBS 3D videos to make sure your 3D camera settings suit your scene.
I'm talking about virtual eye distance and things like that (again: you should already know about this).

If you're satisfied with what you see, widen the field of view to 160° while keeping your 3D settings. This time your output should not be side by side, but one video per eye.
I rendered into squares of 4096x4096.
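For reference, the relevant settings can also be applied from Blender's Python console, roughly like this (just a sketch; exact property and engine names vary between Blender versions, and the interocular distance is a placeholder value):

    import bpy, math

    scene = bpy.context.scene
    scene.render.engine = 'BLENDER_EEVEE'    # engine identifier in the 2.8x series
    scene.render.resolution_x = 4096         # square frames, one per eye
    scene.render.resolution_y = 4096

    cam = scene.camera.data
    cam.type = 'PERSP'
    cam.angle = math.radians(160)            # 160 degree field of view

    # Let Blender's stereoscopy support render both eyes in one go,
    # written out as one file per eye.
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'
    scene.render.image_settings.views_format = 'INDIVIDUAL'
    cam.stereo.interocular_distance = 0.065  # placeholder eye distance in meters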

The trick of the hybrid setup is that we are going to use Cycles to do the actual equirectangular projection.

I started with a unit cube. Then I scaled it so that two of its sides took up an angle of 160° viewed from the center.

Now when you put a 360° 2D camera into the center of that volume, it will see two big faces, each covered by a 160° field of view, with 20° remaining for the small sides.

This adds up to a 360° spherical view.
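A quick sanity check on the proportions (my own reading of the setup, not figures from the scene file): if each big face sits at a distance $d$ from the camera, its edges line up at $\pm 80^{\circ}$ once its half-width is

    $w/2 = d \tan 80^{\circ} \approx 5.67\,d$

so the scaled cube ends up roughly 5.7 times wider than it is deep, and the 20° left over on each side falls exactly onto the small faces.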

So all we have to do now is take our pre-rendered (by Eevee) squares and put them on the big faces as textures.
Set the shader to emission and set every bounce count in Cycles' light path settings to zero.
We don't need light bounces here - every incoming ray of light is exactly that: the incoming ray of light we want. Nothing else.
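Set up from Python, that reprojection scene might look roughly like this (only a sketch: the image path is a placeholder, the material still has to be assigned to the big faces, and the panorama property lives in a slightly different place depending on the Blender version):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.samples = 1           # one ray per pixel is all we need
    scene.cycles.max_bounces = 0       # no indirect light at all

    # Emission-only material showing one pre-rendered eye square.
    mat = bpy.data.materials.new("eye_projection")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//left_eye_0001.png")   # placeholder path
    emit = nodes.new("ShaderNodeEmission")
    out = nodes.new("ShaderNodeOutputMaterial")
    links.new(tex.outputs["Color"], emit.inputs["Color"])
    links.new(emit.outputs["Emission"], out.inputs["Surface"])
    # assign with big_face_object.data.materials.append(mat)

    # Equirectangular camera in the center of the helper volume.
    cam = scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'   # pre-3.0 property location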

Set up like that, Cycles can be surprisingly fast. Unfortunately, it's kind of unusable for anything besides pre-lit scenes this way.

But we still have a setup that is faster than directly rendering the scene with a VR180 3D camera setup in Cycles, even though we now have two render passes instead of one.

The downside of this setup is that we do not get a full 180° field of view. You can't set up a 180° FOV camera in Eevee - that's impossible with a rectilinear projection.
You also wouldn't be able to place an equirectangular camera inside a helper volume in Cycles anymore, because the volume would need to be zero units thick. There is no room for anything else if you are surrounded by two faces, each of them taking up 180° of your vision.

Effectively this vision (let's call it VR160) comes close to what most VR180 videos actually are. Even Google's own VR180 Creator fades out towards the borders with something that might as well be 10° of your FOV in each direction.

Unfortunately there is another bad thing about the intermediate projection using Eevee:
A wide-angle rectilinear projection of more than 90° greatly overemphasizes things close to the border while scaling down the things in the center of the projection at the same time. This might be a cool effect for some use cases, but in our case it means we need a resolution high enough that whatever sits in the center of the projection is still visible with an acceptable amount of detail.
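To get a feel for how quickly this gets out of hand (a back-of-the-envelope note of mine, not a measurement from the renders): a rectilinear camera maps a ray at angle $\theta$ from the optical axis to the image coordinate $x \propto \tan\theta$, so the number of pixels spent per degree grows like

    $dx/d\theta \propto \sec^{2}\theta$

which at $\theta = 80^{\circ}$ is roughly 33 times the value at the center. In other words, the 160° intermediate render spends most of its pixels on the outermost few degrees, while the center, which the viewer looks at most of the time, gets comparatively few.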

Knowing these unpleasant limitations of Eevee's camera, I tried to trick it into rendering what I want ...

... by the use of mirrors!

Given that my scene is divided by a plane (a 360° environment split into halves) and the edge of my mirror lies within that plane, I want the mirror to reflect everything cut by that plane to the edge of my image, with the rest of that half of space being projected into the image the way a 180° fisheye would do it.
A fisheye projection is actually sufficient for VR180, as the format has an option for it. Uploaded to YouTube, these videos are silently converted to equirectangular in the background, and Google's VR180 Creator accepts them as input.

The best mirror for this job would probably be something parabolic, but I wanted to spare myself those calculations and went for something that could be worked out with plain trigonometry.

So I went for cutouts of a reflective ball. I tested two types of surfaces; unfortunately neither gave perfect results. I guess this has something to do with how the exact reflections at the surface of the ball are modeled. The amount of distortion goes down the higher the resolution of your sphere is, but I found that at some point a high-amplitude but low-frequency distortion is more appealing to the eye than a low-amplitude, high-frequency one.

The same applies to the two primitive sphere types Blender has to offer: the ico sphere, consisting of a number of equally sized triangles, and the UV sphere with its quadrilateral faces, whose size varies with their distance from the poles.

I ended up using the ico sphere, because the reflections (and the distortions) at its surface were more appealing to me.

The resulting image does not only depend on the type of mirror in use, but also on the camera.
If one requirement of the projection is that things lying on the dividing plane are projected to the edge of my image, then effectively a ray at the edge of the image, projected through the camera, must hit the edge of the mirror at an angle that reflects it into the dividing plane.

Let that sink in. It boils down to the fact that every rectilinear camera, and even the orthographic camera, is suitable for this purpose, as long as the mirror fits the camera.

For the sake of simplicity I started with the orthographic camera.

The Orthographic Camera

The orthographic camera can be modeled as parallel rays of light inside a cuboid volume. There is no such thing as a field of view. I thought this might simplify things.

The first thing I’d need to do is make my camera exactly the size of the mirror, so the mirror fills the cuboid projection volume.
Now I need to calculate the angle of the mirror surface at its very edge, so that a parallel light ray from the camera will be reflected into the dividing plane.

That's an easy one: it's 45°. My mirror ball must be cut at the place where its (smaller) angle to the middle axis of my projection is 45°; a bit of trigonometry then gives the cut's distance from the ball's center for a given radius (a short sketch follows below).
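Here is my reading of that bit of trigonometry (my own sketch, not the original calculation): a ray arriving parallel to the axis has to be deviated by $90^{\circ}$ into the dividing plane, so the law of reflection demands a surface normal halfway in between, tilted $45^{\circ}$ against the axis. On a sphere the normal is radial, so the cut runs through the points at polar angle $45^{\circ}$, which for radius $r$ lie at

    $r \cos 45^{\circ} = r/\sqrt{2} \approx 0.71\,r$

from the center along the axis (the cut circle itself has the same radius, $r \sin 45^{\circ}$).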

The rectilinear 90° camera

Another quite simple case would be a camera with a 90° field of view. A camera like that spans a cone with at most 45° to its middle axis. So a ray at the edge will not hit the mirror's edge parallel to the middle axis, but at exactly 45°.
The angle between the incoming ray from the camera and the ray reflected by the mirror into the plane is no longer 90° but 135° (90° + 45°).

As a result the calculations for the required mirror become a little more complicated.

I do not know why this results in exactly half the angle I needed for the orthographic case (probably there is an easy explanation; a sketch of one follows below).

This time the cut has to be made at a different distance from the ball's center.
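The "exactly half" observation does seem to have an easy explanation; here is my own sketch of one. The surface normal at the mirror's edge has to bisect the reversed incoming ray and the outgoing ray. Measured from the projection axis (counting the outgoing direction as positive), the reversed edge ray sits at $-\alpha$, with $\alpha$ the camera's half field of view, and the outgoing ray at $+90^{\circ}$, so the cut runs through the points at polar angle

    $\theta_{cut} = (90^{\circ} - \alpha)/2 = 45^{\circ} - \alpha/2$

which gives $45^{\circ}$ for the orthographic case ($\alpha = 0^{\circ}$) and $22.5^{\circ}$ for the 90° camera ($\alpha = 45^{\circ}$): exactly half.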

Conclusion

Each of the methods I tried to get VR180 out of Eevee has the potential to replace a real equirectangular (or at least fisheye) camera. The first method still needs an additional Cycles pass, and both of the mirror setups introduce distortions.

I can't even tell which one of the latter I like more. Their distortions seem to be quite comparable, even though they differ in the details. So if the goal is a geometrically correct projection, I'd probably still go for the two-pass solution.


Recent x86_64 Debian Package

installation

Why this ... again?

After I had updated my blog from the somewhat outdated Ubuntu 14.04 LTS to 18.04 LTS, I noticed that the OpenSMTPD package is bugged, again.
While the installation succeeds and the mail server starts up as expected, it dies (without any error message) as soon as a secure connection is made.
The restart mechanism doesn't help here either - the mail server is not being restarted. Even if it were, one could only receive mail that comes in without any SSL being used during transmission.

This seems to be of little use nowadays and I don’t expect anybody (hopefully) to be willing to use it that way.

Therefore I have built myself a working replacement.

Which version is this?

It's a modified version of the original package, the one you would obtain by issuing the good old apt source command.

Surely there are alternatives I might as well have used, but that didn't matter to me right now.

I adopted the fix from: OpenSMTPD GitHub

The author doesn't build a package there; instead he compiles from source directly on the affected server and just runs the result.
That leads to a situation where the programs are installed to the wrong paths and from then on there are two versions in the system. You'd need to activate the new one by hand, and the path to the config file will be different, too.
Thus the usual way of getting an official fix via a normal package update would be blocked.

That's why I decided not to follow these instructions and built a full package instead.

After applying the patch (only one file has to be changed), the package can be built with the usual tools.
The next update should then be able to replace the temporary fix without any user interaction being required.


Preface

The images here are currently all 8192 pixels wide. This is a problem for many mobile devices.

Possible solutions would be:

  • scale all images down to less than 4096x4096 pixels
    • as a result, a lot of detail is lost, which would then also be missing on the more powerful computers
  • do something with tiles
  • use another JS library for the viewer
    • I don't like that either, because the better viewers have many times more dependencies, and this is supposed to remain a minimalist blog (WordPress wasn't thrown out for nothing)

What I mean to say is: I am aware of problems displaying this post on mobile devices.
Changing that would degenerate into quite a bit of tinkering with the images, and I would also have to make sure that the metadata in the images, which makes the 360° panorama possible, stays consistent.
All of the images can also be viewed on Google.
Anyone browsing on a mobile device probably has one leg in the Google universe anyway and won't be bothered by this detour.

What is it about

For some time now I've been stitching 360° panoramas. I find it kind of relaxing.

Besides, it has the nice effect that with a set of pictures I can really capture the mood of a scene. I like being outdoors, so it usually happens that I have no real subject (like a certain building, an object, or, yuck, a person) that I want to capture, but rather the impression of an entire environment.

So here is the first piece.

I approached this picture completely unprepared. It was created on November 26th, 2017 at dusk.
That was a problem. But more on that later.
Since I didn't know which angle a single shot would cover but wanted to shoot a complicated HDR series right away, it took me almost 45 minutes in the end until I had all the pictures I thought I needed. In the meantime, the lighting had changed a lot.
The images could no longer be easily converted to HDR and stitching also became a problem.
I had approached the matter with an "a lot helps a lot" attitude and had more than eight single exposures, each with bracketing, that I wanted to combine.
This is complete nonsense with a viewing angle of over 140°.
The necessary estimate of the lens parameters is never so precise that the images are 100% congruent in all areas. Due to the mass of pictures and the large viewing angle, up to four pictures always overlapped, which the stitcher simply did not want to bring into alignment. It only got better by chance, when I limited the pictures to four rows in order to get a faster result for a test.

Since then I have known: Too much overlap is not a good thing.

I ran into further problems because it is not the lens that sits on the axis of rotation, but the center axis of the tripod. For objects that are close, this causes strong parallax shifts in the background objects they occlude.

Then there was the discoloration from the slowly advancing dusk. What a joke.
I shot the last pictures more or less in the dark with long exposures. You couldn't see anything anymore.


Link to the picture

Position on Google Maps

You will surely notice that a yawning black hole gapes both on the ground and in the sky. In later shots I was at least able to solve the problem with the sky, since another shot pointing vertically upwards can be inserted surprisingly easily.
On top of that, because it overlaps with the taller objects in the panorama, such a shot even helps to stabilize the position of the remaining images, since it usually overlaps with all of them.
The problem with the ground is different: that's where I'm standing. That's where the tripod is. That's where the other things I have with me are.
To get a picture of it, I would have to memorize the spot and then hold the camera with the monopod as precisely as possible over this position. Maybe I'll try that one day.

Off to the cathedral!

What do you pick when you want to photograph something that the local viewer can identify with? The cathedral, of course.
Probably the most photographed building in Magdeburg. But how do you set yourself apart from all the other shots when everyone in Magdeburg is probably carrying their own pictures of the cathedral around somewhere on their phone?
To make stitching easier for myself, I decided on a night shot, so that as few moving objects as possible would scurry through the area being captured.
Incidentally, this is always a problem with 360° shots that consist of several images. Where you can still remove disturbing objects from normal photos by choosing the angle of view, this is no longer possible with 360° panoramas. Everything that is there is also in the picture.
With real 360° cameras that capture the whole scene in a single shot, you yourself are always in the picture.
Which, as a die-hard selfie fan, I of course find really great.

Back to the topic: on November 30th, 2017, shortly after midnight, I rushed to the cathedral and tried to get the shots for my cathedral picture as quickly as possible, not least because it was quite cold.
For the last shot I got to wait a few minutes, because a young couple decided not only to drive around the Domplatz once, but then also had to walk diagonally across it, passing a few meters from me.
Unfortunately, I again forgot to take the extra shot of the sky for this picture. This turned out to be particularly annoying later, as the upper edge could no longer be trimmed straight due to the sheer size of the cathedral.
If I had tried to trim the edge so that at least the unsightly, but at least round, black hole appears at the top of the projection, I would have had to cut off the tip of the cathedral.
So I had to crop higher and now have a black hole with jagged edges.
Beautiful.


Link to the picture

Position on Google Maps

The muddy southern tip

The next subject I chose was the southern tip of the Rothehorn city park. Here, too, I had it in the back of my mind to choose a subject that most viewers would know. Again, a subject that is overrun by people during the day and can therefore only be photographed without people at night.
True to the motto that something always goes wrong, this time I had captured the sky for the first time, so that you don't immediately hit an edge when looking up, but I still had the aperture set to 5.6 from previous shots. Since I was using a purely mechanical lens, the camera couldn't warn me about this stupidity. The fact that I had to expose each picture for almost 20 seconds, in order not to add even more noise at a tolerably low ISO, didn't make me suspect that something was wrong here.
At least it couldn't get really muddy. It was much too cold that night (November 1st, 2017) and the ground was frozen.
If I had noticed the mistake with the aperture earlier, I might have gotten a clearer picture and the stars would not have left any trails.
By the way: star trails are a really great thing when stitching. Especially when they are in a completely different place in the next picture.


Link to the picture

Position on Google Maps

The black Kruger

Or also: Finally something simple

This picture was taken on a bike tour on 03/11/2018. A blissful -13 °C guaranteed the absence of people.
At least apart from the few who drive their SUV into the backcountry at the weekend, looking for a justification for having bought such a nonsensical vehicle.
"The black Kruger" is probably not the correct title, as that name more likely refers to an inconspicuous pool between the trees further back in the picture. Otherwise there is nothing interesting in this picture. I just liked the lighting.
I had to realize that this would become a problem when stitching, because in reasonably clear weather the sun is always somewhere in a 360° panorama - and it was already quite low above the horizon.
Speaking of the horizon: it somehow wasn't possible to get it halfway straight in this picture. There was always a curve somewhere. Maybe that's just how it is at that spot, but I probably just didn't pick the correct curvature as a reference.


Link to the picture

Position on Google Maps

same day, a few kilometers further

A little later, just as I was about to do a little loop through the Kreuzhorst, I got quite lost. In other words, the path I was using simply ended in the middle of the forest.
I could have turned back and continued somewhere else. Stubborn as I am, I preferred to heave the bike onto my back and trudge straight ahead through the forest. In doing so I found this spot, which I find very interesting.
The absence of snow and ice masked the fact that it was probably even a little colder here than in the previous picture, so the place was only moderately inviting for a break.
Still, there was enough time for a picture, and for the first time I set everything up in such a way that in the end I am satisfied with the result myself.


Link to the picture

Position on Google Maps

it is also less time-consuming

In the meantime I have bought a real 360° camera. Not because I expected to take better pictures with it (on the contrary), but because I can use it to shoot 360° panoramas without a tripod or other equipment.
The photo spheres that today's smartphone cameras can stitch together live were too flawed for me, and I didn't really go to great expense either.
For me, as already mentioned, a huge disadvantage: you are always in the picture yourself.
In defense of the technology, it should be said that the more expensive models, which don't come as a smartphone attachment but bring their own hardware, do produce slightly better images.
What I find fascinating about the technology is that the lenses used in most of these cameras have a viewing angle of over 180°; otherwise there would be no overlap with only two lenses. In principle, more than two lenses would be desirable, as there is a clearly visible ring around the image where the overlaps meet but neither lens delivers much detail any more because of the extreme optics.

By the way: for the same reason, I use four images to get all the way around in the panoramas shot with the classic camera, instead of the three that would, purely mathematically, be perfectly sufficient.
The distortion in the edge areas is so strong, however, that the image would then have a very uneven sharpness of detail.

Not that this picture has any sharpness of detail anywhere anyway.
But it is probably the only one that can also be displayed on mobile devices, because its resolution is correspondingly lower.


Link to the picture

Position on Google Maps

and again Kreuzhorst

With this picture I actually wanted to document the suddenly emerging spring - but there is not much to see yet (25.02.2018). Maybe the grass is a little greener than in the weeks before.
A week later, however, the trees were full of leaves. I was probably just too early.
With this picture you can see that it is quite difficult to get a coherent white balance in 360° panoramas. Something always goes wrong.
In fact, our eyes and brain adjust the balance so quickly, depending on where we are looking, that we seldom notice differently colored light.
If, on the other hand, you have strongly colored light in one area of a panorama, you can either leave it that way, which leads to strange color gradients at different zoom levels, or you try to correct it and use a reasonably appropriate white balance globally.
The sunset colored parts of this picture a deep orange, but none of that is visible on the opposite side of the picture.
Globally, this creates a rather artificial color impression.


Link to the picture

Position on Google Maps


The tool curator is a ticking time bomb in modern IT and can be found in almost every company, especially in the web field. In the following, I would like to explain in detail why this mostly harmless-looking person is downright toxic for any company.

TL;DR - straight out: the tool curator is an irresponsible idiot. Unfortunately, he is not aware of this.

He comes in two forms. With one there is still hope; with the second, it's a lost cause.

The youngster:

The first form is a beginner who cannot even be called a junior anything. He is well aware of his deficits and motivated to catch up on them as quickly as possible.
But instead of occupying himself with what he should actually be doing, he looks for the quick success.
He always tries to be up to the minute and is therefore well informed about the latest buzz and hype. Since his skills are not sufficient to build things successfully himself, he is constantly looking for tools that promise quick solutions (and quick does not mean efficient here).
If the program weren't so frowned upon, he'd use Dreamweaver, just to avoid doing what he is actually supposed to be doing.
For him, being up to date is a method. None of his tools are older than two years: guaranteed not enough time for anyone to have built up noticeable expertise with them. So the bar for belonging to the avant-garde is promisingly low.

The scrap iron:

This specimen is not really a beginner. He would easily have had enough time to make it to senior anything in some field.
However, he lacks self-confidence, or trust in his own work.
Since he is quickly frustrated and gives up when faced with problems, he has not acquired any noteworthy skills in his long years on the job and is now noticeably lagging behind his colleagues.
That is, the lag would stand out if he allowed it to and delivered anything that could be compared with his colleagues' work. But he deliberately doesn't.
Or rather, he has become a master of pretending over time. He certainly delivers results that get noticed by project managers and other decision-makers - and these results are convincing at first glance, because they work on his system and in an early live project. It is difficult to argue against success.

This is precisely why it is so important to do it anyway, as long as the tool curator has not yet fully developed his destructive potential.

In very rare cases, the tool curator sits on the management board himself. There, however, his strange behavior does not develop as compensation for his own inadequacies. He also doesn't use the tools himself, which makes it a lot harder to talk to him about them, because he doesn't even know them.
He doesn't need to, because he is not in a position where he has to implement what he is planning. He is driven by the two keywords easy and fast.
In management, both terms are synonymous with profit. It is not for nothing that they are the most common buzzwords on tool vendors' websites.
By using the latest tools, you can stay part of the conversation and sell yourself as young and dynamic.
In my opinion, it is more of a gambler's mentality.

The tool curator has a good standing in today's IT world. The time pressure is enormous and the budgets hardly leave any room for proper analyses and tailor-made software.
It is not uncommon for the tool curator to be the first to present a working solution. The following principle applies:

Whoever delivers is right.
But how does he do it?

He is a master of the search engine. On top of that, nobody can touch him in the field of tools; he hardly does anything else. While the rest of the company is stuck in nasty little details, trying to solve problems with older software or to implement detailed customer requests, he is constantly looking for ready-made solutions. With his comprehensive tool know-how, he can determine the right combination of tools in impressive time. Then there are more tools to combine the tools, tools to automate the assembly, and the prototype is ready, which can be deployed straight to production with the appropriate tools. Management and executives are enthusiastic.

READY.
LOAD "MAGIC",8,1
SEARCHING FOR MAGIC
LOADING
READY.
RUN

In this way, through his apparent success, he makes the competence he does not have look superfluous.
A relic from the beginnings of IT, when there were still legendary experts who had acquired their know-how through costly failures, and especially by overcoming them, and for whom companies had to put a multiple of his wage on the table to get them to work for them.
Less salary, more success? A win-win for tool curators and companies.
Bonus: anyone in the company can become a tool curator who is unimportant enough to have plenty of time to occupy himself extensively with blogs, tool magazines and the tools themselves.

Many will think at this point:

The success proves him right!

If that were the case in the long run, I wouldn't bother writing these lines.
Envy cannot be the motive here, because, as I mentioned, it is very easy to become a tool curator.
So I myself would have had plenty of time, coupled with my old-fashioned know-how, to become a master tool curator.

So what is it that makes this person so dangerous for their colleagues and the company?

The tool curator has no idea!

What, how? Wasn't it just explained that, thanks to tools, he is successful anyway?

An example: The tool curator has no idea what “the cloud” is.

What we know today as “the cloud” was developed to solve problems with the infrastructure of large data centers. These include:

  1. Server monocultures that cannot be used to capacity because their size does not really fit any application. In addition, purchasing additional servers took far too long in the event of bottlenecks.
  2. Networks that had to be rewired by hand, which was time-consuming and error-prone with masses of cables.
  3. Insufficient encapsulation of the services installed on the same servers (done to increase the data center's efficiency), which can lead to unpredictable side effects that are difficult to test.

Cloud providers have long since got a grip on all of these problems with their own proprietary solutions. They offer:

  1. Different instance classes, so that there is a suitable instance class for every service and you can start any number of identical instances of that service. If the class no longer fits, you just pick a different one.
  2. Completely virtualized networks that allow you to quickly configure virtual firewalls and switches, and just as quickly connect and isolate services as you like.
  3. Through virtualization of the instances, or the provision of real machines in various sizes, the services are always encapsulated from one another. Side effects between services resulting from shared hardware no longer occur.

How you can tell that the tool curator has no idea about cloud infrastructure is the simple fact that he doesn't use it. Instead, he uses containers.
There is nothing wrong with containers per se, but he uses them wrong.

Container infrastructure was developed with exactly the same goals that gave the impetus for the cloud. Point 2 is solved more or less equivalently; the other two points in brief:
Containers do not need instance classes. Instead, you run several services on a shared operating system kernel, as was done before the cloud and before containers, but isolate them better through additional measures, so that fewer side effects occur and many services can be stacked onto one server until it is busy enough to run cost-effectively.

Containers do not solve any new problems here; they, too, are designed for large data centers with complex networks and server monocultures.

Utilization cannot be used as an argument either.
In the cloud, the smallest instance has roughly the performance of a Raspberry Pi, and you shouldn't go much smaller for microservices anyway, in order to have a little headroom for the unexpected and for bursts. It is not for nothing that the smallest AWS instances can briefly deliver more performance during bursts on the basis of accumulated credit points.

So what problem does the tool curator solve with containers in the cloud?

Above all, the tool curator would like to turn the well-known "it works on my system" into a valid argument. Cloud systems cannot be run on his machine and must be developed in the cloud using the provider-specific API.
Since the tool curator is still buzzing his way up in the company, he has no access to that during his development phase and cannot shine with automated cloud systems.
He needs a demo system that can be transferred seamlessly from his development machine to the live systems. He likes to argue that his system is provider-independent and would theoretically even work across multiple providers.
That is true, but operationally complete bullshit because of the latencies, which is why at most portability between providers would be an argument. Anyone who knows the providers' qualities will not want to switch providers in the middle of a project.
And even if they did, it would be a fairly laborious process with containers as well.

So what's so bad about containers in the cloud?

The cloud and containers are two similar solutions to the same problem. Since it makes no sense to roll out just one service per micro instance in containers, the tool curator first negates all the advantages of the cloud.
He doesn't use any of them. Instead, he builds a good old data center out of cloud infrastructure, in which he can then work with containers. This is obviously nonsense from an operational point of view, but also from an economic one.
As the instance size increases, there are seldom price reductions; instead, the prices rise faster than the performance of the instances. Setting up a data center for containers inside a data center that offers cloud virtualization is therefore more expensive than directly using cloud virtualization, with its solutions for the very same problems.
In addition, it makes no sense to include another abstraction layer that offers no additional benefit.
If you can build a container image, you can also build an instance image for the cloud, or deploy your software directly onto a suitable base image.
No container is required.

The approach is just as absurd as using containers to assemble VMs inside them and then shipping VMs in those containers.
Fortunately, nobody has tried that with any media impact yet, which is why we can still perceive it, unbiased, as absurd.

Nor can you scale faster with containers than in the cloud, because the system would start to oscillate if you didn't average the measured load to some extent and reacted within seconds instead of minutes, which would in principle be possible with containers. So there is no operational advantage here either.
Most providers have their own cloud infrastructure with an excellent API that can easily be automated, and for which serious tools exist. Here, too, the container inside the VM offers no advantage.

As an aside: Google's container engine is a well-functioning implementation of the container approach, without an additional layer of cloud techniques that would merely be duplicated by the containers. Deploying containers that way makes perfect sense.
But that is not what the tool curator does.

So if the tool curator produces nonsense, why is he allowed to carry on?

The problem lies in how success is defined on management floors these days.
Because there, too, the bar is lower than ever before. Success is when the setup behaves in the tool curator's development environment as requested by the customer.
Thanks to tools, this environment can be transferred one-to-one to the production system, right? Right?

No!

All the tools and abstractions the tool curator works with will never be able to recreate the conditions that exist on the scaling production system.
Especially not when the system comes under a load that the tool curator's stylish, extra-slim sub-something cannot handle. Not to mention the even bigger problem of generating a realistic load at all (I'll come to that later).

In short: the tool curator is not testing properly!

How could he? He doesn't even know what to test. And if he did, he still wouldn't know how. He doesn't know the components of the system he is tooling together well enough to be able to test them.
He relies entirely on his community of fellow tool curators having already done so.
He does not consider that his particular composition of tools has never been tested in combination - because otherwise the product would already exist and he would not be sitting on the task right now.

When he deploys a database, he does not know what is in its configuration. In the best case, default values; in the worst case, some undocumented hack that another tool curator copied from a blog (without understanding it) to solve a completely different problem.
He also does not know that his database, which is planned as a distributed, horizontally scaling database, will never scale efficiently, because the backend - which he also knows nothing about - sends a read for the same data after every write in order to update the frontend and check whether the data was actually written.

Explanation: this forces all DB nodes to wait until the data has been written to every relevant node (a 100% consistency requirement).
The correct approach would be to update the frontend with the data that was just written and to wait asynchronously for the write to complete, reacting to errors if necessary.
A database that does not notice that its own write has failed is not a database.

Why isn't it done that way in the first place?

Because then you couldn't reuse the same function you use for normal data queries, and error handling becomes a lot harder when the system has already moved on and the error has to be corrected after the fact.

A system that blocks until the read is done after the write is just much easier to write. It just scales worse than a single node that doesn't have to synchronize data.
Tool curators who write tools for tool curators.
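A minimal sketch of the two patterns, using a purely hypothetical async database client (db.write, db.write_async and WriteFailed are made-up names, not any real library's API):

    import asyncio

    class WriteFailed(Exception):
        # Hypothetical error raised when a write is not acknowledged.
        pass

    async def save_blocking(db, key, record):
        # Read-after-write: blocks until the write is visible everywhere,
        # forcing full synchronization across all nodes on every request.
        await db.write(key, record)
        return await db.read(key)        # only used to refresh the frontend

    async def save_optimistic(db, key, record, on_error):
        # Update the frontend with the data we already have and confirm the
        # write asynchronously, handling a failure only if one is reported.
        pending = db.write_async(key, record)   # returns an awaitable ack

        async def confirm():
            try:
                await pending
            except WriteFailed as err:
                await on_error(key, record, err)

        asyncio.create_task(confirm())
        return record                    # frontend gets its data immediately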

The tool curator deploys a web server - guess what: default values. He couldn't say how many connections the system can hold at the same time, at what point he has to scale out due to lack of resources, and so on.
And of course that's just one example.
Configuring a web server optimally for a specific project is an art - which the tool curator cannot master.

But why a web server at all? Nowadays, every programming language known to the tool curator either brings its own web server with it, or has a community project that does exactly that and generates plenty of buzz.

So let's just let the big, bad internet hammer that project directly and, in the worst case, let it crash on a nasty little request that could have been filtered out by a web server. Over and over again.

Not enough?

The tool curator will set up a CI pipeline for you in no time at all, which immediately pushes every small commit to live. Human QA? Abolished at the moment of going live (oops!).
A few days later, people notice that every piece of crap ends up live, regardless of whether it works or not. Thanks to CI, staging and beta systems are no longer needed - who is supposed to test on them if every commit can potentially go live immediately?
So: an automated testing tool. Quickly, a new tool with which the whole company can be easily and quickly turned into testers.

How does that work?

Fast and easy, of course: the software itself is not tested; instead, what used to be human front-end QA is automated with a tool. In essence, a click path is automated, which checks the output for every input imaginable along the way.

If you manage to achieve 100% test coverage, isn't it all good again?

No, it is not. It only means that 100% of all possible (incorrect) inputs via the frontend have been tested.
The frontend does not send a UI click path, but plain HTTP requests.
These can be rewritten at will, enabling a large number of attacks that drop the coverage of the front-end click path back into the per-mille range.

But that should also be disastrous for “normal” projects, right?

Only in part. As already mentioned, you would normally filter out requests in the web server that don't map to any meaningful function in the backend. In addition, in old-fashioned, slowly developed projects, each component was usually touched, configured and maybe even tested a bit by someone in the company.
Hence a good part of the weaknesses is known, and if it wasn't done beforehand, there is now enough know-how to write backend tests and check the interaction of all components.
You could even build a reasonably functioning CI pipeline. Retrofitting one under time pressure probably breaks more than it fixes.

But a third class of tests is still missing: load tests.
Under no circumstances can load tests be built from front-end tests.

The tasks are disjoint. No synergy. Nothing. Nada.
Just forget everything that some startups or attention-grabbing blogs want to tell you.

How so?

Just generate the load of at least a thousand (maybe even a million?) browsers with automated browser clicks! On which system? Which server (cluster?) can run more than ten to twenty completely independent, full-blown browsers at the same time? How do you model realistic usage patterns from these pure function tests? No idea? Me neither!
In my experience, the best load test comes from a recording of the live traffic of an early prototype. But such a recording is not so easy to get out of a tooled-together system. There is no reasonable place to hook in Wireshark or some *dump tool.
Even if you had one, you could only run replays. But many systems are anything but stateless (not to be confused with a possibly stateless API), so you cannot, for example, post the same invoice twice and get the same behavior.
Each replay then just produces an error message, which probably causes far less work on the system and again does not yield realistic load behavior.

At this point the tool curator left us a minefield made up of nothing but black boxes.

Components that the tool curator cobbled together without review, which really should all have been tested, individually and in combination.
Automated test cases could have been written for them, or at least downloaded and installed.
The time that was saved before going live, because the tool curator is not familiar with the individual components of his epic spaghetti monster of PHP, Ruby, Python, NodeJS and completely unknown binaries, is now missing in production, in front of rightly dissatisfied customers.

To put numbers on it: roughly five months of regular development time were pulverized down to four to six weeks. The missing know-how must now be acquired during the ongoing incident. Under enormous pressure.
That kills concentration and, in the worst case, you work more slowly and make more errors than in the regular development phase. But the tool curator does not do this work. He lacks the qualifications for it.

If the company survives this, the tool curator himself most likely won't even have to take the blame for it.
He can fall back on a broad community of

But that also worked with XY!

shouters, all of whom have cool boilerplate websites that again say nothing but easy and fast, while nobody goes into detail about how XY actually solved its concrete problem, what the specific use cases for their tools are and, above all, what they had better not be used for.
Everyone works like that these days anyway, and it was probably just the bad infrastructure that was to blame - which the tool curator, of course, had no hand in creating.

You hear surprisingly little about all this nowadays, and you might justifiably assume that I am just painting things black.

Unfortunately, Germany still has no pronounced "why we failed" culture.
The few talks that do exist are mostly concerned with mismanagement and startups whose business idea was utter nonsense.
Those who were successful draw all eyes to themselves and set the direction of the hype - where this approach fails, everyone fails quietly on their own.

Of course, letting a tool curator have his way is also extreme mismanagement, but on the executive floor everyone dreams of being a little bit like Google.
Google, after all, also works with an incredible number of tools.

Except that Google writes its tools itself, certainly also tests them massively, and has dedicated ops tool teams on hot standby who have time for problems while the regular troops keep working normally.

But you only have one or two tool curators whose greatest achievement would be writing a tool that automates other tools. Untested. Which, in extreme cases, gets downloaded en masse by other tool curators, because they trust that the tool curator community has already tested it.

To be clear: I am expressly not speaking out against software reuse. It is one of the cornerstones of the open source community.
C is software reuse of assembler.
Go is software reuse of C.
Every program that links a library is software reuse.
Every distribution merely compiles existing software.

But exactly there lies the difference to what the tool curators are doing:
A distribution ships each library only once per release and mercilessly tests all linked programs to see whether they work with it. That way you can safely keep working within a stable release without worrying that a dependency could change at any moment.
The tool curator, on the other hand, automatically assembles a ball of externally downloaded software that can only be stable in the temporal vicinity of the last manually supervised build, even if every component in its container were stable in itself. Because of its one-off, hand-adjusted dependencies, this software is unmaintainable.
The release cycles (if you want to call them that) of the tools are far too short, and what worked once can be impossible again with the next version of the tools.
This way, the tool curators create builds that are reproducible only at that exact moment and for which there can be no long-term stability guarantees.
As versions move on, it can quickly happen that a comparable project can no longer be assembled from the existing tools without massive manual work.
The time at which the actual work has to be done slips to after the supposed go-live and now has to be squeezed in without a budget, at the expense of upcoming projects.
Or you let the project die - after all, it is over a year old and the customer cannot be won over for a new budget that will be significantly higher than the first one.

Does the CI pipeline count as part of that? "Well, it runs." Once such a project breaks, the next successful build of the same project becomes a lot harder than simply migrating to a newer release of the same distribution, because now the defaults no longer apply and the build process has to be adjusted. A configuration compatible with the first go-live has to be created - and the first configuration is largely unknown, thanks to tools. So despite the CI you either don't apply any security-relevant updates, or you have a permanent construction site in the CI, which in small companies devours more time than delivering the software the old-fashioned way in tested releases.

If you have masses of employees who all work behind the same CI, it pays off to assign a team exclusively for the CI.
In small companies I have never seen a regular time budget for this. So it is not uncommon for the job to be left to a single person on the side, or the CI simply stays as it is until the project's end of life, because new projects that need tools and CI have long been queued up.

Hence my well-intentioned advice:
If a job in a company with a tool curator beckons - don't take it.
If a tool curator has developed in your company - quit.

I am not saying that I have a patent solution for how to stay competitive today with properly developed software, but: if this way of gambling on "it'll be fine" really is supposed to be the new way of software development, then it will probably only be corrected by a wave of bankruptcies among the "bad players".

Something current on the topic on Heise:
Martin Thompson: "We have to go back to simplicity and elegance"

And by simplicity he certainly does not mean "you don't have to be able to do anything".


In the next post I would like to explain my thesis that system administrator, or DevOps engineer, has become one of the most thankless jobs in the average web shop, and try to make this understandable with deliberately exaggerated examples.

In my opinion, the administrator bears one of the greatest responsibilities in a company, plus a risk that has recently become more and more incalculable. The young and motivated who replace him are rarely aware of the latter in particular.
Of course, the salaries usually bear no relation to this.

Those who could do something about this situation do not want to be bothered with details.
For a long time I suspected bad intent behind this. Today I'm not so sure anymore. Rather, the problems of the infrastructure are often too complex and require some specialist knowledge in order to be able to understand them at all.

Nowadays, DevOps engineers are often sought as freelancers who are supposed to fix problems that have been located in the infrastructure, automate things™, and then make themselves dispensable again.

In my experience, the problems that a DevOps engineer is supposed to solve are quite often not in the infrastructure at all. As a result, he is automatically faced with a problem he cannot solve, but (especially as a freelancer) is handed the responsibility for it when the problem still doesn't go away after prolonged effort and the associated costs.

So I think that experience in this profession kills any motivation.

This was actually meant in a rather humorous way, and I thought to myself: anyone who knows what I'm talking about will grin and probably scroll on, undeterred, through their Twitter stream.

But of course there is always a troublemaker who has to weigh every single word ...

... and who cannot be brushed off with a little hint à la "use your own brain".

So now you know: @plappertux is solely responsible for making this a very long blog post:

The tool curator


Attempting to offer even simple real-time 3D graphics on websites has always been painful. Before WebGL was finally adopted by Microsoft and Apple as well, there were several technologies that tried to become the de facto standard for 3D graphics on the web.

Unwilling to support all of them, I asked myself the question:

What if I don't use any rendering technology at all?

Obviously then I have to do the complete rendering myself.

Sounds like a challenge? Well, that depends on what you're up to. If it's all about a trivial projection, a simple light source, and rudimentary textures - it's not as difficult as one would think.

Since I learned 3D graphics using OpenGL 2.1, I never really got used to the "new" freely programmable pipeline in today's OpenGL/WebGL. I never liked the definition of free in it, because it is not free as in "you can do this"; in fact it is a "you have to do this!"

Although I've written some OpenGL 4.0 code by now, my shaders were copied one-liners.

Having said that, I worked on this project for two reasons:

  1. to finally understand what shaders are and what can be achieved by using them
  2. to find out whether the performance might be sufficient to serve as an emergency fallback when WebGL is not available.

Just to be clear: I know that there is 3D rendering support in Flash and Microsoft's Silverlight. I think these technologies are simply unsuitable for meeting today's requirements of the web - especially if you are working towards portability and do not want to build the same thing with several different proprietary technologies.

What you need to start rendering is exactly what you would need for hardware accelerated rendering - but I'd like to briefly list it here.

  • a vertex model of the objects you want to render
  • Texture mapping coordinates
  • Normal vectors for triangles

a vertex model of the objects you want to render

Since I wanted to render a globe, this was pretty easy. All I needed was a 5120-sided icosahedron (an icosphere, i.e. a subdivided icosahedron). That is a pretty high triangle count for an object that is only going to be rendered in software, but the next smaller subdivision that is mathematically possible would only have had 1280 triangles, which looked more like something you would roll in a tabletop game than like a ball. Although this makes texture mapping a little more complicated, I prefer the shape of the icosahedron to the usual sphere approximation because all of its faces are the same size.
The wireframe view of a normal sphere approximation seems unbalanced to me: its faces become vanishingly small at the poles.
I'm sure there is an amazingly simple formula to calculate my icosahedron in code, but I only applied a few regex replacements to a model that I exported from Blender as a Wavefront .obj. Since I have absolutely no idea how to transfer a world map onto an icosahedron, I had to rely on Blender anyway.
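
For the record, here is a minimal sketch (in TypeScript, which I'll use for all snippets below) of what such an .obj-to-arrays conversion could look like. The names parseObj and Mesh are mine, and it assumes a triangulated export that only contains v, vt and f lines.

    // Minimal Wavefront .obj parser. Assumes a triangulated export with only
    // v (vertex), vt (texture coordinate) and f (face) lines.
    interface Mesh {
      vertices: number[][]; // [x, y, z] per vertex
      uvs: number[][];      // [u, v] per texture coordinate
      faces: number[][];    // per face: [v0, vt0, v1, vt1, v2, vt2] as 0-based indices
    }

    function parseObj(source: string): Mesh {
      const vertices: number[][] = [];
      const uvs: number[][] = [];
      const faces: number[][] = [];

      for (const line of source.split("\n")) {
        const parts = line.trim().split(/\s+/);
        if (parts[0] === "v") {
          vertices.push(parts.slice(1, 4).map(Number));
        } else if (parts[0] === "vt") {
          uvs.push(parts.slice(1, 3).map(Number));
        } else if (parts[0] === "f") {
          // "f 1/1 2/2 3/3" - .obj indices are 1-based, so convert to 0-based.
          const face: number[] = [];
          for (const corner of parts.slice(1)) {
            const [v, vt] = corner.split("/").map(Number);
            face.push(v - 1, vt - 1);
          }
          faces.push(face);
        }
      }
      return { vertices, uvs, faces };
    }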

Texture mapping coordinates

As I noted earlier, this is black magic to me. To map a texture onto anything other than a rectangle, you need a mapping from the triangles of your object to the parts of the texture they are supposed to fill.

It seems like there is no perfect solution for an icosahedron based sphere. Simply clicking on UV unwrap was sufficient for my project.
As you can see, an icosahedron is also not an optimal object for mapping a texture onto a sphere. I doubt there is one.
As long as you map a rectangular texture onto a sphere, its top and bottom edges end up at the poles, which means that those entire areas are projected onto just a few triangles.
Amazingly, Blender generated some texture coordinates that were larger than one. These would usually wrap around the texture. I chose to ignore them, however, as that would add some extra "if"s for every single pixel to be rendered, and I didn't expect my approach to be quick enough anyway (ultimately, I was right).
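
Just to illustrate what I skipped: wrapping such coordinates boils down to a modulo per component before looking up the texel. A hedged sketch, with names of my own making:

    // Wrapping a texture coordinate outside [0, 1] back into range ("repeat"
    // behaviour). Math.floor also handles negative values, e.g. -0.25 -> 0.75.
    function wrapUV(u: number, v: number): [number, number] {
      return [u - Math.floor(u), v - Math.floor(v)];
    }

    // Turning a (wrapped) UV pair into texel indices of a width x height texture.
    function uvToTexel(u: number, v: number, width: number, height: number): [number, number] {
      const [wu, wv] = wrapUV(u, v);
      return [
        Math.min(width - 1, Math.floor(wu * width)),
        Math.min(height - 1, Math.floor(wv * height)),
      ];
    }

Even a couple of extra operations per texture lookup add up when every pixel is computed on the CPU.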

Normal vectors for triangles

Again, you could simply calculate normal vectors in your own code (cross product), but as long as the objects have a static shape that is only ever modified by the model-view matrix (I'll get to that later), you don't need to.
Normal vectors come in handy for a few things. They help you to find out whether a triangle is pointing inwards or outwards, they are needed for lighting calculations, and they help to find triangles that cannot be seen from the camera perspective because they face the opposite direction.
The latter (backface culling) is the only technique I have used the normal vectors for. CPU-based texture mapping is a very expensive process, which is why every triangle you don't have to draw helps to speed up rendering.
Of course, this can only be applied to opaque objects. With transparent ones, triangles facing away from the camera can and should still be visible.
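
For completeness, this is roughly what calculating a face normal yourself would look like (illustrative names, counter-clockwise winding assumed):

    // Face normal of a triangle (a, b, c) via the cross product of two edge
    // vectors, normalized so that later dot products only encode direction.
    function faceNormal(a: number[], b: number[], c: number[]): number[] {
      const e1 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
      const e2 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
      const n = [
        e1[1] * e2[2] - e1[2] * e2[1],
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0],
      ];
      const len = Math.hypot(n[0], n[1], n[2]) || 1;
      return [n[0] / len, n[1] / len, n[2] / len];
    }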
I was able to get everything from Blender except for the texture, which left me ready to dive into the rendering process.

To do:

  1. Project vertices from 3D object space to 2D screen coordinates
  2. remove invisible triangles
  3. calculate which pixels are really in a triangle
  4. Map texture pixels onto triangles
    All of these steps are usually performed by the hardware. At least for steps two to four we are more or less used to the fact that the hardware behaves like a black box.
    I'm no exception to this so I had no idea how to do this, but let's start at the beginning.

Project vertices from 3D object space to 2D screen coordinates

It doesn't matter whether you want to use hardware acceleration or not, you still have to create some matrices to project your vertices.
Well ... that's not 100% true. In fact, you can compute your vector algebra entirely without the use of matrices, but that doesn't make the process any faster.
So I used the same matrices that I would have used for WebGL.
It increases the performance considerably if you multiply the matrices together beforehand (at least for all vertices to which the same transformations are to be applied):

combined = VPcorr * projection * camera * model

Explained from right to left:

model: represents all transformations that should only be applied to a single object or a part of it (typically rotation, translation and scaling).

camera: if the camera moves, this matrix is shifted and rotated. This keeps the camera independent of everything else. If you're nerdy, you can instead apply the inverse transformation to everything but the camera:
if the camera takes three steps forward in some direction, you can just as well move the whole world three steps in the opposite direction.
So the camera matrix is just a convenience construct.

projection: this is where the magic happens. I will not go into details; if anyone is interested, they should read the relevant paragraphs in the OpenGL documentation - that is where I learned how to build this matrix component by component.
If you want an orthographic projection, there is little more than scaling and shifting in this matrix.
If, on the other hand, you want a perspective projection like me, it gets a bit more complicated, but again I'll spare you the details.
There is one last detail to mention: if you don't use the hardware-accelerated rendering pipeline, you have to perform the perspective divide yourself, i.e. divide each x and y coordinate by the associated depth (the homogeneous w component, which for a standard perspective matrix carries the camera-space z). The hardware does this implicitly. If you forget it, the projection will look more like a faulty orthographic projection.

What about VPcorr?

I'm glad you noticed. Now that you ask: This is another calculation that you don't have to do when using hardware rendering, as the hardware automatically adjusts the virtual projection to screen coordinates.
The coordinates obtained from the projection are still between minus one and one. Obviously this is not very useful for putting pixels on a canvas whose origin is at zero/zero in a corner and whose coordinates are always positive.
VPcorr scales the coordinate system by half the size of the drawing area in each direction and moves the origin to the center of the drawing area.
From this point on we can use these coordinates directly as coordinates on the drawing area. There is still a z coordinate that we will use later, but we no longer need it for the step of getting drawing surface coordinates.
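
To make this concrete, here is a hedged sketch of VPcorr and the combined multiplication, assuming column-major 4x4 matrices as flat arrays (the OpenGL convention); multiply4 and viewportCorrection are just my illustrative names:

    // 4x4 matrices as flat Float32Arrays in column-major order.
    // Generic matrix multiplication: result = a * b.
    function multiply4(a: Float32Array, b: Float32Array): Float32Array {
      const out = new Float32Array(16);
      for (let col = 0; col < 4; col++) {
        for (let row = 0; row < 4; row++) {
          let sum = 0;
          for (let k = 0; k < 4; k++) {
            sum += a[k * 4 + row] * b[col * 4 + k];
          }
          out[col * 4 + row] = sum;
        }
      }
      return out;
    }

    // VPcorr: scale by half the canvas size and move the origin to the center.
    // The y scale is negative because canvas y grows downwards.
    function viewportCorrection(width: number, height: number): Float32Array {
      return new Float32Array([
        width / 2, 0,           0, 0,
        0,         -height / 2, 0, 0,
        0,         0,           1, 0,
        width / 2, height / 2,  0, 1,
      ]);
    }

    // combined = VPcorr * projection * camera * model
    function combine(vpCorr: Float32Array, projection: Float32Array,
                     camera: Float32Array, model: Float32Array): Float32Array {
      return multiply4(multiply4(multiply4(vpCorr, projection), camera), model);
    }

The y flip is my own addition (canvas coordinates grow downwards); you could just as well handle it when drawing.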

Now that we have a combined matrix, what do we do with it?

In the hardware-accelerated case you wouldn't even have to multiply the matrices yourself - it is enough to pass them to a vertex shader, which does the calculations for us.

If you really do everything yourself: multiply every single vertex of the model by the combined matrix, save the results, and don't forget the perspective divide at the end.
What's left in memory are the vertices projected onto the drawing area for a single frame.
Obviously when the object moves you have to repeat everything for the next frame, but that's not the part that heats up the CPU just yet.
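
As a sketch, the per-vertex loop could look like this. I divide by the homogeneous w component here, which for a standard perspective matrix is exactly the camera-space depth the divide mentioned above refers to; the matrix layout matches the snippet before:

    // Transform every vertex by the combined matrix and perform the perspective
    // divide. Input vertices are [x, y, z]; the output holds drawing-area
    // coordinates plus a depth value we keep for sorting later.
    function projectVertices(vertices: number[][], combined: Float32Array): number[][] {
      return vertices.map(([x, y, z]) => {
        // Column-major matrix times column vector (w = 1).
        const cx = combined[0] * x + combined[4] * y + combined[8]  * z + combined[12];
        const cy = combined[1] * x + combined[5] * y + combined[9]  * z + combined[13];
        const cz = combined[2] * x + combined[6] * y + combined[10] * z + combined[14];
        const cw = combined[3] * x + combined[7] * y + combined[11] * z + combined[15];
        return [cx / cw, cy / cw, cz / cw]; // the divide the hardware does implicitly
      });
    }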

Remove invisible triangles

Since the speed of rendering depends largely on how many triangles you want to render, it's always a good idea to remove those that would otherwise be overdrawn by triangles closer to the camera.

As long as you don't work with transparencies, the simplest technique you can implement is backface culling. All you have to do is this:

  • Apply the model-view and projection matrix to the normal vectors
  • calculate the scalar (dot) product between the transformed normal and the viewing direction
  • discard all triangles for which the result is greater than zero - the angle between normal and viewing direction is then below 90°, which means the triangle is facing away from the camera (see the sketch after this list)
    Sounds easy? I have to admit, I screwed up somehow. Applying the projection matrix in addition led to strange results, so I only used the model-view matrix before forming the dot product.
    This is inaccurate, as a perspective projection changes the normal vectors as well. In my case, there were more triangles facing away from the camera than my backface culling implementation could find.
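
A minimal sketch of the basic idea; treating the viewing direction as a constant (0, 0, -1) is my simplification, and - like the version described above - it is only an approximation for a perspective camera:

    // Backface test: rotate the face normal with the model-view matrix (the
    // translation part is irrelevant for direction vectors) and compare it with
    // the viewing direction (0, 0, -1). A positive dot product means the
    // triangle faces away from the camera and can be skipped.
    function isBackface(normal: number[], modelView: Float32Array): boolean {
      // z component of the transformed normal (upper-left 3x3 of the matrix).
      const nz =
        modelView[2] * normal[0] +
        modelView[6] * normal[1] +
        modelView[10] * normal[2];
      // dot(transformedNormal, (0, 0, -1)) = -nz
      return -nz > 0;
    }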

If your backface culling is not good, or you work with transparencies, or you work with objects that are more complex than standard geometric objects such as spheres or cubes - in short: always - then there's one more thing you should be doing:

All triangles need to be sorted according to their position on the z axis.

If you don't do that and just draw all the triangles in the order they are in the buffer, then you will draw triangles that are further away from the camera over those that are closer. If you are a little sensitive, it could make you vomit - your brain has never seen anything like it (I'm serious).

I just used QSort with a callback function that compares z coordinates.
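
In TypeScript the equivalent of that QSort call is just Array.prototype.sort with a comparator; the triangle structure here is illustrative:

    // Painter's algorithm: sort projected triangles back to front by their
    // average z before drawing. Adjust the comparison if your depth convention
    // is the other way around.
    interface ProjectedTriangle {
      corners: number[][]; // three projected [x, y, z] corners
    }

    function averageZ(t: ProjectedTriangle): number {
      return (t.corners[0][2] + t.corners[1][2] + t.corners[2][2]) / 3;
    }

    function sortBackToFront(triangles: ProjectedTriangle[]): ProjectedTriangle[] {
      return triangles.slice().sort((a, b) => averageZ(b) - averageZ(a));
    }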

Please don't get this wrong: sorting the triangles by their z coordinate does not, of course, remove any further triangles that wouldn't have to be drawn. It just protects you from drawing them at the wrong time.
They are still there, but the sorting makes sure that triangles closer to the camera are drawn over them.
At least the result on the screen is now correct.