Real-time Linux

My opinion has always been that the Linux-RT project was based on an unfixable engineering error.

 

A few words on the status and the future of RT:
-----------------------------------------------

The situation since last year's RTLWS (https://lwn.net/Articles/572740/)
has not improved at all; it's worse than before.

While shortly after RTLWS quite a few people promised to whip up proper
funding, nothing has materialized, and my personal situation is worse
than before.

I'm really tired of all the politics involved, the blatant lies and
the marketing bullshit which I have to bear. I learned a few months ago
that a certain kernel vendor invented most of RT anyway and is the
expert in this field, so the customers don't have to worry about my
statements.

Just for the record: The initial preempt-RT technology was brought to
you mostly by Ingo Molnar, Steven Rostedt, Paul McKenney, Peter
Zijlstra and myself, with lots of input from Doug Niehaus, who
researched full in-kernel preemption already in the 1990s. The
technology rewrite around 3.0-rt was done by me with help from Peter
and Steven, and that's what preempt-RT today is based on.

Sure, people can believe whatever marketing bullshit they want, but
that doesn't make the truth go away. And the truth is that those who
claim expertise are just a lying bunch of leeches.

What really set me off was the recent blunt question of when I'm going
to quit. What does this mean? Is someone out there just waiting for me
to step down as preempt-RT maintainer, so some corporate entity can
step up as the saviour of the Linux RT world? So instead of merely
leeching, someone seeks active control over the project. Nice try.
http://lwn.net/Articles/604632/

The Horrors of Software Patents

US Patent Number 4701722 A is a perfect example of everything software patent opponents hate about software patents:

  • It implements mathematical functions that are pretty well known.
  • It covers a process of changing information, not changing any physical object one could touch.
  • It is owned by a company with a business model depending on licensing fees, not manufacturing things: what many people would call a “patent troll”.

US 4701722 A is one of the basic patents of Dolby B noise reduction, awarded to Ray Dolby back in 1987. It’s not a software patent; it is a circuit patent. The usual arguments about why software should not be subject to patenting make little sense. If a circuit can be patented, then it should be possible to patent a program. The beginning of the study of computation as a mathematical subject was the discovery that a simple general purpose engine reading instructions from some sort of store can emulate any circuit. The early computers were “programmed” by physically moving wires around. The photo below shows ENIAC being programmed. A “program” then was obviously a circuit design. Technology advanced so that these circuits could be stored in memory and on disk drives. But that did not change the basic process – writing software is conceptually the same as wiring up an electrical circuit.

Photo: ENIAC being programmed.
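
A minimal sketch of that point, in Python for convenience (an illustration only, not part of the original argument): a half adder written down purely as a wiring of NAND gates and then evaluated by a general purpose program. The program really is just the circuit in another notation.

    # A toy version of the argument above (illustration only, not from the
    # original post): a half adder is written down purely as a wiring of NAND
    # gates, and a general purpose machine evaluates that "circuit" as a program.

    def nand(a, b):
        """One NAND gate, as a function of its two input wires (0 or 1)."""
        return 0 if (a and b) else 1

    def half_adder(a, b):
        """Half adder built only from NAND gates, wired exactly as on paper."""
        n1 = nand(a, b)
        total = nand(nand(a, n1), nand(b, n1))   # XOR of a and b
        carry = nand(n1, n1)                     # AND of a and b
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            total, carry = half_adder(a, b)
            print(a, "+", b, "-> sum", total, "carry", carry)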

And around the same time Nyquist/Shannon showed that analog and digital signals could be equivalent to each other. Ray Dolby knew, and noted in this patent, that the analog signals transformed by his invention could be transformed into digital information and transformed back – as convenient. If there is an intrinsic flaw in the concept of software patents themselves, something fundamentally distinct in software that makes software inventions impossible, then the critics of software patents have failed to explain what that could be. See also Martin Goetz, whom I found via the anti-software patent arguments (1 and 2) of Tim B. Lee (who is not Tim Berners-Lee).
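
In the same hedged spirit, a sketch of that analog/digital equivalence (an illustration only, nothing specific to Dolby's circuits): the behaviour of a first-order RC low-pass filter, a basic analog building block, reproduced on sampled data with a one-line difference equation.

    # Illustration only (not anything from the Dolby patent): the behaviour of a
    # first-order RC low-pass filter, a basic analog building block, reproduced
    # on sampled data.  The analog circuit obeys RC*dy/dt + y = x; discretizing
    # at sample interval dt gives the difference equation below.
    import math

    def rc_lowpass(samples, cutoff_hz, sample_rate_hz):
        """Apply a first-order low-pass filter to a sequence of samples."""
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate_hz
        alpha = dt / (rc + dt)           # smoothing factor from the circuit constants
        out, y = [], 0.0
        for x in samples:
            y = y + alpha * (x - y)      # y[n] = y[n-1] + alpha*(x[n] - y[n-1])
            out.append(y)
        return out

    # A 50 Hz tone plus 5 kHz "hiss", sampled at 48 kHz; the digital filter keeps
    # the tone and attenuates the hiss, just as the analog circuit would.
    fs = 48000
    signal = [math.sin(2 * math.pi * 50 * n / fs)
              + 0.2 * math.sin(2 * math.pi * 5000 * n / fs)
              for n in range(4800)]
    filtered = rc_lowpass(signal, cutoff_hz=200.0, sample_rate_hz=fs)
    print(filtered[:3])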

 

The technology disconnect

“What it did was reinforce a point about the sociology of management: From cars to space shuttles, from offshore oil wells to nuclear reactors, the people who make the decisions are often out of step with the mechanical details.” – Matthew Wald, New York Times, 2014/06/09.

We sell pretty complex software that synchronizes clocks of computers, down to nanoseconds in many networks. Our customers are often firms where IT staff, let alone higher management, are consumed by pressing issues, none of which have to do with the nuts-and-bolts of how ideal clock algorithms and time protocols interact with network equipment, operating systems, and application servers. Our easiest sales are to companies where some person is able to get perspective on the technical issue and relate it to business priorities. Otherwise, things get lost between IT staff running hard just to keep up with day-to-day matters and higher management who are “out of step with the mechanical details”, and the big picture falls into the gap.

This is why the most successful big technology companies have built up corps of “technology fellows”, in practice if not formally. These are experts who can take a sufficiently deep dive into technology to appreciate the problem, who also understand business priorities, and who have management confidence. If someone with those skills had been involved in GM's key ignition lock discussions, the company could have made far better decisions. As more and more firms become dependent on critical technology infrastructure, they will need to acquire people with such skills, in-house or not.

also on LinkedIn

The story we were told at a bank that I cannot name is that all their time synchronization was the domain of an engineer tucked away for 30 years in the home office, a fellow known as Professor Time. The system he had built was remarkable in its complexity and fragility. Nobody seemed to understand how it worked. Accuracy was highly variable. There was no management, no documentation. We never got to meet the Professor, but I always thought of him as something like this.

Image: Mattheus van Hellemont, “The Alchemist”

In the past, the enterprise time synchronization market was composed of vendors that sold boxes to IT teams that were then responsible for architecting a solution from GPS clocks and free software that was not at all designed for the job. Time synchronization is a specialized field that is a lot more complex than it may appear, and quality has become more of an issue as trading speeds and volumes have increased so much. Some firms have responded to the change by scaling up their synchronization staffing. Some have relied on luck. Some hope that heavily marketed new technology in PTP-aware routers will solve their problems.

Over the last couple of years, we have been building out technology to provide financial trading firms and other organizations that need precise timing with an alternative to the boxes plus custom in-house approaches. We have built client and server software that is fault-tolerant and cross-checking, with sophisticated alarms, easy configuration, and graphical web management/data-analysis tools. And we’ve put that software in powerful server computers that can serve time directly at 10Gbps (and better) and that have all the standard enterprise features (like lots of storage for archival records and dual power supplies). All the parts are designed to work with each other and to connect in a flexible, resilient time distribution network. Our new partnership with Spectracom brings their extensive hardware expertise, distribution and support infrastructure into the mix.

Back in the day

This is an embarrassing confession when I think back on how little I knew and how much I thought I knew. At the height of the dot-com/Linux boom, maybe 1999, picture a restaurant in Palo Alto, one of those favored-for-business-dinners, super expensive, not-so-great Italian places they have in the Valley. A group of men (the diversity of Silicon Valley) is discussing the possible acquisition of our infant business by a new startup that already had serious VC funding. I was basically a technologist/academic and didn’t know, or even have a guess about, the extent of my business ignorance. Everyone else at the table knew, though. There was a lot of heady talk of big money and the usual Silicon Valley bombast about changing the world to go with the multiple bottles of wine, and at some point, for no good reason that I can imagine now, I blurted out “I’m not primarily in it for the money”. Maybe it was just nerves. The primary VC looked at me, over the table, with an expression of great, vast, immeasurable satisfaction and said something to the effect of, “Don’t worry, I will take care of the money.” And I suddenly understood.

 

Heartbleed and open source

The Heartbleed bug was caused by a business model error. When we were in the real-time software business, our best customer was an old-line manufacturing business that wanted to make sure, before they qualified us as a vendor, that we were making a profit from selling software to them. They did not want to depend on complex engineering products made by a company that would be unable to afford a quality control process or that would not have a motivation to use quality control. This level of clarity is not all that common, and the complexity of open source business models confuses people.

Linux is a generally reliable system because Red Hat is able to monetize the core business by virtue of being the “standard”, because a huge user base acts as first line testers, and because multiple other companies have clear business requirements that push them to invest engineering resources in the system. For example, the makers of network devices usually have professional engineering teams building and testing their drivers so they can sell hardware. This testing, by necessity, also tests the network stack. But if you are not familiar with Linux development, and don’t see all the commits from people with email addresses at major technology companies, you might get the impression that this free software appears magically.

That network of motivations is much weaker for a special purpose component like SSL code, and the quality requirement is also higher. But the same problem can arise even without open source – where pricing for proprietary components is too low. The market involves multiple niches where open source economics or industry pricing assumptions cannot produce the required level of component engineering quality. Discovering and navigating those gaps may be the difference between success and failure.

Also posted on LinkedIn and FSMLabs

Measuring NTPd in the Cloud II

 The problem with ntpd is the sudden performance degradations which can occur when conditions deviate from ‘ideal’. We have detailed these robustness issues of ntpd in prior work, including [17, 18]. Simply put, when path delay variability exceeds some threshold, which is a complex function of parameters, stability of the feedback control is lost, resulting in errors which can be large over small to very long periods. Recovery from such periods is also subject to long convergence times [...]

Now to the results. As expected, and from the very first migration, ntpd exhibits extremely large errors (from -1 to 27 s!) for periods exceeding 15 minutes (see zoom in middle plot) and needs at least another hour to converge to a reasonable error level.

from Virtualize Everything but Time, Broomhead, Cremean, Ridoux, Veitch

TimeKeeper is a “from the ground up” implementation of clock synchronization over both PTP and NTP and does not share ntpd's limits.
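
To make the quoted failure mode concrete, here is a toy proportional-integral clock-discipline loop. It is emphatically not ntpd's actual algorithm, and every constant in it is invented for the illustration, but it shows how one-sided path-delay spikes pollute the offset measurements and leave the feedback loop with large errors and slow recovery.

    # A toy proportional-integral clock-discipline loop.  This is NOT ntpd's
    # algorithm; every constant is invented for the illustration.  It only shows
    # the general effect described in the quote: with small, symmetric path-delay
    # jitter the loop converges, but large one-sided delay spikes pollute the
    # offset measurements and leave the clock with big errors and slow recovery.
    import random

    def run(spike_prob, spike_s, polls=2000, poll_interval=16.0, seed=1):
        random.seed(seed)
        drift = 50e-6        # true frequency error of the local clock (s/s)
        freq_corr = 0.0      # frequency correction applied by the servo (s/s)
        offset = 0.05        # initial clock offset (s)
        kp, ki = 0.3, 0.01   # servo gains, chosen so the noise-free loop is stable
        worst_late = 0.0
        for n in range(polls):
            offset += (drift - freq_corr) * poll_interval
            # Measured offset = true offset + path-delay "noise"; an asymmetric
            # queueing delay shows up as a one-sided measurement error.
            noise = random.uniform(-0.0005, 0.0005)
            if random.random() < spike_prob:
                noise += spike_s
            measured = offset + noise
            freq_corr += ki * measured / poll_interval   # integral term
            offset -= kp * measured                      # proportional correction
            if n > polls // 2:
                worst_late = max(worst_late, abs(offset))
        return worst_late

    print("calm network, worst offset in 2nd half: %.6f s" % run(0.0, 0.0))
    print("spiky delays, worst offset in 2nd half: %.6f s" % run(0.10, 0.200))

Each spike drags the offset by kp times the spike size, so the noisy run never settles the way the calm run does; that is the flavor of fragility the paper measured at much larger scale.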

 

Economics of Free Software

Fate has made me the “money guy” for OpenSSL so I’m going to talk about that for a bit.

As has been well reported in the news of late, the OpenSSL Software Foundation (OSF) is a legal entity created to hustle money in support of OpenSSL. By “hustle” I mean exactly that: raising revenue by any and all means[1]. OSF typically receives about US$2000 a year in outright donations and sells commercial software support contracts[2] and does both hourly rate and fixed price “work-for-hire” consulting as shown on the OSF web site. The media have noted that in the five years since it was created OSF has never taken in over $1 million in gross revenues annually.

Thanks to that publicity there has been an outpouring of grassroots support from the OpenSSL user community, roughly two hundred donations this past week[3] along with many messages of support and encouragement[4]. Most were for $5 or $10 and, judging from the E-mail addresses and names, were from all around the world. I haven’t finished entering all of them to get an exact total, but all those donations together come to about US$9,000.

OpenSSL uses a “give away code and charge for consulting” model that FSMLabs began with in 1999. We couldn’t make it work either.

 

At issue in Apple versus Samsung patent fight

A system and method causes a computer to detect and perform actions on structures identified in computer data. The system provides an analyzer server, an application program interface, a user interface and an action processor. The analyzer server receives from an application running concurrently data having recognizable structures, uses a pattern analysis unit, such as a parser or fast string search function, to detect structures in the data, and links relevant actions to the detected structures. The application program interface communicates with the application running concurrently, and transmits relevant information to the user interface. Thus, the user interface can present and enable selection of the detected structures, and upon selection of a detected structure, present the linked candidate actions. Upon selection of an action, the action processor performs the action on the detected structure.
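
For readers who find the patent language opaque, a minimal sketch of the scheme the abstract describes (an illustration only, not Apple's or Samsung's code): a pattern analyzer detects structures in plain text, links candidate actions to each kind of structure, and an action processor carries out the selected action.

    # A minimal sketch of the scheme the abstract describes (an illustration, not
    # Apple's or Samsung's implementation): a pattern analyzer detects structures
    # in plain text, links candidate actions to each kind of structure, and an
    # "action processor" carries out the chosen action.
    import re

    PATTERNS = {
        "phone": re.compile(r"\+?\d[\d\- ]{6,}\d"),
        "url":   re.compile(r"https?://\S+"),
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    }
    ACTIONS = {
        "phone": ["call", "add to contacts"],
        "url":   ["open in browser", "bookmark"],
        "email": ["compose message", "add to contacts"],
    }

    def detect_structures(text):
        """Return (kind, matched_text, candidate_actions) for each structure found."""
        found = []
        for kind, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                found.append((kind, match.group(), ACTIONS[kind]))
        return found

    def perform_action(action, structure):
        """Stand-in for the action processor: just report what would be done."""
        print("performing '%s' on '%s'" % (action, structure))

    note = "Call 555-123-4567 or see http://example.com for details."
    for kind, structure, actions in detect_structures(note):
        # A user interface would present these candidates for selection;
        # here we simply take the first one.
        print(kind, "->", structure, "candidates:", actions)
        perform_action(actions[0], structure)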