Touchpad HP (Photo credit: Wikipedia)

Touchscreen and VT[5]

the whole idea of an OS is somehow being questioned nowadays.
lots of new devices appear, each with a full-fledged processor and storage.
however, their input/output and usage patterns differ greatly.
therefore an OS must be much more flexible than ever before.
so far operating systems have been programmed in the style of OOP.
sadly, this approach lacks the needed flexibility.

take mouse input, for example:
once upon a time the mouse driver had only 2 devices to care about:
the actual mouse with all of its buttons and wheels, and the trackball with the same gizmos.
then someone invented capacitive touchscreens and touchpads, which allow for multi-touch.
now even a camera can simulate mouse input.
abstractly seen, a capacitive touchpad is not an object of type “mouse”, not even a vector of “mice”.
but it has similarities, which hopefully translate into code-reuse for the OS.
technically seen, a capacitive touchscreen or touchpad just measures the minimal distance from each of its sensors.
from the pov of the user, those touch devices are much less precise than a mouse.
used with a finger, the touched surface is quite large.
contrary to a mouse, this yields several coordinates which won’t match.
to summarize: a touchscreen is a dimension-changing vector of mice without buttons.
i.e. take away the buttons from the mouse, and take away the fixed size from the vector.
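to make that summary concrete, here is a minimal sketch (in python, all names made up for illustration) of such a data structure: a variable-length frame of button-less contact points instead of a fixed mouse object.

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    # one measured touch point: no buttons, just a coordinate and a contact size
    x: float
    y: float
    radius: float  # rough size of the touched surface

@dataclass
class TouchFrame:
    # the "dimension-changing vector of mice without buttons":
    # the number of contacts varies from one frame to the next
    contacts: list = field(default_factory=list)

# two fingers in one frame, one finger in the next: the vector changes size
frame_a = TouchFrame([Contact(10.0, 20.0, 4.0), Contact(50.0, 60.0, 5.0)])
frame_b = TouchFrame([Contact(12.0, 22.0, 4.0)])
```

the point is that no fixed-size “mouse” struct can hold this; the container itself must grow and shrink per frame.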
but that’s only the hardware aspect of this thing! this is what a user interface needs to access.
the user interface must translate those appearing and disappearing coordinates into a UI action.
a UI usually is full of buttons and other widgets which react to such a device.
a program wishes to learn whether the user wants to know more about a widget, wants to configure it, or wants to use it.
with a mouse, additional info usually is obtained by leaving the cursor motionless over the widget.
using the widget is accomplished by clicking, and options are available by right-click.
maybe in the future this additional-info request could be signalled by a camera observing eye movement.
either way, one just can’t generally claim that any of those 3 actions maps 1:1 to other devices.
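one way to picture this is to name the 3 intents explicitly and give each device its own translation table; the tables below are hypothetical, the point is only that every device needs its own mapping and some mappings simply don’t exist.

```python
from enum import Enum, auto

class Intent(Enum):
    INSPECT = auto()    # "tell me more"  (mouse: hover motionless)
    CONFIGURE = auto()  # open options    (mouse: right-click)
    USE = auto()        # activate        (mouse: left-click)

# per-device gesture tables; there is no universal 1:1 rule between them
MOUSE_GESTURES = {
    "hover": Intent.INSPECT,
    "right-click": Intent.CONFIGURE,
    "left-click": Intent.USE,
}
TOUCH_GESTURES = {
    "long-press": Intent.INSPECT,
    "two-finger-tap": Intent.CONFIGURE,
    "tap": Intent.USE,
}

def translate(device_table, gesture):
    # returns None when the device has no gesture for that action at all
    return device_table.get(gesture)
```

note that “tap” means nothing to the mouse table and “hover” nothing to the touch table: the intent layer is shared, the gestures are not.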

for a mouse, hovering motionless over an object is unlikely to happen by accident during activity.
similarly, buttons won’t get pressed on their own.
a touchpad could simulate a mouse press by double-touching, i.e. tapping.
but this will also happen randomly on its own, even if with low likelihood.
press-and-stay-motionless is easy to do with a mouse.
with a touchpad, keeping a finger motionless is quite difficult to do quickly.
all in all, work with the touchpad is slowed down.
dividing the touchpad into multiple zones is more useful, for example when one regularly needs to zoom into pictures.
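such a zoning could be as simple as the following sketch (the split ratio and zone names are invented for illustration):

```python
def zone(x, pad_width):
    # hypothetical split: the rightmost strip of the pad acts as a
    # zoom slider, the rest of the pad moves the cursor as usual
    if x > 0.85 * pad_width:
        return "zoom"
    return "cursor"
```

a touch in the right strip then never gets forwarded as cursor movement at all.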
a touchscreen is even more challenging:
intuitively we expect that as long as our finger partly covers a button, it should be pressed.
so even a touch isn’t supposed to translate into an emulated mouse click.
instead the coordinates of the click should be adjusted to the way widgets respond to it.
i.e. a plain-text widget won’t respond, so the click must go to the button instead.
fingers are so big that quite frequently 2 or more widgets will be partly covered by a touch.
unless the buttons are as huge as on the iPad, but that’s a waste of space.
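the adjustment described above can be sketched as a hit-test that treats the finger as a disc and skips covered widgets which don’t respond to clicks (all names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    x: int
    y: int
    w: int
    h: int
    responds_to_click: bool

def covers(widget, fx, fy, fr):
    # circle-vs-rectangle overlap: clamp the finger centre into the
    # widget's rectangle and compare the distance with the finger radius
    cx = max(widget.x, min(fx, widget.x + widget.w))
    cy = max(widget.y, min(fy, widget.y + widget.h))
    return (fx - cx) ** 2 + (fy - cy) ** 2 <= fr ** 2

def resolve_touch(widgets, fx, fy, fr):
    # route the touch to a covered widget that actually responds,
    # skipping e.g. plain-text widgets lying under the finger
    for w in widgets:
        if covers(w, fx, fy, fr) and w.responds_to_click:
            return w
    return None

label = Widget("label", 0, 0, 40, 20, False)
button = Widget("ok-button", 38, 0, 30, 20, True)
hit = resolve_touch([label, button], 39, 10, 8)  # the finger covers both
```

the finger lands mostly on the label, yet the click goes to the button, because the label declared itself unresponsive.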

each input device needs its very own input interface, and a user-space program isn’t supposed to implement that itself!
so a good interface is needed which translates user intent into program activity.
re-using the old mouse interface and exposing raw mouse coordinates and buttons is the wrong way.
basically there is this data structure with raw input data, but its behaviour should differ depending on context.
this idea isn’t new: aspect-oriented programming is all about changing contexts.
what’s new is that now the OS is affected too, at least the libs which make UI programming easier.
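a tiny sketch of “same raw data, context-dependent behaviour” (contexts and names are invented for illustration, this is not aspect-oriented machinery, just the underlying idea):

```python
class RawPointerData:
    # the one data structure holding raw input, with no behaviour of its own
    def __init__(self, points):
        self.points = points  # raw coordinates straight from the device

def interpret_on_desktop(data):
    # a desktop context wants exactly one precise cursor position
    return ("cursor", data.points[0])

def interpret_in_gallery(data):
    # an image-gallery context treats two simultaneous points as a pinch-zoom
    if len(data.points) == 2:
        return ("zoom", data.points)
    return ("cursor", data.points[0])

raw = RawPointerData([(10, 10), (90, 90)])
```

the same `raw` object means “move the cursor” in one context and “zoom” in another; the data structure never changes, only the behaviour woven around it.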

in addition to flexibility, an OS needs to be standardized in its drivers: a standard for each hardware type.
there are lots of various sensors around, and some sensors are combinations of others.
in terms of hardware it seems quite impossible to find common ground for standards.
most prominently there are nvidia and ati, each with its own assembly language for their devices.
again there is the problem of being unable to map one piece of hardware to the other.
this results in bad code-reuse in user-space programs.
but if there were a standard for the things achievable with both, a common ground, this would be less of a problem.
mind you, there still would be things achievable by only one of the two, and programs would want to access those too.
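this “common ground plus vendor extras” idea can be sketched as capability sets; the feature names below are invented, the real feature lists of either vendor look nothing like this:

```python
# hypothetical capability sets for two GPU drivers; the "standard" is
# their intersection, vendor-specific extras stay reachable by name
nvidia_caps = {"draw", "compute", "cuda-intrinsics"}
ati_caps = {"draw", "compute", "wavefront-ops"}

common_ground = nvidia_caps & ati_caps

def use_feature(caps, feature, fallback="unsupported"):
    # a program first targets the common ground, but may still probe
    # for a vendor-only extra and degrade gracefully otherwise
    return feature if feature in caps else fallback
```

a portable program sticks to `common_ground`; an ambitious one probes for the extras and keeps a fallback path.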

another example: filesystems.
it is quite standard in linux to have meta-info (like ACLs) about files stored in the filesystem.
but most filesystems don’t support this feature natively.
some do (for example NTFS), but no support is built into the driver of your favourite OS.
shouldn’t those drivers be forced to support such standards?
shouldn’t at least some other means of achieving this capability be offered?
eventually the capability becomes natively supported, be it through an updated driver or through the user copying the files to another system.
so a conversion program must be run automatically whenever native support is detected.
similarly, during copying or archiving the fallback method must be used.
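the fallback-plus-conversion scheme could look like this sketch, where a sidecar store stands in for whatever out-of-band database the driver would really use (all names hypothetical):

```python
def store_acl(path, acl, fs_supports_acl, native_store, sidecar_store):
    # store the ACL natively when the filesystem supports it,
    # otherwise fall back to a sidecar store next to the filesystem
    if fs_supports_acl:
        native_store[path] = acl
    else:
        sidecar_store[path] = acl

def convert_to_native(native_store, sidecar_store):
    # the conversion program: once native support is detected,
    # migrate every fallback entry over and drop the sidecar copy
    for path, acl in list(sidecar_store.items()):
        native_store[path] = acl
        del sidecar_store[path]

native, sidecar = {}, {}
store_acl("/data/report", {"owner": "rw"}, False, native, sidecar)
# later: the file lands on a filesystem with native ACL support
convert_to_native(native, sidecar)
```

copying or archiving would run the same logic in reverse: serialize the native ACLs into the sidecar form before writing to a filesystem that lacks them.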

at least on windows, drivers are written by professional programmers, each with their own ideas.
this way additional programs get bundled which extend the driver’s functionality.
people then buy hardware from their favourite company, because such programs only work with that hardware.
this just needs one minor change to be fit for the future:
users must collect all the features offered by such programs and rate them.
if this is done more or less centrally, then companies have a cheat-sheet for what they must offer.
that’s how standards get born in reality, and that’s how I imagine the future’s OS to be standardized.
this kind of standardization however requires a more flexible notion of a library.
the goal is that if a driver doesn’t fulfill a “standard”, others might write the missing parts.

for example, a mouse driver might need to take into account that the user might have a 2nd mouse.
the user might want to choose which one to use, or maybe both at the same time; energy conservation must be handled too.
nowadays such additions are performed by wrapping that driver inside another one.
that’s not a good idea in terms of speed and stability, and in some cases it might not be a workable solution at all.

another indication that the concept of libraries must be re-designed can be seen with multicore processors.
suppose the computer has 8 cores and the user has 6 programs which require cpu cycles. what about the other 2 cores?
suppose one of those programs is decoding music or video: definitely an activity that could make use of them.
on the other hand, using one single processor is more efficient, provided the other processors are busy.
in a current OS this means there must be 2 programs, one for single- and one for multi-processor usage.
however, most parts of those two programs would be identical, just spread in shards too small to put into functions.
maybe this problem also requires some changes in the processors, but libs definitely must become self-altering.
the lib must carry info on where changes can be made, so another lib can perform them.
programming therefore must capture more of the programmer’s intentions.
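today the best one can do is isolate the shared logic and keep the single-/multi-core split down to a dispatch, as in this sketch (a decoder is faked by an arbitrary per-chunk function):

```python
from concurrent.futures import ThreadPoolExecutor

def decode_serial(chunks, decode_one):
    # the single-processor variant: one plain loop
    return [decode_one(c) for c in chunks]

def decode_parallel(chunks, decode_one, workers):
    # the multi-processor variant: same per-chunk work, fanned out
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_one, chunks))

def decode(chunks, decode_one, idle_cores):
    # the shared logic (decode_one and the chunking) is identical in
    # both variants; only this tiny dispatch differs -- yet shipping
    # two whole programs for it is what current systems often do
    if idle_cores > 1:
        return decode_parallel(chunks, decode_one, idle_cores)
    return decode_serial(chunks, decode_one)
```

a self-altering lib would go further: instead of a runtime branch, the parallel variant would be patched in only when idle cores exist.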

for multiprocessors this means that commands aren’t in a strict sequence.
some commands can be executed in parallel, and most of the time the compiler will know which.
this isn’t enough info though: sometimes a sequence of commands has a goal achievable by another sequence too.
for this the commands must somehow be uniquely identified, so future algorithms can kick in.

code-injection is the goal: the user should just install a new library, and existing libs will be altered.
let’s say stdlibc++ provides the default behaviour, and new libs optimize it.
the executable isn’t linked against a lib; it’s linked against an interface, a whole directory of libs and links to libs.
the driver too must then be linked against a directory of libs and fill the interface there with functions.
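a minimal sketch of “linking against an interface” rather than a lib: a table of named function slots which a default lib fills first and a later-installed lib overrides (slot and function names are invented):

```python
# the interface: a directory of function slots, not a fixed library
interface = {}

def provide(slot):
    # registering a function fills (or re-fills) a slot in the interface
    def register(fn):
        interface[slot] = fn
        return fn
    return register

@provide("blit")
def default_blit(data):
    return "slow-blit:" + data   # the default behaviour

@provide("blit")
def fast_blit(data):
    return "fast-blit:" + data   # the newly installed optimizing lib

# the executable only ever calls through the interface, never a fixed lib
result = interface["blit"]("frame")
```

installing the optimizing lib altered the behaviour of every caller without relinking anything; that is the code-injection effect, scaled down to a dictionary.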

at least that’s my vision of how a future OS might look.
if you have any ideas, please write a comment.
this article however is not about Desktop Environments per se.
but if you code a DE, keep in mind I warned you: it might become incompatible…