Friday, October 14, 2022

BEST OF THE BEST

The Pioneers Of I.T. And Computing


As self-proclaimed technophiles, we here at OMNITEKK believe it is our duty to give reverence to some of the founding fathers of computing, as they have helped pioneer some of the most prolific technological phenomena we know of.


And without further ado: THE BEST OF THE BEST PIONEERS OF COMPUTING AND TECHNOLOGY


Samuel Finley Breese Morse (1791-1872)

Our dearly beloved Samuel Morse developed the foundational tenets of coded language, namely Morse code, the precursor to the telegraph and to the internal signaling hardware of modern-day machines.


Morse code utilized combinations of dots and dashes to represent letters and numerals, a scheme that would later find its echo in the computer world as a series of binary codes.
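
To make the encoding concrete, here's a toy sketch in Python (our own illustration, not part of the original post): a handful of letters mapped to their dot-dash groups, the same symbol-for-letter substitution that binary encoding would later formalize.

# A toy Morse encoder - dots and dashes standing in for letters.
MORSE = {
    "S": "...", "O": "---", "M": "--", "R": ".-.", "E": ".",
}

def to_morse(word: str) -> str:
    # Translate a word into dot-dash groups separated by spaces.
    return " ".join(MORSE[ch] for ch in word.upper())

print(to_morse("morse"))  # -- --- .-. ... .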


Alan M. Turing (1912-1954)

Turing, who pioneered the concepts of computation and computability, is best known as the man who cracked the German "ENIGMA" cipher machines during World War II.


Alan is also known for writing two influential papers on what computers can and can't do, as well as for designing the abstract model that has been coined the Turing Machine.


Norbert Wiener (1894-1964)

Wiener, a Harvard-trained mathematician, coined the term Cybernetics, defined in his 1948 book as the study of control and communication in the animal and the machine.


This concept laid the foundation for describing the relationship between the biological processes of humans and animals and the theoretical mechanics of modern-day computers, robots, and cobots.


John Bardeen (1908-1991) and Walter Brattain (1902-1987)

This dynamic duo is best known for the construction of the transistor - an amplifier manufactured from slabs of germanium, the semiconductor material - which allows an electrical signal to be strengthened across communication channels.


This revolutionary invention, often called the most important of the 20th century, took us from the relays of old to the transistors of new, which in today's microchips and graphics processing units easily number in the millions, or even billions.


Jack Kilby (1923-2005) and Robert Noyce (1927-1990)

Jack and Robert are famously revered as the co-inventors of the integrated circuit, commonly called the chip (yeah folks, OMNITEKK means the microchip).


The two major classes of microchips in use today are Complementary Metal-Oxide-Semiconductor (CMOS, pronounced "sea moss") and Transistor-Transistor Logic (TTL, pronounced "tee tee ell") integrated circuits.


These folks are certainly worthy of note, and we especially hope our introduction to them has spawned further interest and exploration.


And until our next I.T. adventure my friends, OMNITEKK says be well.





Saturday, October 8, 2022

Oh No...The Big O

Developing software is the means by which we solve some of our most pressing real-world problems, through pathways of automated solutions.


The goal, then, of the industrious developer worthy of such a task isn't simply to automate solutions, but to express and quantify the culmination of their inner workings as concisely and to the point as possible.


And here, my friends, in the hope of quantitative aim and pristine precision, lie the algorithm-analysis techniques making their famed debut, helping us understand the behavior of an algorithmic process as its functional data inputs either grow or shrink, contingent on its implementation.


Further, since our most prized innovators and makers of all the world's fancy have leanings toward nomenclature mania, the glorious world of computer science has coined such a feat BIG O ANALYSIS - the quantification of an algorithm's completion time relative to the processing of its functional inputs, pertinent to all algorithm design trends.


Simply put, in the grand scheme of things, Big O analysis is our best means yet of forecasting an approximation of an algorithm's runtime, allowing the seasoned and savvy developer to determine, from all possible algorithmic solutions, the one best suited for a particular task.
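
As a minimal illustration (our own example, not from the original post), consider two correct Python solutions to the same task - membership in a sorted list - with very different Big O runtimes:

import bisect

def linear_search(xs, x):
    # O(n): inspects items one by one until a match is found
    return any(v == x for v in xs)

def binary_search(xs, x):
    # O(log n): halves the search range at every step
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

xs = list(range(0, 1_000_000, 2))   # sorted even numbers
print(linear_search(xs, 999_998))   # True, after roughly 500,000 checks
print(binary_search(xs, 999_998))   # True, after roughly 20 checks

Both answers are right - Big O is what tells us the second will stay fast as the list grows.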


This process requires that we know not only the preliminary and consequential steps involved in an algorithm's design, but also the efficiency implications of the resources consumed in its execution - birthing what's commonly known as an algorithm's rate of growth.


Rates of growth help us determine whether the problem size of a number of functional inputs grows or shrinks - doubling or being reduced linearly, quadratically, or even exponentially - as the application runs successively; in other words, they help us determine the runtime bound curves within which the program runs.
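
Here's a quick sketch of what a rate of growth looks like in practice (our own toy model): the operation counts of a linear pass versus a quadratic pair-wise pass as the input size n doubles.

def linear_ops(n: int) -> int:
    return n        # one visit per element: O(n)

def quadratic_ops(n: int) -> int:
    return n * n    # every element against every other: O(n^2)

for n in (1_000, 2_000, 4_000):
    print(n, linear_ops(n), quadratic_ops(n))

Doubling n doubles the linear count but quadruples the quadratic one - exactly the bound curves Big O is built to describe.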


And since Big O runtimes signify mathematical worst-case runtime scenarios, they can be derived in mathematical notation as a function of input size n.


For example, we might say that 

BIG O => 0 <= f(n) <= c*g(n), for all n >= n0


Symbolizing that the O (Big O) upper bound, or worst-case scenario, c*g(n) grows at least as fast as the function f(n) itself, with the initial run count n0 greater than 0...for wouldn't it prove a challenge to run an application with 0 inputs, or even 0 times?... OMNITEKK affirms such a quandary.


Likewise, we can't always consider or prepare for worst-case scenarios now, can we?...


Not to worry, friends - our mathematical symbologists have devised a few other quantifications to express runtime bounds as well: namely OMEGA, signifying the best case or lower bound of an algorithm's growth rate, and THETA, denoting an average, or tight, rate of growth - a convergence somewhere between Big O and OMEGA - which may be asymptotically written as -


OMEGA Ω => 0 <= c*g(n) <= f(n), for all n >= n0

THETA θ => c1*g(n) <= f(n) <= c2*g(n), for all n >= n0


We say that THETA is both OMEGA and BIG O, since its runtime rate of growth sits within a set of upper-bound and lower-bound runtime values.


For instance, if O(f) = O(n²) and Ω(f) = Ω(n²), then θ(f) = n², since the upper and lower bounds converge on the same rate of growth - and matching bounds give us the greatest precision in evaluating runtime values.


As such, we prove an upper-bound, average-bound, or lower-bound growth rate from a variation of possibilities by finding at least one initial input value n0 and one constant c for which, for all subsequent values of n greater than or equal to those initial values, the upper-bound, average, or lower-bound scenario holds true.

 

We might then prove that f(n) = O(n) using the following example


f(n) = 100n + 5 = O(n)


by simply applying the asymptotic formula


0 <= f(n) <= c*g(n), with n0 = 1 and c = 105

=> 100n + 5 <= 100n + 5n = 105n = 105*g(n),

for all n >= 1.
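
For the skeptics among us, here's a hypothetical spot-check in Python (the helper and its name are our own invention) confirming that c = 105 and n0 = 1 really do witness the bound, at least over a finite range:

def witnesses_big_o(f, g, c, n0, n_max=10_000):
    # Empirically test 0 <= f(n) <= c*g(n) for n0 <= n < n_max.
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: 100 * n + 5    # the function from the proof above
g = lambda n: n              # the claimed O(n) bound
print(witnesses_big_o(f, g, c=105, n0=1))  # True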


And with that, we are equipped to determine the bounds of our application runtimes.


Similar methods of deduction can be used to find both OMEGA and THETA as well.


Likewise, we tend to prove average bounds by proving both upper and lower bounds within an application, then letting the worst case stand in for the average-case growth, since both the average and best possible bounds are encapsulated within it - and we are usually concerned with the worst growth rates anyway...with the exception of amortization, or amortized growth rates, which prove asymptotic bounds over clusters of program functionality rather than analyzing each operation disjointly.
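
To ground that amortization aside, here's a minimal sketch (our own model, not a real list implementation) of the classic example: appends to a doubling dynamic array. A single append occasionally costs O(n) when the array must be copied, yet the total cost over n appends stays O(n) - that is, O(1) amortized per append.

def total_append_cost(n: int) -> int:
    # Count unit operations for n appends to a doubling dynamic array.
    cost, capacity, size = 0, 1, 0
    for _ in range(n):
        if size == capacity:   # array full: copy everything into a doubled one
            cost += size
            capacity *= 2
        cost += 1              # the append itself
        size += 1
    return cost

for n in (1_000, 1_000_000):
    print(n, total_append_cost(n) / n)   # hovers near a small constant

Analyzed disjointly, the occasional resize looks expensive; analyzed as a cluster, the average cost per append is constant.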


It should be noted that such an expedition is a rarity, so aside from select specialty applications, we usually deal only with BIG O runtimes when analyzing algorithm bounds.


And there you have it folks - OMNITEKK's rendition of Oh No, The Big O.


We hope you've enjoyed it, and until our next I.T. adventure my friends, OMNITEKK says be well.


