Monday, January 9, 2023

BEST OF THE BEST

Codes have always been deeply interwoven into the very fabric of our human existence. 


We bear witness to them in our daily lives - as different dialects and languages, as the codes we use to automate the world around us, and even as the codes we use to keep others from being privy to sensitive information.


For as long as there has been the need to communicate, codes have been the mechanism used to do so. 


So without further ado, OMNITEKK presents our Best Of The Best edition in CODES...

... Happy Coding.


CAESAR SHIFT -  Invented by Julius Caesar himself as a means of communication during periods of war and possible threat.


The Caesar Shift was introduced as a means of passing along confidential messages without the risk of them being decoded by interceptors.


It worked by shifting each letter of the alphabet a set number of paces, either left or right, with that shift amount serving as the key to both encoding and decoding.
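To make the mechanics concrete, here's a minimal sketch of a Caesar-style shift in C - the three-place key and the sample message are our own illustrative choices, not historical artifacts.

```c
#include <stdio.h>
#include <ctype.h>

/* Slide each letter 'key' places to the right around the 26-letter alphabet;
   shifting again by 26 - key reverses the operation. */
static void caesar_shift(char *text, int key) {
    for (int i = 0; text[i] != '\0'; i++) {
        if (isupper((unsigned char)text[i]))
            text[i] = 'A' + (text[i] - 'A' + key) % 26;
        else if (islower((unsigned char)text[i]))
            text[i] = 'a' + (text[i] - 'a' + key) % 26;
    }
}

int main(void) {
    char message[] = "ATTACK AT DAWN";
    caesar_shift(message, 3);             /* encode with a key of 3   */
    printf("encoded: %s\n", message);     /* -> DWWDFN DW GDZQ        */
    caesar_shift(message, 26 - 3);        /* decode by shifting back  */
    printf("decoded: %s\n", message);
    return 0;
}
```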


HIEROGLYPHIC CODES -  Are visual depictions carved into stone, papyrus and other writing materials by ancient civilizations such as the Mayans and Egyptians, as a means of recording historical events within their cultures. 


Over the years, an array of archaeologists and scientists alike have spent many a day in quest of decoding the meaning of hieroglyphic art.


MORSE CODE - Was used by our nation's Military Intelligence during the Civil War, as a means of communicating in secret with allied troops. 


The series of dashes and dots that make up the "language" allowed troops to pass along sensitive information while minimizing the threat of enemy decoding.


PUBLIC KEY CRYPTOGRAPHY - Keeping sensitive information safe from prying eyes is at the base of Public Key Cryptography. Methods such as RSA encryption use a public key to encode messages and a matching private key to decode them, keeping sensitive information safe in transit.
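For a feel of how the two keys pair up, here's a minimal RSA-style sketch in C using tiny textbook numbers (p = 61, q = 53, so n = 3233). Real systems use keys hundreds of digits long and a vetted crypto library, so treat this purely as an illustration.

```c
#include <stdio.h>

/* Square-and-multiply modular exponentiation: (base^exp) mod m. */
static unsigned long mod_pow(unsigned long base, unsigned long exp, unsigned long m) {
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    /* Toy key pair: n = 61*53 = 3233, public exponent e = 17,
       private exponent d = 2753 (since 17 * 2753 = 1 mod 3120). */
    unsigned long n = 3233, e = 17, d = 2753;
    unsigned long message = 65;

    unsigned long ciphertext = mod_pow(message, e, n);    /* encode with the public key  */
    unsigned long recovered  = mod_pow(ciphertext, d, n); /* decode with the private key */

    printf("message    : %lu\n", message);
    printf("ciphertext : %lu\n", ciphertext);
    printf("recovered  : %lu\n", recovered);
    return 0;
}
```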


LEONARDO DA VINCI'S MONA LISA - This famous painting is said to contain an array of discreet numbers and letters that once served as a communication channel to Da Vinci's allies, who could decode them to uncover sensitive information. 


And while our folks certainly haven't verified the veracity of such a claim, there's no denying that the Mona Lisa is a true work of art indeed.


We sure hope you've enjoyed our journey down CODES lane, and until our next I.T. adventure my friends, OMNITEKK says be well.





Saturday, December 24, 2022

FREE COURSES IN MACHINE LEARNING

In our aim of sharing our delights with you all, and continuing with our theme of Machine Learning and the Artificial Intelligence Revolution, OMNITEKK is enthused to present our fellow technophiles with free courses on the foundations of Machine Learning. 


And for all interested, we sure hope you enjoy the journey.


Until our next I.T. adventure my friends, OMNITEKK says be well.


Machine Learning Regression And Classification

https://www.coursera.org/learn/machine-learning


Introduction To Machine Learning

https://www.classcentral.com/course/youtube-introduction-to-machine-learning-dmitry-kobak-2020-21-46773


Statistical Machine Learning

https://www.classcentral.com/course/youtube-statistical-machine-learning-ulrike-von-luxburg-2020-46771


Mathematics For Machine Learning

https://www.classcentral.com/course/youtube-mathematics-for-machine-learning-ulrike-von-luxburg-2020-21-46772






Saturday, December 17, 2022

DATA SCIENCE

The means by which we categorize and classify data streams of structural inputs is at the heart of the Data Science revolution.


As such, Data Science and its explorations help us gain critical insights into statistical attributes across the social sciences and human behavioral trends, support pattern-based archaeological inferences and deductions within the animal world, and help us discover breakthrough advances in healthcare and the pharmaceutical industry in the effort to support human longevity.


Likewise, there are a few key trends in the field that help data scientists working with larger data sets organize and make efficient use of the information gathered, so as to advance the field and apply the proficiencies gained within structured data models to better understand the evolutionary progressions of our species and our world.

These Trends Include -


WEB SCRAPING AND DATA MINING 

Web scraping and data mining gather raw information in an effort to develop statistical trends and deep correlations between seemingly disjointed pieces of information. 


If we seek to deduce an assertion from within a particular attribute of interest, the larger the data set or volume of information we collect, the greater the chance of doing so becomes. 


Data mining and data collection are essential areas of data science in that they are the foundational source of information for digital data statistical deduction.


DATA ANALYSIS 

We come to know things based on the means by which we classify and define them.


Data analysis helps us accomplish this through a series of hypothetical assertions and tests that allow scientists, mathematicians, programmers, statisticians and even philosophers alike to determine the critical or prevalent attributes of a thing, so as to arrive at a specific categorization and determination surrounding it. 


Data analysis includes gathering pertinent key information from a collection of data sources - whether clustered, structured or unstructured - to be used in such a determination.


INFERENTIAL INDUCTION AND DEDUCTION

Most of the information we collect is usually geared toward confirming an inferred hypothesis or deducing its refutation. 


The process of drawing correlations between data stores allows such inferences to be either proved or disproved, as we gain a better understanding of the trends that affirm or refute our observations - the foundational learning curves in discovering how certain attributes connect with critical theories, or fail to.


MACHINE LEARNING 

Once critical analytical derivations have commenced, the classification of a regression or progression model for a specific data sample either serves as a dimensional attribute of an existing data model, or as the foundational attribute of the deep learning required to successfully classify a new one through clustering.


In essence, machine learning is the process of creating, adjoining or reclassifying data such that definitive categorizations and determinations of recognition can be made upon future interactions with it, based on inferential statistical and mathematical calculations of correlation or contrast.


DATA VISUALIZATION 

A key skill in the world of Data Science lies in being able to convey technical or otherwise complex structural data concepts in easy-to-understand ways. 


This my friends, is where data visualization models come in to play.


Data visualization is the process of pictorially showcasing statistical findings, so as to convey their significance, or lack thereof, pertaining to a specific concept, field, or area of study. 


Graphs, Data Decision Trees, Charts and even Videographic Data Displays are all frequently used data visualization tools.


ACCUMULATIVE INSIGHTS OF ATTRIBUTIONAL EVOLUTION 

Once each of the stages in the data science process commences, the way attributional variations evolve across the classification spectrum should be continually measured and quantified, to record classification progression.


Here variances are traced, analyzed, and re-categorized where necessary, to maintain integrity in categorical attributional accuracy.


It's no secret that our world has become a vast data-driven machine, where the answers to some of our most pressing phenomena might be enclosed within the deep recesses of digital data structures.


As such, questions once considered difficult and cumbersome to gauge with any certainty now prove far more tractable, thanks to tools such as machine learners and broad access to usable structured and unstructured data, which help us advance in understanding our world and the esoteric wonders within it...

...all thanks to improved Data Science measures.


We sure hope you've enjoyed our walk down Data Science Lane, and until our next I.T. adventure my friends, OMNITEKK says be well.


Saturday, December 10, 2022

CYBERCRIME

The INTERNET OF THINGS has made way for e-commerce, global connectedness and productivity to increase demonstrably, as collaborative and convenience solutions are right at our fingertips.


Likewise, the evolution of such avenues has also allowed CYBER CRIMINALS to utilize these platforms in an effort to carry out harmful aims against unsuspecting machine users, spawning the exponential emergence of crime prevention measures.


CYBERCRIME includes any and all acts to steal from, deceive, and terrorize both personal and organizational machine users alike, in an effort to cause significant harm...

...to include -


Spoofing identifying machine information to garner trust and gain unauthorized access to someone's machine and machine resources.


Stealing personal information to use illicitly, such as banking information, passports and other identifying information to commit cyber crimes.


Using Spyware to acquire the 'Digital Fingerprint' of machine users for the purposes of illegally tracking internet activity.


Cyber Espionage and extortion - such as brute force attacks that deny machine owners access to their own resources in order to ransom monies or gain other leverage.


Cyber Terrorism, which harasses machine owners by breaking valuable resources in an effort to force compliance, or to prevent productivity and proficient resource utilization, through acts of cyber violence.


As the means by which our world connects continue to expand, so too do the methods criminals have at their disposal to engage in stealthy illegal activities.


Further, the array of faceless and nameless cyber criminals hiding behind machines while using them to steal from, terrorize, and spy on unsuspecting targets find it far easier to conceal their activities from the law, as internet traffic and ports of origin and destination aren't as easily traced as we might believe, making it an arduous task for cyber crime prevention efforts to successfully identify criminal origins.


As such, one of the best things we can do as machine users is to seek the help of cybersecurity professionals as soon as a resource compromise is first recognized.


Likewise, OMNITEKK suggests keeping a dossier of any and all anomalous circumstances so as to help bring awareness of trends in what you may be facing.


And lastly, it is central to note that even the craftiest cyber criminals eventually get caught.


So be sure to continue your due diligence in helping maintain a safe and secure computing experience for us all.


And until our next I.T. adventure my friends, OMNITEKK says be well.












 

 

Saturday, December 3, 2022

LEARNING MACHINES

Computer programs, once sequenced as the functional output of human directives manifesting lightning-speed result-sets, have shape-shifted in their nature, as advanced machine processing methods encompass the technological ebbs and flows of machine learning.


The machines of new, given the desired result-set of a given function, now have the "intelligence" to design the relative data inputs themselves to satisfy it.


ARTIFICIAL INTELLIGENCE and MACHINE LEARNING have made way not only for immense productivity in our workflows, but have likewise spawned a new era of computing, with the machine itself now able to transform its own instruction sets in successive refinements of data mining models, such that our means of organizing and relating collections of seemingly disjointed information might be used in meaningful ways as we bear witness to the 5th generation of digital networking.


As such, OMNITEKK found a few phenomenal videos to help you understand the key concepts within computational autonomy and the world of MACHINE LEARNING.


Enjoy!


And until our next I.T. adventure my friends, OMNITEKK says be well.










Saturday, November 26, 2022

RISE OF THE MACHINE

It's no secret that within just a short span, the boom of brilliance on display from some of our most prized developers has allowed us to witness leaps and bounds within the TECH INDUSTRY.

 

Likewise, successive automations, from the simple to those of immense intricacy and complexity, have ushered in a new era of engineering ambitions that far surpass what most critics deemed possible years ago.


And as we continue to improve upon these themes, there are certainly a few questions to be asked - such as how they might affect the future of human employability in the field.


This rings especially true as autonomous developments such as NEURAL NETWORKS and advanced DATA MINING trends take shape, spawning a surge of machine automata and refactoring methods that now make it possible to sidestep developer efforts in favor of self-modifiable code, instead of requiring human programming within key production processes.


If designs such as the GOOGLE WORLD BRAIN and other ARTIFICIAL INTELLIGENCE ambitions can enhance productivity to the point that the need for human-machine interaction declines, then the developer's, programmer's and software architect's future necessity within the field also declines, with the very real threat of programmer obsolescence looming.


The emergence of collective data mining practices allows data sharing across more connected pathways through an increase in collaborative interfaces, but it also poses the very real threat of phasing out human efforts as well.


We are now witnessing the residual effects of such industry trends, as the I.T. giants TWITTER, GOOGLE, and MICROSOFT have begun widespread layoffs within their families of workers, with our predictions forecasting other tech giants to follow in the near future.


Likewise, as aspiring S.T.E.M. FIELD contributors continue to grow in both education and industry ambition, OMNITEKK suggests the following exploration areas to increase your chances of employability within the market.


DATA SECURITY

Security breaches of sensitive data, whether by malicious intent or harmless accident, pose significant threats to privacy, so defenses must constantly be maintained and supported such that the viability of tech implementations, along with the safety and security of sensitive data, maintain their integrity.


Hence, there shall always be a need for securing information and maintaining data obfuscation measures within the industry.


HARDWARE DESIGN AND MAINTENANCE

With the emergence of self-learning machines, the need for workers who design and build the hardware, and who ensure the successful integration of both hardware and software, shall prove immense within development processes. 


Hardware engineers and hardware maintenance workers should find sustainability within the field, procuring their rightful place amongst the dwindling tech jobs on the market, as the need for software designers is on the decline.


BIG DATA

Of all the skills developers bring to the market, a thorough expertise in DATA STRUCTURES and in DATA COLLECTION and ORGANIZATION TRENDS - those that allow otherwise disconnected information to be used in meaningful applications - makes way for what we consider the big boom of employability and productivity within the market.


CREATIVE VISION AND TECH INGENUITY

The pioneer has always proven a very necessary commodity in the tech arena, and there has never been a greater time to evoke one's own creative flows, to find gaps in the field and apply ingenuity within those processes, just as the industry titans of old have done.


As such, the tech visionary is always of good use. 


SEE A NEED FILL A NEED.


INDUSTRY SPECIALIZATION

The era of the JACK OF ALL TRADES, MASTER OF NONE is long gone, and the I.T. arena now calls for the highly specialized tech connoisseur who is well versed in methods, languages and practicality of use.


And with the latest tech trends in I.T. seeming to employ the culmination of MACHINE AUTONOMY - from self-driving cars, to self-flying aircraft and even to self-coding code - the core of our relevance in the field should be BUILD, SECURE and MAINTAIN.


May these words help you on your journey to OMNITEKK GREATNESS as you master your craft.


And until our next I.T. adventure my friends, OMNITEKK says be well.






Sunday, November 20, 2022

NUMBER CONVERSIONS

A great deal of the tasks the developers of new engage in require, on some level, converting from one number base to another, as our natural counting base differs from those coveted 1's and 0's, amongst others, used in our machines to dish out the glorious codes that simplify both our workflows and the means by which we interact with and automate the world around us.


And without further ado, OMNITEKK presents our rendition of NUMBER CONVERSIONS, to help you along the way to becoming the I.T. rock stars we know you all have the proclivity to be.


BASE 10 TO BINARY

While each of us has 10 fingers and 10 toes - well, most of us anyway - giving a nod to the representation and relevance of our base 10 numbering system, the machines we use in our processing tasks do not.


Fact is, the inner workings of the human-readable inputs and outputs of machine processes we recognize prove vastly different within the machines themselves, consisting only of low and high voltage representations of data sequences - a 1 signifying a HIGH VOLTAGE or ON position of a RELAY, TRANSISTOR or SWITCH, and a 0 indicating a LOW VOLTAGE or OFF position.


These are the foundational attributes of what we come to know of as the BASE 2, or BINARY numbering system.


So instead of trekking between 10 numbers to represent the culmination of data or information, our machines accomplish almost whimsical feats, by combining and grouping, masking and converting just two numbers, namely 1's and 0's. 


It's all quite fantastical if you consider it.


Likewise, converting between BASE 10, our natural counting BASE, and BASE 2, the computer's natural counting BASE, consists of simple division and multiplication to represent clusters of binary information.


Here's how...


Let us take the BASE 10 DECIMAL number 12345, for instance. 


We simply divide the number by 2 repeatedly, keeping only the remainders, then read those remainders in reverse order as our BINARY conversion result.


So, 12345 becomes 0011 0000 0011 1001 in BINARY.




Likewise, to convert a BINARY number to DECIMAL, we simply use the inverse operation: multiply each BINARY digit by 2 raised to its position, with exponents ranging from 0 at the rightmost digit to the BINARY number's length minus 1 at the leftmost, and add up only the place values where the digit is 1 or ON to retrieve the BASE 10 equivalent.


0011 0000 0011 1001  =  (1*8192) + (1*4096) + (1*32) + (1*16) + (1*8) + (1*1)  =  12345
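Here's a small sketch in C of both directions of the hand method above, using the article's 12345 as the example value (pad the printed bits with leading zeros to see the groups of 4 shown above).

```c
#include <stdio.h>

int main(void) {
    unsigned int decimal = 12345;
    char bits[33];
    int len = 0;

    /* Decimal -> binary: divide by 2, keep the remainders, read them in reverse. */
    unsigned int n = decimal;
    do {
        bits[len++] = '0' + (n % 2);
        n /= 2;
    } while (n > 0);

    printf("%u in binary: ", decimal);
    for (int i = len - 1; i >= 0; i--)
        putchar(bits[i]);
    putchar('\n');

    /* Binary -> decimal: add 2^position for every bit that is 1 (ON). */
    unsigned int back = 0;
    for (int i = 0; i < len; i++)
        if (bits[i] == '1')
            back += 1u << i;
    printf("back to decimal: %u\n", back);
    return 0;
}
```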


BASE 8 TO BINARY

Suppose instead of having 10 fingers and toes, we instead have only 8.


In this instance, instead of counting from 0 to 9 before our number increases in degree, we only count from 0 to 7.


So counting to 20 in our new number base looks like 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 20 and so on.


Thing is, we still have to represent a BASE 2 number. 


To do so, we simply utilize the same process of multiplication and division from our initial BASE to our desired BASE.


In this instance, to retrieve the BINARY equivalent of the OCTAL value 30071, we simply process the OCTAL number digit by digit, dividing each digit's value by 2 and keeping the remainders in reverse, to retrieve our BINARY equivalent in groups of 3.


So 3  0  0  7  1 in OCTAL becomes 011 000 000 111 001 in our BINARY conversion.




Likewise, to retrieve the OCTAL equivalent from our BINARY value, we simply perform the inverse operation on our binary groups of 3, converting each group to its OCTAL equivalent.


011 = (1*2)+(1*1) = 3    000 = 0    000 = 0    111 = (1*4)+(1*2)+(1*1) = 7    001 = (1*1) = 1
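And a quick digit-by-digit sketch in C of the octal-to-binary expansion, using the article's 30071 example - each octal digit simply maps to its fixed 3-bit group.

```c
#include <stdio.h>

int main(void) {
    const char *octal = "30071";
    const char *groups[8] = { "000", "001", "010", "011",
                              "100", "101", "110", "111" };

    printf("octal %s in binary: ", octal);
    for (int i = 0; octal[i] != '\0'; i++) {
        int digit = octal[i] - '0';       /* 0..7            */
        printf("%s ", groups[digit]);     /* its 3-bit group */
    }
    putchar('\n');
    return 0;
}
```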


BASE 16 TO BINARY

Now, keeping up with convention, suppose instead of having 10 fingers and toes or even eight of them, we instead have 16.


While such a conception might seem both foreign and inconvenient, this is the measure of our hexadecimal (hex for short), or BASE 16, numbering system - introducing, in addition to the DECIMAL numbers 0 through 9, the alphabetical letters A to F, for a total of 16 counts before increasing the degree of the number.


For instance, instead of 0 1 2 3 4 5 6 7 8 9, we now have 0 1 2 3 4 5 6 7 8 9 A B C D E and F, with A having the BASE 10 value 10, B of 11, C of 12 and so on.


To convert a HEXADECIMAL number to BINARY, we simply take the number digit by digit, divide each digit's value by 2 while keeping the remainders, and express the result in groups of 4.


So the HEXADECIMAL value ABCD  becomes 1010 1011 1100 1101.





To compute the inverse, we simply convert each group of 4 BINARY digits to its HEX equivalent, multiplying each BINARY digit by 2 raised to its position from right to left, and adding the values where the digit is 1 or ON.

1010 = (1*8)+(1*2) = A    1011 = (1*8)+(1*2)+(1*1) = B    1100 = (1*8)+(1*4) = C    1101 = (1*8)+(1*4)+(1*1) = D
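And one last sketch in C covering both hex directions, using the article's ABCD example: expand each hex digit into its 4-bit group, then weight each group's bits to map back to a hex digit.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *hexdigits = "0123456789ABCDEF";
    const char *groups[16] = {
        "0000", "0001", "0010", "0011", "0100", "0101", "0110", "0111",
        "1000", "1001", "1010", "1011", "1100", "1101", "1110", "1111"
    };
    const char *hex = "ABCD";

    /* Hex -> binary: each digit expands to its fixed 4-bit group. */
    printf("hex %s in binary:", hex);
    for (int i = 0; hex[i] != '\0'; i++) {
        const char *p = strchr(hexdigits, hex[i]);
        printf(" %s", groups[p - hexdigits]);
    }
    putchar('\n');

    /* Binary -> hex: weight each group's bits by 8, 4, 2, 1 and look up. */
    const char *binary[4] = { "1010", "1011", "1100", "1101" };
    printf("back to hex: ");
    for (int g = 0; g < 4; g++) {
        int value = 0;
        for (int b = 0; b < 4; b++)
            value = value * 2 + (binary[g][b] - '0');
        putchar(hexdigits[value]);
    }
    putchar('\n');
    return 0;
}
```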


And that's it folks, NUMBER CONVERSIONS served!


Until our next I.T. adventure my friends...OMNITEKK says be well.



Friday, October 14, 2022

BEST OF THE BEST

 The Pioneers Of I.T And Computing


As self-proclaimed technophiles, we here at OMNITEKK believe it is our duty to give reverence to some of the founding fathers of computing, as they have helped pioneer some of the most prolific technological phenomena we know of.


And without further ado- THE BEST OF THE BEST PIONEERS OF COMPUTING AND TECHNOLOGY


Samuel Finley Breese Morse (1791-1872)

Our dearly beloved Samuel Morse developed the foundational tenets of coded language, namely Morse Code, the precursor to both the telegraph and the internal signaling of the modern-day machines of new.


Morse Code utilized combinations of dots and dashes to represent alphabetical and alphanumerical codes of language, which would later become known in the computer world as a series of binary codes.


Alan M Turing (1912-1954)

Turing, who pioneered the concepts of computation and computability, is best known as the man who cracked the German "ENIGMA" coding machines during World War II.


Alan is also known for writing two influential papers on the concept of what computers can and can't do, as well as designing the abstract model that has been coined The Turing Machine.


Norbert Wiener (1894-1964)

Wiener, a Harvard-trained mathematician, coined the term Cybernetics (1948), defined as the study of control and communication in the animal and the machine.


This concept laid the foundation for describing the relation between the biological processes of humans and animals and the theoretical mechanics of modern-day computers, robots and cobots.


John Bardeen (1908-1991) and Walter Brattain (1902-1987)

This dynamic duo is best known for the construction of the transistor amplifier - manufactured from slabs of germanium, the semiconductor material - which allows an electrical signal to be strengthened across communication channels.


This revolutionary invention, hailed as the most important invention of the 20th century, took us from the relays of old to the transistors of new, of which today's microchips and graphical processing units easily comprise hundreds of thousands, or even millions.


Jack Kilby (1923-2005) and Robert Noyce (1927-1990)

Jack and Robert are famously revered as the co-inventors of the integrated circuit, commonly called the chip (yeah folks, OMNITEKK means the microchip).


The two major classes of microchips in use today are Complementary Metal-Oxide-Semiconductor, or CMOS (pronounced Sea Moss), and Transistor-Transistor Logic, or TTL (pronounced Tee Tee Ell), integrated circuits.


These folks are certainly worthy of note, and we  especially hope our introduction to them has spawned further interest and exploration.


And until our next I.T. adventure my friends, OMNITEKK says be well.





Saturday, October 8, 2022

Oh No...The Big O

Developing software is the means by which we address some of our most pressing real-world problems, through pathways of automated solutions.


The goal then of the industrious developer worthy of such a task isn't simply to automate them, but to express and quantify their inner workings as concisely and to the point as possible.


And here my friends, in hope of quantitative aim and pristine precision, lie those algorithm analysis techniques making their famed debut, helping us understand the behavior of an algorithmic process as its functional data inputs either grow or shrink, contingent on its implementation.


Further, since our most prized innovators and makers of all the world's fancy have leanings towards nomenclature mania, the glorious world of computer science has coined such a feat BIG O ANALYSIS - the quantification of a program's completion time, pertinent to all processing of functional inputs and algorithm design trends.


Simply put, in the grand scheme of things, Big O algorithm analysis is our best means yet of forecasting a runtime approximation of an algorithm, allowing the seasoned and savvy developer to determine, among all possible algorithmic solutions, the one best suited for a particular task.


This process requires that we not only know both the preliminary and consequential steps involved in an algorithm design process, but that we also understand the efficiency implications of the resources it consumes - birthing what's commonly known as an algorithmic rate of growth.


Rates of growth help us determine whether the work done for a given number of functional inputs grows or shrinks - doubling or being reduced linearly, quadratically or even exponentially by some quantitative value - as the problem size increases, or in other words, they help us determine the runtime bound curves under which the program runs.
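As a rough illustration of rates of growth (our own toy example, not a formal proof), the sketch below counts the basic steps taken by a linear pass and by a quadratic nested pass as the input size n doubles - the linear count doubles with n, while the quadratic count quadruples.

```c
#include <stdio.h>

int main(void) {
    for (long n = 1000; n <= 8000; n *= 2) {
        long linear = 0, quadratic = 0;

        for (long i = 0; i < n; i++)          /* one pass over the input: O(n)   */
            linear++;

        for (long i = 0; i < n; i++)          /* a nested pass: O(n^2)           */
            for (long j = 0; j < n; j++)
                quadratic++;

        printf("n=%5ld  O(n) steps=%8ld  O(n^2) steps=%10ld\n",
               n, linear, quadratic);
    }
    return 0;
}
```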


And since Big O runtimes signify mathematical worst case runtime algorithm scenarios, their derivations can be deduced by mathematical notation, as a function of input size n.


For example, we might say that 

BIG O:  0 <= f(n) <= c*g(n),  for all n >= n0


Symbolizing that the O (Big O), upper bound, or worst-case scenario c*g(n) is greater than or equal to the function f(n) itself, for every input size beyond some initial value n0 greater than 0...for wouldn't it prove a challenge to run an application with 0 inputs or even 0 times?... OMNITEKK affirms such a quandary.


Likewise, we can't always consider or prepare for worst case scenarios now can we?...


Not to worry friends, our mathematical symbologists have devised a few other runtime quantifications to explain run time bounds as well - namely OMEGA, signifying the best case or lower bound for an algorithm's growth rate, and THETA, denoting a rate of growth bounded both above and below, a convergence somewhere between Big O and OMEGA, which may be asymptotically written as -


OMEGA Ω:  0 <= c*g(n) <= f(n),  for all n >= n0

THETA θ:  c1*g(n) <= f(n) <= c2*g(n),  for all n >= n0


We say that THETA is both OMEGA and BIG O, since its rate of growth is sandwiched within a set of upper bound and lower bound runtime values.


For instance, if O(f) = O(n²) and Ω(f) = Ω(n), then θ is taken toward n², since n is also within the set of values bounded by n², and the higher bound reflects the safer measure when evaluating runtime values. 


As such, we prove an upper bound, average bound, or lower bound growth rate from a variation of possibilities by finding at least one initial input value n0 and constant c such that, for all subsequent input values greater than or equal to n0, the corresponding bound scenario holds true.

 

We might then prove that f(n) = O(n) using the following example


f(n) = 100n + 5 = O(n)


by simply applying the asymptotic definition


0 <= f(n) <= c*g(n), for all n >= n0

=> 100n + 5 <= 100n + 5n = 105n,

which holds for all n >= 1 and c >= 105.


And with that, we are equipped to determine the bounds of our application run times.


Similar methods of deduction can be used to find both OMEGA and THETA as well.


Likewise, we tend to prove average bounds by proving both upper and lower bounds within an application, and then represent average case growth by the worst case, since both the average and best possible bounds are encapsulated within it and we are usually concerned with the worst growth rates anyway...with the exception of amortized growth rates, which prove asymptotic bounds over clusters of program operations as opposed to analyzing each operation disjointly.


It should be noted that such an expedition is a rarity, so we usually only deal with BIG O runtime in analyzing algorithm bounds, besides select specialty applications.


And there you have it folks OMNITEKK'S rendition of Oh No, The Big O.


We hope you've enjoyed it, and until our next I.T. adventure my friends, OMNITEKK says be well. 



Friday, September 9, 2022

POINTERS AND HANDLES

Where things are and how to access them is of the essence in both the practical machine world of programming and technology, as well as in the conceptualization of I.T. methodologies.


As such, HANDLE and POINTER addressing - not those Turkish delights stuffed inside your Thanksgiving bird, but rather the methods by which programmers utilize structural mnemonics within machines to allow the assignment, accessing, and processing of data to be performed as painlessly as possible - render some of our most prized automations.


So OMNITEKK, just what are POINTERS and HANDLES, and how do they work?


So glad you asked my friend... Let's "undress" such a concept shall we.


In the world of both functional and object-oriented programming, POINTERS, or object addresses, serve as the indexing means of accessing and processing data values, both primitive and user-defined, within your development environment.


These objects or values might be functions, data types, structures, or volatile instances of addresses declared either at program run time or at compilation.


So essentially, just as our neighbors or family members know us by our name, POINTERS act as both explicit and implicit naming conventions, tasked with accessing and communicating with data objects within an application.


Likewise, each time an application runs, while the internal or programmer-designated names of objects - POINTERS included - appear the same, the machine's mapping of those programmer-defined names is assigned new or different addresses, giving credence to POINTERS commonly being coined DYNAMIC ADDRESSING MODES.


Now on to the fun stuff - handler or HANDLE addressing -  and no, OMNITEKK isn't referring to the ones from your run of the mill Friday night Martin Scorsese film, although the effects of such an I.T. concept prove oddly similar...


...however we digress, so let us continue shall we?


The main difference between POINTER addressing and HANDLES is that HANDLES are usually static addressing methods, designed to access objects whose location within machines or on storage media isn't itinerant - such as files stored on tapes, hard drives, or disks, along with all other machine hardware storing information whose location proves a higher level of finality as opposed to volatility.


For instance, the addressing schemes of select functions and variables within your application are dynamic, as each run of your program assigns the internal program data structures new addresses, while your actual stored program itself is accessed by a HANDLE - for while the attributes of the program, such as its file size, might grow or shrink as the program is modified, the housed file location of the application on disk remains the same, thus giving credence to the notion of a HANDLE accessor versus a POINTER accessor.
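A minimal C sketch of the contrast - the struct, values, and file name here are purely illustrative: the malloc'd object lives at a fresh POINTER address every run and vanishes at program end, while the file on disk is reached through a HANDLE-style accessor (a FILE stream) that names the same stored object run after run.

```c
#include <stdio.h>
#include <stdlib.h>

struct record {
    int id;
    double value;
};

int main(void) {
    /* POINTER: a dynamic address, assigned fresh each time the program runs. */
    struct record *rec = malloc(sizeof *rec);
    if (rec == NULL) return 1;
    rec->id = 42;
    rec->value = 3.14;
    printf("pointer address this run: %p\n", (void *)rec);

    /* HANDLE: an opaque accessor to something living outside the process -
       here a file on disk, which keeps its home between runs. */
    FILE *fh = fopen("example.dat", "w");
    if (fh != NULL) {
        fprintf(fh, "id=%d value=%f\n", rec->id, rec->value);
        fclose(fh);
    }

    free(rec);
    return 0;
}
```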


A few takeaways of note on POINTERS and HANDLES, to help you differentiate between the two, are -


POINTERS are known as dynamic accessor types, while HANDLES are known as static types.


POINTERS usually reference volatile objects, such as data values that are destroyed upon program end - structure addresses and function addresses, for example - while HANDLES typically reference static objects, such as data stored on tapes, drives, or disks, rendered by your file system's tables or some other internal addressing scheme, to access objects or data members whose accessor means prove higher permanence.


And there you have it folks, OMNITEKK'S rendition of the essential data access object definitions of I.T's glorious POINTERS and HANDLES.


And until our next I.T. adventure my friends, OMNITEKK says be well.




Friday, September 2, 2022

NOISE

While OMNITEKK enjoys the sound of all things techy, there are a few sounds that pose significant conundrums within the I.T. arena, especially where file processing and communication applications, as well as audio and telemetry signal processing, are of note.


The folks here bent on veracity of nomenclature like to call such a phenomenon NOISE - a generalized term for anomalous or otherwise spurious interference within a machine's electronic data capture, storage, transmission, processing, or conversion operations.


NOISE within most basic data processing operations poses significant risk to both the predictability and the usefulness of a machine in transporting reliable information from hosts to target machines, terminals or processes.


And while there are several types of interference or NOISE, a few of the most familiar ones are -


ADDITIVE NOISE - or GAUSSIAN NOISE - signals that obscure and even mimic the original or intended signal to be processed (see the sketch after this list of noise types).


WHITE NOISE - A random signal having equal strength across a range of frequencies, encountered in acoustic engineering, telecommunications and statistical forecasting.


BLACK NOISE - A random signal interference similar to white noise, with the exception that its signal strength varies within a specified frequency range.


BURST NOISE - A type of electronic noise frequently observed in semiconductors and microchips, particularly in gate oxides, also known as Random Telegraph Noise, Random Telegraph Signal, or Popcorn Noise.


Burst Noise is commonly caused by the trapping and release of charge carriers within transistors, signifying defects introduced during the hardware manufacturing process, as opposed to software processes, such as in grey noise phenomena.
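To make the idea of ADDITIVE or GAUSSIAN NOISE concrete, here's a small C sketch (our own toy example) that corrupts a clean sine-wave sample stream with Gaussian noise drawn via the Box-Muller transform - the "clean" values are the designed signal, and the "noisy" values are what a receiver might actually see.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Draw one sample from a Gaussian distribution via the Box-Muller transform. */
static double gaussian(double mean, double stddev) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* in (0,1), avoids log(0) */
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return mean + stddev * sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void) {
    for (int i = 0; i < 10; i++) {
        double clean = sin(2.0 * PI * i / 10.0);  /* the intended signal        */
        double noisy = clean + gaussian(0.0, 0.2); /* signal plus additive noise */
        printf("sample %d: clean = % .3f   noisy = % .3f\n", i, clean, noisy);
    }
    return 0;
}
```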


So if your machine isn't functioning as it should, the culprit might just be NOISE, causing interference within your signal processing efforts.


In fact, almost all anomalous behavior within the machines of new where functioning is concerned, can be considered NOISE of some sort.


And there you have it folks, a new word to add to your library of technomenclature


YOU'RE WELCOME.


And until our next I.T. adventure my friends, OMNITEKK says be well.


