Computers and Internet

What worries me about ChatGPT

As with basically everything nowadays (technology included), anything new tends to get its one or two weeks of fame when everybody is talking about it; hundreds of “gurus” magically appear from nowhere, and everybody panics, thinking that now, finally, we are doomed because this new technology will take over and destroy our jobs.

So, now that the maelstrom has passed, we can talk more seriously (and calmly) about ChatGPT. I don’t intend to talk about it the way others (the gurus) have been, giving shallow advice just to capitalize on the fast-food consumerism that also applies to tech; instead, I want to focus on the “doomed” part.

No, we are not doomed!

First of all, it has been demonstrated countless times that ChatGPT’s answers tend not to be as accurate as some promoters assert. Sure, if you ask about well-known and well-documented information, you might get what you are looking for, but if you go a little deeper, ChatGPT quickly starts to fabricate information, introducing inaccuracies or simply gross mistakes.

I am not interested in analyzing why ChatGPT does that or how these AIs work. That is for a different post. What I am interested in is this: the reason we are not doomed is exactly that ChatGPT floats on the surface of a subject, and to detect this you MUST know the topic.

This makes you, especially developers with years of experience, more necessary than ever. Your knowledge surpasses what this type of AI can produce. And I can show this with a really basic example:

This was a simple prompt: create a function to navigate a binary tree. The answer seems legit, right? Well, yes and no. Yes, it will navigate the binary tree, but it will throw a call-stack error. As a developer, you spot that, because you know better and would never ship code with this flaw.
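
The original screenshot isn’t reproduced here, but the kind of answer that produces this failure looks like the following sketch (not ChatGPT’s verbatim output): a purely recursive traversal.

// Recursive in-order traversal: correct on paper, but the call stack
// grows with the depth of the tree
function navigate(node, visit) {
  if (node === null) return;
  navigate(node.left, visit);
  visit(node.value);
  navigate(node.right, visit);
}

On a balanced tree this is fine; feed it a deep, degenerate tree (say, 100,000 nodes chained on one side) and the engine throws “Maximum call stack size exceeded”.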

Because you know this, you ask the AI about the error, and then:
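
The corrected shape might be (again, a sketch): the same traversal made iterative, with an explicit stack, so the call stack stays flat regardless of the tree’s depth.

// Iterative in-order traversal: an explicit stack replaces recursion
function navigate(root, visit) {
  const stack = [];
  let node = root;
  while (node !== null || stack.length > 0) {
    while (node !== null) {
      stack.push(node);   // walk down the left spine
      node = node.left;
    }
    node = stack.pop();
    visit(node.value);
    node = node.right;    // then continue with the right subtree
  }
}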

Now the code looks better, but as I said, you KNEW the problem beforehand; you corrected the AI.

But what does all of this have to do with “we are doomed”? Well, the “gurus” and promoters rushed out messages like: “This AI will change everything! Now we will be able to write applications in minutes!”… and many other nonsensical hysterics that, for people who don’t work with code, aren’t involved with technology, or don’t understand how software is made, resonated as a cost-cutting silver bullet.

Now imagine copying and pasting that first piece of code, only to find that your application fails once it is deployed to production.

Anyhow, thinking that we are doomed, that we are not needed anymore, is simplistic thinking. Wishful thinking, may I say, and sadly it is more prevalent in our industry than we might imagine. I would say the ones really doomed are those who believe this type of tool will let them produce applications in seconds with fewer developers on their teams.

The problem is the self-proclaimed gurus

Yes, the problem is the “gurus”, but even more the tech-fashion way of thinking: the need to be on top of the tech market, constantly showing something new and “revolutionary”. All of this is rooted in how companies need to show off, but that is also a different discussion. What is part of this craziness is the need of those gurus to attract likes and comments, to capitalize on trends: crypto, NFTs, blockchain, AI, ChatGPT.

And sadly, we fall into their nets.

We need to learn to silence the noise coming from those promoters and give ourselves enough time to explore the solutions on our own; to not rush to embrace something just because it is new. We will not miss anything important if we take a little more time. Actually, we can gain knowledge and be better prepared to judge whether that new tech is the right fit for our developments.

Anyhow, this “fashion” trend is not something that happened only with ChatGPT. It has almost always been like this. There is another post of mine from several years ago discussing why DataSets (.NET) were bad and how they became a trend at the time, harming applications. Even so, regardless of their obvious problems, developers adopted them just because they were the trending tech to use.

So don’t rush: analyze what you are incorporating into your toolbox, be skeptical of those gurus, and learn before you use something!

Summary

I’ve been playing with ChatGPT for months, and I have found errors in more than just tech topics. In any case, I would like to show you another interesting prompt.

In this case I asked about jsGFwk. If you have been following my blog, you already know that I am the creator of this framework. So it was interesting to find, as a first answer, basically what is in the readme file of the framework’s repository.

But what happened when I asked who created the framework?

Yes, the date (2012) is more or less accurate, but the rest of the information is false. So, maybe I am missing something. What happens if we ask a little more?

Again, false, fabricated information. But ChatGPT might be seeing something we are unable to. So I searched for the name of the “creator” of jsGFwk to see what Google brings back.

And yes, nothing!

In any case, this post is not a critique of ChatGPT (I suppose that is already clear), but of us as technology specialists, and of our role in adopting and promoting a particular technology. ChatGPT attracted huge attention, which is why I refer to it here, but as I said, this has happened countless times in the past with many other technologies that have since become “de facto” tools and, actually, aren’t that good or suitable for every use case out there.

Dig deeper before adopting a tech!

C#, Outreach, HTML5, JavaScript

Notebooks for .Net developers with Polyglot Notebooks

This is not totally new, and perhaps you already know about Polyglot Notebooks; well, actually, .NET Interactive Notebooks. It is a type of notebook (like Jupyter Notebooks) specialized for .NET enthusiasts, which doesn’t mean only .NET users can use it. On the contrary, it adds more languages (the .NET stack in particular) to the already existing options.

A couple of days ago, .NET Interactive Notebooks changed its name to Polyglot Notebooks, better honoring what this tool is about: multiple languages communicating with each other in a classic notebook format.

So, with all that in mind, I wanted to test this capacity to connect multiple languages and platforms, passing data from one point to another and finishing by rendering that information. And what could be better than having a database, reading its data using C#, passing some of it to JavaScript, and manipulating the DOM (of the render engine) to display it with HTML tags?

First a class diagram!

We should start our development with some diagrams: a way to express, define, and refine our ideas. One of the things that makes Polyglot Notebooks interesting is its integration not only with programming languages but also with other extensions, tools, and scripting commands such as PowerShell.

For this case, diagrams, Polyglot uses Mermaid, a quite powerful and interesting text-based tool that can create not only class diagrams but a bunch of other types. Sure, there are plenty of these tools on the market, but Mermaid comes with Polyglot, so it is welcome.
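
The original diagram isn’t reproduced here, but for the single table used in the rest of this post, a Mermaid cell might look like this sketch (the Product class and its columns are placeholders of mine, not from the original post):

#!mermaid
classDiagram
    class Product {
        +int Id
        +string Name
        +int Stock
    }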

The Database

Our next step will be to have a database in place, with the already defined table (at least in a class diagram) and some data to read. I have already blogged about how to set up a SQL Server database in a Docker container. If you don’t have a database, I recommend reading that article.

So, once we have our database in place, we need to import the NuGet package that will allow us to connect to that SQL Server database and open a connection.
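
The original cells aren’t reproduced here, but they were roughly like this sketch (the kernel name, database name, and connection string are placeholders for whatever you created):

#r "nuget: Microsoft.DotNet.Interactive.SqlServer, *-*"

#!connect mssql --kernel-name productsdb --create-dbcontext "Server=localhost,1433; Database=ProductsDb; User Id=sa; Password=YOUR PASSWORD HERE; TrustServerCertificate=True"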

As we can see, the first cell uses “#r” (a magic command) to import the SQL Server NuGet package, and the second cell connects to our already created database.

There are different options when connecting to the database, but the one you will surely love is “--create-dbcontext”, which uses Entity Framework to create the C# models from the database and assemble a database context, leaving everything ready for you to use LINQ when querying.

Bringing data

Now we have our database, connection and context. Next step: Read the data!
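
The original cell isn’t reproduced here; assuming the connection above was created with --create-dbcontext and the scaffolded context is exposed under the kernel name (productsdb), it might look like this sketch (table and column names are placeholders):

// The scaffolded DbContext is available under the kernel name,
// so plain LINQ works against the Products table
var product = productsdb.Products.First();

// Two simple variables we will share with JavaScript in the next step
var productName = product.Name;
var productStock = product.Stock;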

Any C# developer will understand the previous lines: we take the first record from our table, store it in a variable, and then create two simple variables from it (we will see why in the next step).

More languages

So far we have used a couple of languages; mostly C#, to be sure, but in different contexts and domains. What about the “Polyglot” part of all this? What if we would like to use those two variables from JavaScript? Say no more!
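
The original cell isn’t reproduced here, but it was along these lines (a sketch with placeholder IDs, reusing the two variables from above):

#!html
<div id="product" style="font-size: 2em; color: steelblue;"></div>

#!javascript
#!share --from csharp productName
#!share --from csharp productStock
// Standard DOM code: grab the tag and inject the shared values
document.getElementById("product").innerText = `${productName}: ${productStock} in stock`;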

As we can see, a single block can define multiple languages. Here we have one block that combines HTML and JavaScript. The HTML section is really simple: we use the tag just to render some text, with CSS to make it noticeable.

The JavaScript section is a little more interesting. First, we import the two variables we defined in the previous block using the “#!share” magic command; internally, Polyglot handles type conversion and interoperability. Once that magic is completed, we write standard JavaScript: capturing the HTML tag and injecting the values held in our two variables.

Polyglot has some limitations, especially with complex types, which means that moving whole classes across some languages causes errors. Because of that, I simplified the example by sending individual primitive types instead of more complex elements.

Regardless of some other limitations, Polyglot Notebooks is a great tool that is constantly growing and incorporating more options, which makes it valuable to have in our toolset.

Computers and Internet, Artificial Intelligence

AI and ML 101

Months ago, I wrote an article after a lecture I gave for non-technical people about Computer Vision, Machine Learning, and Artificial Intelligence. As the title of this post states, both the article and the lecture were a very introductory approach to these topics. The lecture, unlike the article, was filled with practical examples and cannot be reflected here in words. Also, because both the article and the lecture were intended for non-technical people, many terms and concepts may be oversimplified, meaning that if you are an expert on the matter, you might want to raise your finger in the air and say something like: “That is not totally correct, I would say, my good sir!” Which is totally valid, but please remember this is intended for people who just want to start, who have no prior knowledge; flooding them with deeper, more complex concepts and terminology might cause problems. Regardless of these excuses, if you still feel the urge to point out a correction, please do so in the comments. It will be welcomed.

So, without further ado, here is the article:

An introduction to Artificial Intelligence and Machine Learning.

Machine Learning, Artificial Intelligence, and a wide set of subfields often incorrectly swapped with them (Computer Vision, Robotics, or chatbots) are now commonly used terms. It is almost impossible not to be in contact with some of them daily; over the last ten years we have watched these terms gain popularity. Buzzwords are quickly adopted to capture your attention and give new products a marketing edge. For example, a vacuum cleaner declaring that it uses AI capable of detecting different types of dust, as opposed to another that simply cleans the floor (even if both do the same thing), is misleading, or at least an incorrect use of the technical terminology found in specialized publications, news, and more.

Terminator

It has become commonplace to read alarming headlines warning us of ‘rogue AI’ turned off for making racist comments while interacting with people on social networks, or supposedly able to overthrow humanity. Listening to these tales, you could be forgiven for thinking we are at the gates of a dystopian scenario close to the movie ‘Terminator’. On the other hand, we can find contrary views telling us that Artificial Intelligence does not exist and is not expected to in the near future. Either way, the information available on these topics can be confusing if we are not immersed in it. So let’s remove the veil and take a look at the basics of AI and ML, to better understand what they are and how they work at a high level.

Artificial Intelligence, or Machine Learning, or Computer Vision?

The concepts behind Machine Learning and Artificial Intelligence get blurred into a single one, but in reality they are different. Computer Vision (a particular branch of techniques, tools, and theories) is also often casually attached to the first two concepts without an understanding of where it separates from them.

AI, ML and CV with some touching surfaces

The three concepts can be broadly defined as:

  1. Artificial Intelligence.
    1. Programs that can sense, think and adapt.
  2. Computer Vision.
    1. Image processing. 
  3. Machine Learning.
    1. Deep Learning, convolutional neural networks.

Artificial Intelligence (AI)

It is our attempt to create machines that can challenge us, as human beings, in one of our most valuable assets: our intelligence.

Whether or not we agree on the concept of intelligence and its definition, we can agree that our understanding of Artificial Intelligence has evolved over time. From the initial approaches in 1937 to today, we can see it applied in self-driving vehicles, natural language understanding, and robotics, to name a few.

Machine Learning (ML)

Machine Learning was once considered a subset of Artificial Intelligence, but ML has since moved in its own direction. ML is the idea that applications and computer programs can ‘learn’ by themselves, mimicking human intelligence through pre-established rules and algorithms; a Machine Learning application can even adjust itself to reach the optimal expected result. Perhaps the most common use of ML is training a model on a high volume of data, using statistics and mathematical formulas to create a prediction.

Computer Vision

When we talk about Computer Vision, we are referring to giving computers the ability to see in a way similar to how humans do. This ability is incredibly useful for creating software that can perceive the environment as we do.

Computer Vision applications are found wherever images and video are processed to produce a particular output. Applications such as taking a picture on our phones when someone smiles, reading a car’s plate number as it passes through a toll booth, and identifying unusual structures in an X-ray photograph are common use cases for Computer Vision.

Solving different problems

Considering these lightweight definitions, we can see that each concept aims to solve a different type of problem, and with that in mind, the nuances of how they differ from each other matter when communicating and defining ideas around them. Each one can be split into more detailed categories, for example Deep Learning and Natural Language Processing (NLP), each attempting to tackle a distinct problem.

Are these ideas recent?

The short answer is “no”. Papers and applications can be found dating back to the early 20th century. In 1963, Lawrence Roberts published the paper “Machine perception of three-dimensional solids”, explaining how a computer could construct a 3D object from a 2D picture. In 2001, Paul Viola and Michael Jones released the first real-time face detection framework, allowing, as the name indicates, faces to be detected in real time. So Computer Vision, Machine Learning, and Artificial Intelligence have been around for more than 60 years.

Perhaps the main reason for their rise in profile is the increase in the computational power of our current computers and devices, which allows us to build better and faster applications and implement these technologies more efficiently. What we are seeing is the natural evolution of the algorithms and of our understanding of these sciences after decades of research.

Going deeper with Machine Learning

After more than 60 years of research and implementation, it is almost impossible to pick one mechanism or method as the “silver bullet” that solves every problem.

There are many paradigms, algorithms, frameworks, and tools available. We can start writing down names like PyTorch, TensorFlow, NumPy, YOLO, MTCNN, OpenCV, MediaPipe, Keras, Lobe, ML.NET (and the list goes on and on). Each solves a particular problem or supports the others; some are already deprecated, some are meant to work together, overlapping with one another.

It is not possible to pick just one of them and try to explain Machine Learning with it without leaving many other concepts behind, unless we simplify by going back to a point in the past that makes these ideas easier to grasp. We can define that starting point as the “perceptron”.

Perceptron

While the underlying artificial neuron model was proposed in 1943 by McCulloch and Pitts, the first perceptron implementation was made by Frank Rosenblatt in 1957. The idea behind the perceptron is to mimic the behavior of a neuron, replicating similar neuronal processes and structures to produce a particular computational output.

A brain neuron. Its structures and activation flow.

A neuron information flow can be described through:

  • Dendrites
    • A connection point with other neurons. Considered the “entry” point for signals.
  • Nucleus
    • The information-processing area; it is where what defines the neuron, what it does, resides.
  • Axon and terminals
    • The “exit” point of signals. The axon terminals are connected to other neurons and might emit signals to activate them.

This is an oversimplification of how a neuron works but it helps us to understand its conversion into a perceptron.

Perceptron model – linear classifier

A perceptron is made up of inputs and weights (the dendrites), an activation function (the nucleus), and an output (the axon and terminals). It is not the objective of this article to explain the mechanisms behind each of these elements in depth, but it is still worth noting that this structure is enough to perform basic binary classification through supervised learning. This means that, given a set of labeled data, a perceptron can train itself to achieve an expected outcome. Once trained, a perceptron can classify similar data into the given categories, which makes it very powerful, and much more powerful when combined with other perceptrons. Nowadays, a perceptron can easily be coded in almost any programming language; implementations in C++, JavaScript, Python, C#, and many more are easy to find. Computational power aside, the simplicity of the perceptron model makes it fit in no more than 50 lines of code.
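
As an illustration, here is a minimal perceptron sketch in JavaScript, trained on the logical AND truth table with the classic perceptron learning rule (the choice of AND and all names here are mine, not from the original article):

// Weighted sum of the inputs (the "dendrites") plus a bias, passed
// through a step activation function (the "nucleus"): fire (1) or not (0)
function perceptron(inputs, weights, bias) {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
  return sum >= 0 ? 1 : 0;
}

// Labeled training data: the AND truth table
const samples = [
  { inputs: [0, 0], expected: 0 },
  { inputs: [0, 1], expected: 0 },
  { inputs: [1, 0], expected: 0 },
  { inputs: [1, 1], expected: 1 }
];

let weights = [0, 0];
let bias = 0;
const learningRate = 0.1;

// Perceptron learning rule: nudge weights and bias toward the expected output
for (let epoch = 0; epoch < 20; epoch++) {
  for (const { inputs, expected } of samples) {
    const error = expected - perceptron(inputs, weights, bias);
    weights = weights.map((w, i) => w + learningRate * error * inputs[i]);
    bias += learningRate * error;
  }
}

samples.forEach(({ inputs }) =>
  console.log(inputs, "->", perceptron(inputs, weights, bias)));

After a few epochs the weights settle and all four inputs are classified correctly: a linear classifier in around 30 lines.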

The Reality

Machine Learning, Artificial Intelligence, and Computer Vision are a reality and can be beneficial tools that play an important part in our organizations and applications. They are not new; not something that appeared during the last 10 or so years, but a compendium of ideas, research, and practices that has been evolving for 60 or more years. This evolution of ideas and tools will continue, so it benefits us to understand the underlying concepts and meanings of these technologies: their implementations overlap, yet each refers to a specific knowledge area with distinct implementations. Given the progress in computational power and general knowledge, it is equally important to analyze where these implementations fit into our developments and organizations, and how they can help us improve our processes or provide better services; Technology for Life.

Computers and Internet

MockAPI v2.0.0 is out

Some significant changes have been made to MockAPI, and it is now ready to be installed using NPM.

These changes improve MockAPI’s usability, especially for users who have multiple NodeJS versions installed on their machines.

One of the main issues (for some users) was that, when running MockAPI from a particular directory, it was not reading the configuration file from the folder in which MockAPI was called; instead, the file was read from the directory in which MockAPI was installed.
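
That behavior is typical of a Node CLI tool resolving paths against its own install location instead of the caller’s working directory. A sketch of the usual fix (not necessarily MockAPI’s exact code; config.json is a placeholder name):

const path = require("path");

// Resolve the config against the directory the user ran the command from
// (process.cwd()), not against the tool's install folder (__dirname)
const configPath = path.join(process.cwd(), "config.json");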

To see the notes: https://www.npmjs.com/package/mockapi-msi

To install MockAPI run:

npm i -g mockapi-msi
Computers and Internet, HTML5, Artificial Intelligence, JavaScript

Interactive Computer Vision

At the beginning of this year, I gave a presentation about Computer Vision, Machine Learning, and Artificial Intelligence for kids. A tough task, considering these topics are hard to understand even for experienced developers.

So, besides the standard presentation, a talk without too much tech jargon and with a lot of comparisons to tangible, familiar technologies (videogames and social network apps), I created an app that might help those kids build some Computer Vision apps from something they surely knew: Scratch!

Combining Blockly, Google MediaPipe, HTML5, and JavaScript, I built a Web app that lets you create simple hand-tracking applications that interact with an HTML canvas. With it, you should be able to create small programs you control with your hands, without leaving the browser.
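
The app’s real source code is linked below; purely as an illustration, the kind of MediaPipe Hands loop such an app sits on might look like this sketch (element IDs and options are placeholders, and it assumes the @mediapipe/hands and @mediapipe/camera_utils scripts are loaded):

const video = document.getElementById("camera");
const canvas = document.getElementById("board");
const ctx = canvas.getContext("2d");

const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`
});
hands.setOptions({ maxNumHands: 1, minDetectionConfidence: 0.7 });

hands.onResults((results) => {
  if (!results.multiHandLandmarks.length) return;
  // Landmark 8 is the tip of the index finger (normalized 0..1 coordinates)
  const tip = results.multiHandLandmarks[0][8];
  ctx.beginPath();
  ctx.arc(tip.x * canvas.width, tip.y * canvas.height, 4, 0, Math.PI * 2);
  ctx.fill();
});

// Feed every camera frame to the hand tracker
const camera = new Camera(video, {
  onFrame: async () => await hands.send({ image: video }),
  width: 640,
  height: 480
});
camera.start();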

Some examples of these apps (you can give them a try too) follow:

Painting with your fingers:

Tracking one finger and displaying an emoji in the screen:

A finger counting app:

The app is still online and you can find it here: https://blocklypipe.netlify.app/

And of course, the source code is here: https://github.com/MatiasIac/handDrawingMediaPipe/tree/main/blocklyPipe

Hope you enjoy it!

Events, JavaScript

Upcoming Online Event – NLP with NodeJS

Last week I participated as a lecturer in one of the many conferences held during New Zealand Tech Week.

In that lecture I talked about Machine Learning, Artificial Intelligence, and Computer Vision for non-technical people: a walkthrough of the terms, with examples and potential applications. The event was in person and in English (yes, like this post :)).

Anyhow, as usual, I created (coded) several applications intended to show these concepts applied to real scenarios. One of them is an implementation of NLP (Natural Language Processing) using ExpressJS, NodeJS and, of course, JavaScript.

Attendees didn’t see or explore the source code, but they played with the application. Remember, this event was for non-tech people.

In any case, the source code was there, waiting to be analyzed, but there were no devs around to benefit from it :).

So, together with MUG Argentina, we decided to run the conference for tech people. This lecture will be held online, on the Argentinian time zone, and in Spanish.

Soooo… if you speak Spanish and are interested in learning about NLP, register for the event via this link:

https://mug-it.org.ar/event.aspx?event=590

See you there!

🙂

Computers and Internet

SQL Server for your local devs using Docker

Yes, the time when you ran everything on your local machine is getting foggy, to the point that if you don’t have Docker on your machine and run everything there, your fellow devs start looking at you as if you were from a different planet; some kind of newbie, not prepared enough to surf at the top of the tech wave.

By the way… the same thing happened when jQuery landed, and then Backbone, and then Angular, and React, and Vue, and .NET, and… well, you name it. In the end, everything is still the same thing and serves the same purpose, no matter how fancy the new tech stack’s name is.

Anyway, you could stumble into the idea of running SQL Server inside Docker because, as in my case, you do not want it installed directly on your machine. So, let’s walk through some very, very simple steps on how to install and use SQL Server in Docker.

I will assume you already have Docker installed. If not, here is a tutorial. Once Docker is in place, one way to install SQL Server is through Docker Compose. I prefer this way just “because”, but you are free to use any other approach (not covered in this tutorial :D).

OK, enough! We have Docker, and we will use Docker Compose; for that, we need to create a file describing what we want to configure. The docker-compose.yaml is as follows:

version: "3.8"
services:

  sql-server-db:
    container_name: sql-server-db
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "YOUR PASSWORD HERE"
      ACCEPT_EULA: "Y"

So, nothing outstanding there, right? At the moment of writing this post, the Microsoft SQL Server image version I am using is “2019-latest”. If you read this post in, let’s say… 2 years, you may need to adjust these values.

Also, notice the password you need to set. If you don’t, your password will be… YOUR PASSWORD HERE.

Final step! Open a command prompt or console, navigate to where this file is stored, and type the following:

docker-compose up -d

Once this finishes, you should see something similar to this:

If you do, it is all set!

Now you have a Microsoft SQL Server database up and running in Docker!
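
If you want to double-check from inside the container, the 2019 image ships with the sqlcmd tool (adjust the password to the one in your compose file):

docker exec -it sql-server-db /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "YOUR PASSWORD HERE" -Q "SELECT @@VERSION"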

Computers and Internet

Upcoming Events (English)

The Spanish (Castellano) version of this post is here.

Next March I will be giving four lectures about Machine Learning and Computer Vision: two in English, and their counterparts in Spanish.

I will be showing some free, open-source tools that help us model our Computer Vision projects, and how we could use them together with low-powered computers such as the Raspberry Pi, and in videogames.

These lectures are totally free and will be streamed on YouTube and other platforms.

If you want to attend (and read more about each lecture), follow the registration links:

https://www.meetup.com/Microsoft-Reactor-Toronto/events/283859833/

https://www.meetup.com/Microsoft-Reactor-Toronto/events/283859052/

See you there!

C#, Outreach, Events

Upcoming Events (Spanish)

English version of this post here.

At the end of March I will be giving four talks: two in Spanish and their English versions, together with Microsoft Canada as part of its Microsoft Reactor program.

The main topics of the talks are Machine Learning and Computer Vision. They are totally free and will be openly accessible through YouTube and other platforms.

If you want to know a bit more about Computer Vision tools and see some practical examples, these events may interest you 🙂

The links to register for the Spanish version (and see more details about each event):

https://www.meetup.com/Microsoft-Reactor-Toronto/events/283858996/

https://www.meetup.com/Microsoft-Reactor-Toronto/events/283859032/

See you there!

Game Development, HTML5, JavaScript

JavaScript Game Framework (jsGFwk) v3 is out

Nearly 10 years ago I decided to, finally, create my own videogame framework; a simple one, but powerful enough to allow me (and any fellow game developer) to build 2D games for the Web.

As a bit of history, jsGFwk wasn’t my first. Years back, I built another one for Visual Basic developers; that one let non-game developers build, again, 2D games. At that time, C++ was the language of videogames, excluding a lot of developers who wanted to create games but felt completely lost in a language that was difficult to understand. So I built a component that consumed the DirectX services and exposed them in an understandable way for my fellow VB developers. I think I still have the source code on an external hard drive somewhere.

Anyhow, time passed and, looking to learn more about videogame development, my team and I decided during a Global Game Jam event to create a videogame from scratch. Meaning: the framework and the game itself.

At first we tried a couple of frameworks from the market, but after hours spent trying to get them to work the way we wanted, modifying their source code, and even fixing bugs, we threw it all in the bin and coded everything from zero.

Sure, the resulting game was quite bad (to say the least), but we (and especially I) learnt a lot.

A couple of months later I was looking to do some coding; one of those moments when you have free time and, instead of spending it outside in the park, you prefer to code. Well, to make the story short, a videogame framework sounded like a good idea… right!?

I spent around 40 hours creating the core: the engine plus some plug-ins. Because, yes, the most important thing for me was for this framework to be modular. Each plug-in should be independent and provide one particular feature, allowing the consumer of the framework to use my plug-in or opt for a different one: better coded, faster, et cetera. The sketch below illustrates the idea.
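
A generic sketch of that plug-in pattern (this shows the pattern only, not jsGFwk’s actual API; all names here are hypothetical):

// The engine only knows the contract, so any plug-in can be swapped out
const engine = {
  plugins: [],
  register(plugin) {
    this.plugins.push(plugin);
    plugin.init(this);
  },
  update(dt) {
    this.plugins.forEach((p) => p.update && p.update(dt));
  }
};

// A hypothetical independent plug-in providing one feature
engine.register({
  name: "gravity",
  init(host) { console.log(`${this.name} plugged into the engine`); },
  update(dt) { /* apply gravity to the host's game objects here */ }
});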

The final result was exactly what I expected: A modular videogame framework.

This framework was used in many other game jams; a book was published about game development using this and other frameworks; a couple of universities used jsGFwk to teach videogame programming; and even some master’s degrees in videogames used the book written for this framework, and the framework itself, as part of their curricula.

But 10 years without upgrading or modifying the framework in any way is a long time. Some updates are always required, and for this reason I have evolved the framework toward a more modern approach. It still uses JavaScript and runs in browsers, but it has been rewritten using the latest JavaScript version, removing (or improving) some of the bad coding practices I originally introduced (I grew as a developer; the framework did the same :)).

jsGFwk is an open-source project. The source code can be found here: https://github.com/MatiasIac/jsGFwk

jsGFwk is, as I said, a very minimalistic videogame framework; so tiny that it weighs only 19 KB, yet it can do a lot.

If you want to do 2D games, perhaps jsGFwk is a good starting point!

Enjoy!
