Last week I participated as a lecturer at one of the many conferences held during New Zealand Tech Week.
In that lecture I talked about Machine Learning, Artificial Intelligence and Computer Vision for non-technical people: a walkthrough of those terms, with examples and potential applications. The event was in person and in English (yes, like this post :)).
Attendees didn’t see or explore the source code, but they did play with the application. Remember, this event was for non-tech people.
In any case, the source code was there, waiting to be analyzed, but there were no devs around who could benefit from it :).
So, together with MUG Argentina, we decided to run the conference for tech people. This lecture will be held online, in the Argentinean time zone and in Spanish.
Soooo… if you speak Spanish and are interested in learning about NLP, register for the event by following the link:
Yes, the days when you ran everything on your local machine are getting foggy, to the point that, if you don’t have Docker on your machine and run everything there, your fellow devs start looking at you as if you were from a different planet: some kind of newbie, or someone not prepared enough to be surfing at the top of the tech wave.
By the way… the same happened when jQuery landed, and then Backbone, then Angular, React, Vue, .NET, and… well, you name it. In any case, at the end of the day, everything is still the same thing and serves the same purpose, no matter how fancy the new tech stack’s name is.
Anyway, you could stumble upon the idea of running SQL Server inside Docker because, as in my case, you do not want to have it installed directly on your machine. So, let’s navigate through some very, very simple steps on “How to install and use SQL Server in Docker“.
I will assume that you already have Docker installed. If not, here is a tutorial. Once Docker is in place, one way to install SQL Server is through Docker Compose. I prefer this way just “because yes“, but you are free to use any other way (not covered in this tutorial :D).
Ok, enough! We have Docker, and we will use Docker Compose. For that, we need to create a file describing what we want to configure: the docker-compose.yaml, as follows:
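In case the file isn’t visible above, a minimal docker-compose.yaml for this setup typically looks like the sketch below (the service and container names are my choice; adjust the tag and password to your needs):

```yaml
version: "3.8"
services:
  sqlserver:
    # official Microsoft SQL Server image; tag may need updating over time
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YOUR PASSWORD HERE"
    ports:
      # expose SQL Server's default port to the host
      - "1433:1433"
```

Note that the `sa` password must meet SQL Server’s complexity requirements, or the container will stop shortly after starting.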
So, nothing outstanding there, right? Clearly, at the moment of writing this post, the Microsoft SQL Server image version that I am using is “2019-latest“. If you read this post in, let’s say… 2 years, perhaps you will need to adjust these values.
Also, notice the password that you need to set. If you don’t, your password will be… YOUR PASSWORD HERE.
Final step! Open a command prompt or console, navigate to where the file is stored and type the following:
docker-compose up -d
Once this finishes, you should see something similar to this:
If you do, it is all set!
Now you have a Microsoft SQL Server database up and running in Docker!
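If you want to double-check that the server answers queries, you can run sqlcmd inside the container (the container name here is an assumption; `docker ps` will show yours, and the tools path below is the one shipped in the 2019 image):

```shell
# run a quick query through the sqlcmd client bundled in the image
docker exec -it sqlserver /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P "YOUR PASSWORD HERE" -Q "SELECT @@VERSION"
```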
Next March I will be giving four lectures about Machine Learning and Computer Vision. Two lectures will be held in English and then their counterparts in Spanish.
I will be showing some open-source and free tools that will help us model our Computer Vision projects and understand how we could use them together with low-powered computers such as the Raspberry Pi, and in videogames.
These lectures are totally free and will be streamed on YouTube and other platforms.
If you want to attend (and read more about each lecture), follow the registration links:
Nearly 10 years ago I decided to, finally, create my own videogame framework. A simple one, but powerful enough to allow me (and any fellow game developer) to build 2D games for the Web.
As a bit of history, jsGFwk wasn’t my first one. Actually, years back I built another one for Visual Basic developers. That one also allowed non-game developers to build 2D games. At that time, C++ was the language of videogames, excluding a lot of developers who wanted to create games but felt totally lost in a language that was difficult to understand. So, I built a component that consumed the DirectX services and exposed them in an understandable way for my fellow VB developers. I think I still have the source code on an external hard drive somewhere.
Anyhow, time passed and, looking to learn more about videogame development, my team and I decided during a Global Game Jam event to create a videogame from scratch. Meaning: the framework and the game itself.
At first, we tried a couple of frameworks from the market, but after hours spent on them, trying to get them to work the way we wanted, modifying their source code, and even fixing bugs, we threw all of that into the bin and coded everything from zero.
For sure, the resulting game was quite bad (to say the least), but we (and especially me) learnt a lot.
A couple of months later I was looking to do some coding. One of those moments when you have some free time and, instead of spending it outside in the park, you prefer to code. Well, to make the story short, a videogame framework sounded like a good idea… right!?
I spent around 40 hours creating the core: the engine plus some plug-ins. Because, yes, the most important part for me was to make this framework modular. Each plug-in should be independent and provide a particular feature, allowing the consumer of the framework to use my plug-in or opt for a different one: better coded, faster, et cetera.
The final result was exactly what I expected: A modular videogame framework.
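To illustrate the idea (this is not the actual jsGFwk API, just a minimal sketch of the plug-in-based approach, with names I made up for this post):

```javascript
// Minimal plug-in model: the engine knows nothing about features;
// each plug-in registers itself and hooks into the update loop.
const engine = {
  plugins: [],
  use(plugin) {
    // any plug-in can be swapped for a different implementation
    this.plugins.push(plugin);
    if (plugin.install) plugin.install(this);
    return this;
  },
  update(dt) {
    // the game loop just delegates to every registered plug-in
    this.plugins.forEach(p => p.update && p.update(dt));
  }
};

// A tiny example plug-in providing one isolated feature
const fpsCounter = {
  frames: 0,
  install(engine) { engine.fps = () => this.frames; },
  update() { this.frames++; }
};

engine.use(fpsCounter);
engine.update(16);
engine.update(16);
console.log(engine.fps()); // 2
```

The key point is that the engine core stays tiny: rendering, input or physics would each be just another object passed to `use()`.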
This framework was used in many other Game Jams; a book was published about game development using this and other frameworks; a couple of universities used jsGFwk to teach videogame programming; and even some master’s degrees in videogames used the book written for this framework, and the framework itself, as part of their curricula.
Last week I participated in the local (Tauranga, New Zealand) STEMFest, talking about computer vision. This, for me, has been a huge milestone.
If you usually follow my posts and activities, you already know that giving lectures, participating in different events and publishing techie stuff is quite common in my life. But this time was different.
The audience at this event were kids in a particular age range: from 8 to 12.
Clearly, this is no easy task. Even more so when you are trying to teach complicated topics such as Machine Learning and Computer Vision. These topics are hard even for experienced developers. Imagine how weird it could be for a kid hearing an old, bearded, odd :D developer talking about bits, matrices, neurons, et cetera!
So, I was compelled to move out of my comfort zone. I have given hundreds of lectures (in Spanish) for tech people. That’s easy! The question was: how can I teach this topic to these kids while avoiding being boring and keeping them engaged?
Here in New Zealand, kids are taught Scratch in the final years of primary school. A simple introduction to this framework that helps them understand some basic concepts behind programming and computers.
Considering this, the answer was quite clear: I needed a Scratch-like model that combines some of the computer vision concepts and lets the participants build their own implementations. So, I built it!
So, with everything set, I jumped into the event. I must confess that I was terrified. I can talk for hours in Spanish (my native language), and although nowadays I could say that I am bilingual, unless you dedicate a lot of time to improving your second-language skills, you will always sound different. Your accent, your grammar, your vocabulary, sentences that your brain constructed in your native language that don’t apply to the second one… there is always something that could put a barrier between your thoughts and their receiver.
The event started and the time flew. Everyone was totally receptive. Smart questions coming from these young minds. Just amazing!
At the end of the event, the organizers asked the participants to write down some of their thoughts about it. The answers were proof that everything worked perfectly.
If the picture isn’t clear enough, here are some of the lines that caught my heart:
Last year I was about to show some applications to around 500 kids during a STEM Fest event. Sadly, COVID did its magic and the event was moved. We tried again a few months later, COVID attacked again, and the event was finally cancelled.
During that time I was holding off on uploading the code to a repository. I thought: I will give the presentation, fix any possible issues found (what better than having 500 testers?) and then push it to the repository.
Well, now the event is finally happening at the end of this month, but this time it will be a little bit different. Instead of a showcase, it will be a workshop. So, I need to be able to teach another 300 kids in the 8-to-12 age range how to build an ML application without involving too much coding and, for sure, without any of the complicated parts of writing a Python application with OpenCV, NumPy and so on.
This means that I am building another application using Blockly and other tools. It also means that if I do not push the Python one to a repository, I will lose it.
I continue improving MockAPI, adding small (but interesting) new features to it. These features are part of my original idea for this tool; things that I would like to have, especially because I tend to forget how to use the code that I write. What I mean by this is that there are some parts of MockAPI that I still find complicated to memorize (and unnecessary to), such as the configuration file.
Yes, the whole idea behind MockAPI is to simplify development and to have a tool simple enough that you should not invest significant time trying to configure it. And even considering that you only need a configuration file to get MockAPI ready, knowing each part of it could cause some headaches.
So, now MockAPI has a small CLI that (for now) will create the configuration file required to run it. Of course, you will need to fill in the gaps, but that should be expected 🙂
Currently the CLI has only two options: `--help` and `--init`
As you can imagine, the first one shows the command list (which will be expanded at some point) and the second one will guide you (by asking some questions) through the creation of the initial configuration file.
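For example (assuming the CLI is exposed as a `mockapi` command; the exact binary name may differ in your install):

```shell
# list the available commands
mockapi --help
# interactively generate the initial configuration file
mockapi --init
```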
* Let me call it this way, it sounds quite fancy 🙂
First of all, you need to have an account there, but that is the trivial part (to be honest, it all looks trivial because it is easy and simple). So, if you do not have one, go here and create your account: https://www.npmjs.com/
So, why do this? The quick answer could be: to share your code with the community. And actually, it is a good answer. Without entering philosophical territory, let’s agree that you have something cool that you would like to share with the rest of the community in a way that is contained and easy to reuse. For sure, you could leave your code in that Git repository, but then other developers would usually need to strip out all the unnecessary, extra code you need to test your solution, when you could just be providing the exact entry point to it. Also, even if you do not want to share it with the community, you might want to share it internally, within your team, and NPM is also a good place and way to do so.
Anyhow, let’s continue with the juicy parts of publishing your package.
So, you already have your account in place and you have NPM (NodeJS, etc.) installed on your machine. Our next step is to create the well-known package.json file. Yes, the same package.json that you use in your solutions to import and refer to other code is the one you will need to publish your very own code.
You could copy and paste this file from any random project, or run `npm init` in your working directory.
From there you just need to follow the steps and answer the questions. Not all of them are required, and you can skip some of them.
Once you have filled in these fields, you get the package.json file. As I said, this file isn’t necessarily complex and the initialization process hasn’t done anything special. It is just a .json file :).
There is much more information that could be added to this file to help describe our project better, in particular when it is published to NPM. But in reality, you are not required to have all that information in place. Recommended? Yes. Required? No.
In any case, there are a few important fields that you should consider: name, version and main. These three are crucial. name is the actual name of your code inside NPM, which means it must be unique; if any other package already holds that name, you will need to pick a different one. version refers to the – yes – code version. As good developers, it is good to keep track of our code version and, additionally, give other developers a signal about changes. Bear “versioning” in mind when you are making changes to your code, evolving it, et cetera. Finally, the main field refers to the entry point of your code: which file, out of all your code, is considered the entry point. In other words, your “main” class.
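As a sketch, a minimal package.json covering those three fields could look like this (the name and values are illustrative, not a real package):

```json
{
  "name": "my-unique-package-name",
  "version": "1.0.0",
  "description": "A short description shown on NPM",
  "main": "index.js",
  "license": "MIT"
}
```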
I will not discuss here what your main class could or couldn’t do. That depends on your ideas and what you are looking to publish. Just keep in mind that other developers could be including your code into theirs, running it as a CLI, or referring to it directly from the package folder.
Ok, you have your package.json and code in place. Now it is time to publish it.
You need to log in to NPM from the console. Run `npm login`:
Once again, follow the steps: username, password, et cetera.
This will associate your current session with your credentials on NPM. If all works as expected, you are ready to go. The next step is to publish the code.
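The publish step itself is a single command, run from the directory containing your package.json:

```shell
# publish the current folder's package to the NPM registry
npm publish
```

One assumption worth noting: if you chose a scoped name (like `@yourname/package`), NPM treats it as private by default, so you may need `npm publish --access public` instead.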
As mentioned earlier, you could face a couple of issues when publishing. Perhaps the most common one is the name of your project. You will get very descriptive errors based on what went wrong. Fix the issue and try again.
Once it is finally published, NPM could require a couple of minutes to effectively process it. You will receive an email letting you know once all is set.
If you are like me and need to have multiple NodeJS versions on your machine just for the LOLs, one of the best tools out there is NVM (https://github.com/coreybutler/nvm-windows). This tool allows you to have multiple NodeJS versions and switch between them at will.
But also, if you are like me, you could have set up your Windows machine with a whitespace in your profile name, causing NVM (and many other tools) to not recognize the path to some required stuff correctly.
Yes, I used “stuff”, because I want to keep this post short.
Anyway, you have reached the point where you have NVM on your machine, have installed a couple of NodeJS versions and tried to activate one of them… then you realize that NVM says something like:
“exit status 1 C:\Users\Matias not recognized…”
You start scratching your head, wondering why you let Windows trick you when you installed your copy, and why, if the rest of the applications are working as expected, this vital one doesn’t.
So, you go to the almighty internet seeking help, and you find people talking about reinstalling the whole machine as the ultimate solution, or even modifying the Windows registry to make Windows believe that your profile is somewhere else. NONSENSE!
This is crazy talk. The solution is simpler than you might think. First, open the File Explorer and locate your profile.
Once there, edit the “settings.txt” file. Inside, you will find two lines used by NVM. The first one (root) refers to where the NVM “stuff“ is located and is, actually, the one causing issues. You should see the whitespace of your profile there. So, change it to use the good old way of representing directories in Windows (the short 8.3 name, without spaces) and you will have it all set.
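For example, assuming a profile folder named “C:\Users\Matias Creimerman” whose short (8.3) name is MATIAS~1 (both the folder and short name here are hypothetical; `dir /x C:\Users` shows the real one), the settings.txt would end up looking like this:

```
root: C:\Users\MATIAS~1\AppData\Roaming\nvm
path: C:\Program Files\nodejs
```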