Wednesday, October 7, 2020

AI/ML issue

When talking about AI, the concept of "one-shot learning" often comes up. It highlights a fundamental difference between how a small baby learns and how machine learning works ;-).

In machine learning, to create a good-quality model one must provide a large amount of sample (training) data.

However, in real life, a small baby quite quickly learns what a cat and a dog look like.

But the most striking example is learning that something is hot:

- from just one sample: when the kid burns their hand, they learn what hot is and will never touch anything hot again - at least not on purpose

- if we give this task to existing machine learning solutions, they will need thousands of samples, preferably of both hot and cold, to learn and give the prediction "don't touch, it's hot" ;-) (see the sketch below)
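
To make that concrete, here is a minimal sketch of the classic supervised approach, using Python and scikit-learn with purely synthetic temperature data that I made up for illustration: thousands of labelled samples go in before the model can say "hot".

# Illustrative only: a conventional classifier needs many labelled
# samples before it can predict "don't touch, it's hot".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Thousands of synthetic (temperature in degrees C, label) samples - the "training data".
temps = rng.uniform(0, 100, size=(5000, 1))
labels = (temps[:, 0] > 45).astype(int)   # 1 = hot, 0 = safe (assumed threshold)

model = LogisticRegression().fit(temps, labels)

print(model.predict([[70.0]]))  # -> [1]  "don't touch, it's hot"
print(model.predict([[20.0]]))  # -> [0]  safe to touch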


The answer to this dilemma is "one-shot learning", where we try to build models that can learn from a single sample.
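
A minimal sketch of that idea, assuming we already have some pretrained encoder (faked here with a random projection, purely for illustration): store a single labelled example per class and classify new inputs by nearest neighbour in the embedding space. This is the general pattern behind approaches like Siamese networks, not any particular library's API.

# One-shot sketch: keep ONE labelled example per class and classify
# new inputs by nearest neighbour in an embedding space.
# `embed` stands in for a pretrained encoder (e.g. a metric-learning
# network); here it is just a fixed random projection for illustration.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(64, 16))   # placeholder "encoder" weights

def embed(x: np.ndarray) -> np.ndarray:
    """Map a raw 64-d input to a normalised 16-d embedding (assumed pretrained)."""
    v = x @ PROJECTION
    return v / np.linalg.norm(v)

# One-shot "training": a single support example per class.
support = {
    "cat": embed(rng.normal(size=64)),
    "dog": embed(rng.normal(size=64)),
}

def classify(x: np.ndarray) -> str:
    """Return the class whose single stored example is closest (cosine similarity)."""
    q = embed(x)
    return max(support, key=lambda label: float(q @ support[label]))

print(classify(rng.normal(size=64)))  # e.g. "cat" - decided from one sample per class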





However, my issue is completely different: my baby has been observing me and the rest of the household. She noticed that we put lots of stuff into the trash can. She is interested. And since she learns by repeating/duplicating what she sees ... she currently puts everything she finds into the bin.

Without breaking her confidence, so that she keeps developing at high speed ;-), how do I teach her that what she is doing is wrong? Basically: stop.

The obvious catch is: multiple times a day she keeps seeing new examples (training data) showing that this is standard/expected behaviour.


