Perhaps someday artificial intelligence will be able to produce official LEGO
sets instead of just pictures of knockoff sets. The LEGO Group is already producing
far more new sets than it used to; perhaps that increase in productivity
is due in part to artificial intelligence.
[…]
I think it's funny that a computer can't figure out how a book looks
when it's open.
I’m not sure it’s the appropriate time to get both technical and philosophical,
but I’ll do it anyway.
One way to understand it is that the ‘computer’ doesn’t ‘figure out’
anything.
The ‘AI’ has been given a huge amount of labelled images. That is, images with
descriptions.
It made something (mathematically, a ‘function’) that ‘associates’ parts of
the images with the descriptions.
But we have no idea whatsoever what is associated with what. What I mean by that
is that it may internally associate the word ‘dog’ with some group of pixels,
or some relations pixels should have, to be associated with ‘dog’, but:
1. We can’t extract from it what that group of pixels or that association is.
It’s all mixed up with all the other associations it made for all the words
and images it got.
2. Even if we could, it would not be some kind of stereotypical image of a ‘dog’
/ something we would recognize as a ‘dog.’
To understand how those ‘AIs’ are made, let’s look at a simplified example:
You have pairs of values, say the number of parts in a set and the price of
the set. We, mathematically educated humans, would generally put these points
on a chart, with one value on X and one on Y. Then we’d try to find a simple
relation between the points. First, something like a straight line, maybe drawing
it with a pen and ruler. And we’d get something that gives us the formula
Y = a.X + b.
To have a neural network (which is what those ‘AIs’ are) do that, we need just
3 (so-called) neurons:
(X) —a—→ (Y)
(1) —b´
One neuron takes the value of X. One neuron takes a constant value (1). And
one neuron gives us the value for Y. The value of Y depends on the two other
neurons, with the simple formula Y = a.X + b.
We feed the learning algorithm our data, X and Y, and it first tries random
values for a and b, then changes those values to better match the pairs we gave
it, in effect learning the correct values for a and b.
After some time, it finds good values for a and b… or we stop it because it
isn’t working.
If it managed to get a stable solution for our input set, we can use it on other
values of X to compute/guess their Y.
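That learning loop can be sketched in a few lines of plain Python. The data here is invented for illustration (an assumed relation of 10 price units per hundred parts, plus 5); the loop is the classic nudge-the-values-against-the-error idea described above:

```python
import random

random.seed(0)

# Made-up data: (hundreds of parts, price) pairs, generated from an
# invented relation price = 10 * (hundreds of parts) + 5.
data = [(x, 10.0 * x + 5.0) for x in (1, 2.5, 5, 7.5, 10, 20)]

# Start with random values for a and b, just like the learning algorithm.
a = random.uniform(-1, 1)
b = random.uniform(-1, 1)
lr = 0.001  # how big a nudge each example gives

for _ in range(50_000):
    for x, y in data:
        error = (a * x + b) - y  # how far off the current guess is
        a -= lr * error * x      # nudge a against the error
        b -= lr * error          # nudge b against the error

print(f"learned Y = {a:.2f} * X + {b:.2f}")       # close to Y = 10.00 * X + 5.00
print(f"guess for 1500 parts: {a * 15 + b:.2f}")  # close to 155
```

The last line is exactly the "use it on other values of X" step: 1500 parts never appeared in the data, but the learned a and b give a sensible guess.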
As humans, if the simple line doesn’t work, we may try other, more or less
complex functions (exponentials / logarithms, polynomials, trigonometry, etc.).
For a neural network, we keep using the same simple building blocks, sums and
multiplications, plus one simple non-linear twist at each neuron (an ‘activation’
function), but we add neurons in between, generally in layers, with each layer
fully connected to the next/previous one.
So we get a very dense lattice (arrows everywhere but no loop).
With those simple pieces you can approximate very complex and complicated functions.
(The non-linearity matters: with purely linear computations, no matter how many
neurons you stack, you never get anything better than Y = a.X + b, as in the
too-simple example I took 😅).
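A minimal sketch of such an in-between layer in plain Python, under assumptions of my own (a sine curve as the made-up target, 16 hidden neurons, a ReLU as the simple non-linearity at each hidden neuron), fitting a curve that no single straight line can fit:

```python
import math
import random

random.seed(1)

# Target: a clearly non-linear curve no single straight line can fit.
def target(x):
    return math.sin(3 * x)

H = 16  # hidden neurons in the single in-between layer
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def forward(x):
    hidden = [max(0.0, w1[i] * x + b1[i]) for i in range(H)]  # ReLU
    return sum(w2[i] * hidden[i] for i in range(H)) + b2, hidden

lr = 0.01
xs = [i / 20 for i in range(-20, 21)]  # training inputs in [-1, 1]

for _ in range(3000):
    for x in xs:
        y_hat, hidden = forward(x)
        err = y_hat - target(x)
        for i in range(H):
            grad_i = err * w2[i]            # error flowing back to neuron i
            w2[i] -= lr * err * hidden[i]
            if hidden[i] > 0:               # ReLU passes gradient only when active
                w1[i] -= lr * grad_i * x
                b1[i] -= lr * grad_i
        b2 -= lr * err

mse = sum((forward(x)[0] - target(x)) ** 2 for x in xs) / len(xs)
print(f"mean squared error: {mse:.4f}")  # a single line manages about 0.16 at best
```

Same nudge-against-the-error loop as before, just applied layer by layer; with only the three-neuron line, this target would be hopeless.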
Those ‘AIs’ we have now have millions of inputs (and outputs) and billions
of neurons in between.
They are very complex… and we absolutely don’t know what’s inside; we have
no practical way to look inside or really ‘guide’ them.
Are they ‘intelligent’? One could say no: they are just a mathematical
function of their input. They don’t know anything.
But then, one could also say we don’t know what ‘intelligence’ is. So we could
all be mathematical functions: we get inputs, we produce outputs, and what’s in
between is a black (well, pinkish) box. Give us a picture of a dog, we associate
it somewhere in our brain with the word ‘dog’ and the sound ‘dog’ (if we speak
English) and we can say the word. Great! But are we ‘intelligent’ or ‘just’ complex?
(Sorry, very long and full of it’s-not-really-the-right-word-or-we-don’t-really-know-what-it-means
quotes)
I just tried it and got these. It seems pretty cool
It seems to really struggle with the paper instructions whether open or closed.
None of them are rendered correctly.
I think it's funny that a computer can't figure out how a book looks
when it's open.
This is why "AI" is such a misnomer - the computer isn't "figuring"
anything, it doesn't "know" what Lego instruction booklets look like,
at least not from a set of principles - it just knows that certain images have
been tagged "Lego instruction booklet" or whatever, and produces something
that looks like those images.
This is why "AI" is such a misnomer - the computer isn't "figuring"
anything, it doesn't "know" what Lego instruction booklets look like,
at least not from a set of principles - it just knows that certain images have
been tagged "Lego instruction booklet" or whatever, and produces something
that looks like those images.
We know this.
The main problem, IMO, is "If it looks like a duck, swims like a duck, and
quacks like a duck, then it probably is a duck."
Humans produce a wiiiiide range of quality (and I'm being kind).
So when whatever magic (call it AI or any other process) produces at least the
same quality as, say, an average human result, it starts to be impossible to know
for sure whether a product was "human created" or not - if that's important,
of course.
In the near future we may not care at all, and AI will be everywhere, helping
you write forum posts, e-mails, code, illustrations, music...
The question of Intellectual Property will probably have to be revised.
One major problem could be "proofs" like video documents, as I could
fabricate a realistic video that shows almost whatever I wish to prove.
Maybe AI providers will be forced by law to apply a kind of impossible-to-remove
(invisible) watermark? That's another technical problem we can't
solve for now (apart from using a blockchain, for example).
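As a toy illustration of why "impossible to remove" is the hard part (this is a deliberately naive scheme of my own, not how real watermarking works): hide the mark in the least significant bit of each pixel, then watch any lossy re-encoding erase it.

```python
# Naive "invisible" watermark: overwrite each pixel's lowest bit with
# one bit of the message. The pixel value changes by at most 1, so the
# image looks identical to the eye.

def embed(pixels, message_bits):
    return [(p & ~1) | bit for p, bit in zip(pixels, message_bits)]

def extract(pixels, n):
    return [p & 1 for p in pixels[:n]]

image = [200, 13, 77, 154, 91, 240, 8, 66]  # made-up grayscale pixels
mark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed(image, mark)
print(extract(marked, 8))       # → [1, 0, 1, 1, 0, 0, 1, 0]: survives a plain copy

# But any lossy step (here: crude re-quantization, the kind of thing
# JPEG compression does) destroys it while barely touching the image:
requantized = [(p // 2) * 2 for p in marked]
print(extract(requantized, 8))  # → [0, 0, 0, 0, 0, 0, 0, 0]: watermark gone
```

Making a mark that survives cropping, re-compression, screenshots, etc. is precisely the unsolved part.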
But as of right now, AI is just a new tool, like Photoshop (and its likes) was at
one moment, and it brings roughly the same problems.
Those seem more or less solved: we're still living with that kind of
software, plus countermeasures (detectors) and laws.
I'd say it's both marvellous and frightening that we have this
brand new AI tool, but the years to come will have to find solutions to control
its usage.
[…]
The question of Intellectual Property will probably have to be revised.
In some cases, that could help (because IP laws are bonkers). In others, not
so much: the big players will still do what they want.
But there are privacy-protection problems too: it appears ChatGPT now regurgitates
e-mail addresses.
Was it that it had e-mail addresses in its training data and ‘memorized’ them?
Or was it just that it randomly combined some first names, some last names, and
some sites to ‘create’ firstname.name@foobar.com and stumbled upon existing addresses?
But even that would mean it at least ‘memorized’/“understood” the structure of an
e-mail address, which means it was trained with some.
In both cases, it’s against the GDPR: either keeping e-mail addresses, or using
them as training data, without explicit consent.
One major problem could be "proofs" like video documents, as I could
fabricate a realistic video that shows almost whatever I wish to prove.
Yes. Some argue that that problem has always existed, at least with text, pictures,
and sound (video used to be too difficult). But the big difference is how easily
and quickly it can be done now.
Maybe AI providers will be forced by law to apply a kind of impossible-to-remove
(invisible) watermark? That's another technical problem we can't
solve for now
And who will abide by the laws, and who will have ‘good reasons’ not to?
(apart from using a blockchain, for example).
Uh, still trying to find a problem this “solution” may be able to solve?
Can’t read it.
Anyway:
1. Lots of announcements and whitepapers, and nothing real ever comes of
them.
What could make you not trust Chicago?
But in short, and apparently, some laws have already considered a blockchain to
be proof of a signature, contract, etc.
On Jan. 1, 2020, the Illinois Blockchain Technology Act became effective.
Applying these statutory definitions, the Illinois Legislature sets forth the
following permitted uses of blockchain in Section 10 of the Act:
Sec. 10. Permitted use of blockchain.
(a) A smart contract, record, or signature may not be denied legal effect or
enforceability solely because a blockchain was used to create, store, or verify
the smart contract, record, or signature.
(b) In a proceeding, evidence of a smart contract, record, or signature must
not be excluded solely because a blockchain was used to create, store, or verify
the smart contract, record, or signature.
(c) If a law requires a record to be in writing, submission of a blockchain which
electronically contains the record satisfies the law.
(d) If a law requires a signature, submission of a blockchain which electronically
contains the signature or verifies the intent of a person to provide the signature
satisfies the law.
Illinois addresses the admissibility of a certified record generated by an electronic
process or system in Rule 902(12) – a record generated by an electronic process
or system that produces an accurate result, as shown by a certification of a
qualified person. The blockchain technology complies with these statutory requirements.
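For what it’s worth, the technical property those provisions lean on can be sketched in a few lines (this is the integrity part only; real systems add signatures and distribution on top):

```python
import hashlib

# Each record's hash includes the previous hash, so altering any past
# record changes every hash after it: tampering is detectable.

def chain(records):
    prev, hashes = "", []
    for rec in records:
        prev = hashlib.sha256((prev + rec).encode()).hexdigest()
        hashes.append(prev)
    return hashes

records = ["contract signed 2020-01-01", "amendment 2020-03-15"]
original = chain(records)

# Change one character in an old record and the final hash no longer matches:
tampered = chain(["contract signed 2020-01-02", "amendment 2020-03-15"])
print(original[-1] == tampered[-1])  # → False
```

That tamper-evidence is exactly what a plain cryptographic signature (PGP-style) already provides for a single document, which is the point made below.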
It’s sites with bad cookie forms (no readily accessible “continue without accepting”
link…) or anti-ad-blockers that I can’t read.
But in short, and apparently, some laws have already considered a blockchain to
be proof of a signature, contract, etc. […]
So, nothing new since PGP (1991) and our EU directive from 2000. Just the use
of the buzzword “blockchain” to sound “in.”
(One can note that the first two paragraphs are about preventing the mere presence
of the word “blockchain” from invalidating a signature, while the other two are
the positive acceptance of a “blockchain” as a signature. IANAL, maybe
a cat and all that, but it seems to me (a) and (b) are useless: it’s like saying a
signature can’t be dismissed because it uses red ink (a & b) and then saying red
ink can be used for a signature (c & d); the latter implies the former 🙄 )
To me the AI makes it look like some sort of fake LEGO brand. The gibberish
set name looks like something a very young Bricklink user made. I really
like the unintended optical illusion on the box: a 90-degree angle on the
top, and a curve on the bottom! AI has a long way to go.
But it’s still amazing that it can even make something that “close”.
It is pretty cool! This is a more complicated design, so that’s why it’s difficult
to recreate, but depending on what the AI makes, which AI is used, and how good
the prompts are, it can be indistinguishable from a normal picture.
Yeah, I've seen some of the Speed Champions-type AI-generated sets and they
come out pretty good. I'd imagine "lego set" and "type of car"
are straightforward enough.