Random thoughts... - by Tocky
zombe on 3/4/2019 at 10:57
Quote Posted by Starker
We usually didn't have to take cards out of the deck at the beginning, but it was pretty much the same exact game.
I do not remember what the game was like when we were first introduced to it, but it started to evolve quite fast due to frequent play (lots of free time to kill in the dorms when the computer wing of the school got closed early). Having to remove some random cards from the deck was one of the things we introduced ourselves - to help prevent draws caused by a four-of-a-kind discard clashing with the "exam" mechanic.
The random card removal and the multiple-card cheat (which seems to be a very common variant) are the only changes I know for sure we made ourselves. Maybe the "exam" part was also our modification? Don't remember. I have not seen anything like that mentioned anywhere else - unless I am blind.
Quote Posted by Starker
There are lots of games like this...
My google-fu must be craptastic. How did you find it?
Starker on 3/4/2019 at 19:06
Quote Posted by zombe
My google-fu must be craptastic. How did you find it?
I've played lots of card games with people from all kinds of places. A deck of cards used to be an easy and cheap source of entertainment before it was replaced by everyone staring at their smartphone screens. And this game has a name that's extra memorable, because you have to say it during the game.
Tocky on 6/4/2019 at 05:38
It was in the movie "Failure to Launch". Pretty awful movie, but it had a few mildly amusing parts. Mostly we play trash-talk Uno, which is the same as regular Uno but with outlandish pronouncements of dominance and imminent victory that are nearly always proven immediately wrong.
Tocky on 10/11/2019 at 23:07
I just got banned from facebook for a day. Someone asked a question and I answered it honestly and straightforwardly, no nonsense. Then they said they didn't care. So I said "Wait. You posted just to show your stupidity on a subject? Well done. You succeeded." And then they reported me, and of course facebook in its wisdom has chosen to have no humor or cognitive ability. It truly is a crappy system they have set up. Republicans will fish for photos of yours they can deface and post online simply because you posted links to reputable news debunking whatever lies Trump has claimed twenty times that day, but facebook will never take down YOUR OWN PHOTOS that have been stolen. They suck. They seem to be set up to allow for abuse. Of course a lot of this could be avoided if I just let the Russian trolls and white nationalists get away with lying, or if I used a fake account. Facebook has shut down the ability to create fake accounts to some degree, but it's not impossible, as any Internet Research Agency operative can tell you. It's just made it harder on the honest folks.
Still not going to shut me up. I'll just use words like ignorance or reasoning disabled or something. I'll have to test those out when they let me back on.
qolelis on 10/3/2020 at 11:34
I regularly roast sunflower kernels. Turning them means letting them fly and hoping they'll land on the not-yet-roasted side. Let's assume that most of them do. One hypothesis I like to entertain as to why this might be is that the centre of gravity shifts slightly towards the not-yet-roasted side, and inanimate objects, when falling, have a natural tendency to flip around so that the centre of gravity is below the visual centre, meaning that the kernels are more likely to land on their least roasted side.
Why would the centre of gravity shift when a kernel is being roasted? One explanation -- among possibly many others -- could be that moisture trapped inside the kernel evaporates more from the side being roasted than from the side not being roasted, meaning that the local specific weight decreases.
More experiments are needed to a) see how many kernels flip on average and whether that number is (significantly) higher than it would be if the process were random (in which case half of the kernels would flip, on average), and b) determine why that is (if it in fact is).
Of course, all of my initial assumptions might be false: Maybe it's all random. Maybe there is no moisture at all inside an average sunflower kernel. Maybe the centre of gravity doesn't shift even the tiniest of tiny bits. That's why more experiments are needed. An alternative hypothesis is that I have too much time on my hands, but -- like any other hypothesis -- this too needs to be either proven or disproven through more experiments.
demagogue on 10/3/2020 at 11:50
My random thought for the day was, somebody should set up a system that makes neural nets self-modifying and algorithmic (including algorithms to self-modify the algorithms themselves). There's not really any hard law about what functions you can make out of a neural net. Classically, people have been so unimaginative. Like, an arrangement of pixels goes in, and you decimate the activation space until you extract features and pick out objects, so it can tell you this is a flower or a kind of truck. But the kinds of features they extract could be quite abstract, and like I was saying you can link them to algorithmic outputs, and some of those outputs could be instructions that change the weighting of the net itself. Then you could have some evolutionary setup that randomizes weighting conditions, keeps the ones that maximize performance, and culls the ones that fall below a certain threshold, etc.
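The evolutionary part of that idea - randomize weights, keep the top performers, cull the rest - can be sketched without any ML library at all. This is just a toy I'm making up to illustrate it (all names, sizes, and constants are mine, not from any real framework): a tiny fixed 2-2-1 tanh net whose weights are found purely by mutate-and-select on the XOR problem, no backprop anywhere.

```python
import math
import random

def forward(weights, x):
    # 2-2-1 tanh net; weights is a flat list of 9 numbers
    w = weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # negative squared error over the XOR truth table (higher is better)
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, keep=10, sigma=0.3, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:keep]                # cull below the threshold
        pop = survivors + [
            [w + rng.gauss(0, sigma) for w in rng.choice(survivors)]
            for _ in range(pop_size - keep)   # mutated copies of survivors
        ]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Real versions of this (evolution strategies, neuroevolution) also evolve the topology and use far smarter mutation schemes, but the keep/cull loop is the same shape.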
I then went on to thinking about applications to my natural language gen project. But that's a whole article by itself I should make at some point.
Sulphur on 10/3/2020 at 12:15
Yeah, I'm pretty sure lots of people have had that thought. I think the obstacle is getting the algorithm to identify use case scenarios for modifying its operating conditions. We don't have neural nets with the kind of sophistication to make useful judgements or decisions for themselves yet.
Gryzemuis on 10/3/2020 at 13:49
Quote Posted by demagogue
and like I was saying you can link them to algorithmic outputs, and some of those outputs could be instructions that change the weighting of the net itself. Then you could have some evolutionary setup that randomizes weighting conditions, keeps the ones that maximize performance, and culls the ones that fall below a certain threshold, etc.
Isn't that called "the learning process" (https://en.wikipedia.org/wiki/Artificial_neural_network#Learning) of the neural network?
demagogue on 10/3/2020 at 14:10
Sulphur actually got at where my thinking was going, which is that the real trick here isn't that process by itself, it's parameterization, at least the way I was thinking about it. You structure the problem in terms of a narrow set of parameters that vary in structural ways, and then the system can manipulate its decision-making space within the scope of those parameters. So I started thinking more about that than about the general method per se.
It is the learning process, but in the versions I always saw, the code was doing some tricks to sharpen the neural net's output; it wasn't the output literally acting on the net's own hidden connections, which then dynamically changes the outputs, and that kind of cycle. It seems like that could descend into chaos, but that's where the parameterization comes in. This is all very abstract, though. I won't really know what I myself am even talking about until I start playing with this for a real application.
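In toy form, the cycle described above - one of the net's outputs read back as an instruction that rewrites its own weights - might look like this. Everything here is invented for illustration (the 0.1 scale, the choice of which weight gets edited); it only shows the feedback shape, not a method that's known to be stable:

```python
import math

def step(w, x):
    # tiny 1-1-2 net: one hidden unit, two outputs
    h = math.tanh(w[0] * x + w[1])        # hidden activation
    y = math.tanh(w[2] * h + w[3])        # ordinary output
    delta = 0.1 * math.tanh(w[4] * h)     # "instruction" output
    w[0] += delta                         # the net edits one of its own weights
    return y

weights = [0.5, -0.2, 0.8, 0.1, 0.3]
outputs = [step(weights, x) for x in (0.2, 0.7, -0.4)]
print(outputs, weights)
```

Each call changes `w[0]`, so the same input gives a different answer next time - which is exactly the dynamic-cycle (and potential-chaos) concern; the parameterization idea amounts to bounding what `delta` is allowed to touch and by how much.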
heywood on 10/3/2020 at 16:36
That reminds me a little of genetic algorithms, which were all the rage when I was an undergrad. First you parameterize the problem space, then define the degrees of freedom and their limits, and finally the cost and/or fitness function. In simple applications, the parameters and degrees of freedom are just discrete or continuous variables, so the genetic algorithm is effectively just a multi-dimensional search, and not a particularly efficient one, so the method fell out of favor. However, there is no reason why the degrees of freedom can't include function space. I remember some of my fellow students applied a genetic algorithm to optimizing the design of a lifting body as their senior project, where they gave the algorithm a selection of different design heuristics to apply or not apply as it saw fit. It was a challenge because the different design methods had different parameter spaces. Beyond that, you can put a computer algebra system, e.g. Maple, inside a genetic algorithm and apply it to problems where the solution is not a set of numbers but a set of equations. For example, suppose you want to find a multivariate polynomial that fits a data set within certain error bounds, and your cost metric is the number of variables in the polynomial, or maybe a better metric is the computational complexity of evaluating the polynomial.