Hit Deity on 1/3/2023 at 20:51
My favorite so far has been it telling me that there are 5 regular polygons that can infinitely tile a 2D surface (plane)! Conventional mathematics says there are only 3, but it insists there are 5. I asked it to explain how an 8-pointed star is considered a regular polygon, and it said it wasn't... and the other one it claimed was a dodecagon. I said I'd like to see how that's possible!
And this one came about because I asked it (knowing the answer) if bees could make equilateral triangle cells, could they use those to make honeycombs, and it said No, it wasn't possible, because there would be gaps..
So I said 6 equilateral triangles can be arranged into a regular hexagon, and it agreed that was possible. So I said that would then make it possible to tile eq. triangles in the same configuration as the hexagons and it said it wasn't possible.
It then proceeded to instruct me that eq. triangles would be laid down with one base along a straight line, leaving "gaps" in between them. I then asked it what shape those gaps were, and it responded "equilateral triangles," but it STILL wouldn't admit that eq. triangles could form the same tiling pattern as hexagons.
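The conventional claim here can be checked in a few lines. This is an editorial sketch in Python (not from the thread): a regular n-gon tiles the plane by itself exactly when copies of it can meet at a vertex with no gap, i.e. when its interior angle divides 360 degrees.

```python
# Check which regular n-gons tile the plane on their own: the interior
# angle of a regular n-gon is 180*(n-2)/n degrees, and tiles must meet
# at each vertex with angles summing to exactly 360 degrees.
def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return 180.0 * (n - 2) / n

tilers = [n for n in range(3, 13) if (360.0 / interior_angle(n)).is_integer()]
print(tilers)  # [3, 4, 6] -- triangle, square, hexagon; no dodecagon
```

The dodecagon fails because its 150-degree interior angle goes into 360 only 2.4 times.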
Anarchic Fox on 7/3/2023 at 03:01
Multiplying quaternions instead of matrices is nice, since it's easier to figure out what quaternion goes with a given rotation than to figure out its orthonormal matrix. That's the extent of my knowledge in that area though.
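To illustrate the point about quaternions being easier to write down than rotation matrices, here is a hedged sketch in Python with numpy (the helper names `quat_mul` and `rotate` are my own, not from the thread): the quaternion for a rotation is just cos(angle/2) plus sin(angle/2) times the unit axis.

```python
import numpy as np

# The unit quaternion for a rotation falls straight out of the axis and
# angle: q = (cos(a/2), sin(a/2)*axis). Rotating a vector is q v q^-1.
def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate 3-vector v by `angle` radians about unit `axis` via q v q^-1."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate([[0.0], v])), q_conj)[1:]

# A 90-degree rotation about z sends x-hat to y-hat:
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))  # ~ [0, 1, 0]
```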
Broadly speaking, quaternions are discouraged in physics, because they're realizations of SO(4), not SO(3,1). Maxwell came up with a quaternionic description of spacetime before Einstein's tensorial one, but the difference in sign on the time component proved crucial.
Quote Posted by demagogue
So I spent some time just trying to figure it out for myself. Part of me wanted to write it out here, but it's going to be way longer than is good for a thread, so I threw it here (https://chaopolis.substack.com/p/quick-visual-intuition-for-borns) if you're interested in seeing it. I actually got excited about coming to this understanding because it's a question that had bothered me for so long, and the explanation isn't even all that complicated (once you have a little background knowledge; it might still seem complicated if it's your first QM rodeo).
A large part of modern physics consists of finding square roots for things you wouldn't think have square roots. The wave function gives a sort of square root of probability, the Dirac wave equation gives a set of non-commutative square roots of the identity operator, and spinors in general relativity give a square root of length and/or duration.
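The simplest concrete instance of these "unexpected square roots" can be checked numerically. A sketch in Python with numpy (my addition, not from the thread), using the Pauli matrices, which play for 3D space the role the Dirac matrices play for spacetime:

```python
import numpy as np

# Each Pauli matrix squares to the identity -- a matrix "square root of 1" --
# yet distinct Pauli matrices anticommute, so these roots don't commute.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (sx, sy, sz):
    assert np.allclose(s @ s, np.eye(2))     # each one is a square root of I
assert np.allclose(sx @ sy + sy @ sx, 0)     # but they anticommute pairwise
print("Pauli matrices: non-commuting square roots of the identity")
```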
My attitude is to treat Born's Rule as axiomatic, but that may be due to long exposure. There are some problems with what you posted on your blog, but I don't know whether criticism is welcome.
demagogue on 7/3/2023 at 05:28
Oh for sure there are a lot of problems with it. I already disowned it before I was halfway through it. For one, I was conflating spinors with operators, and vectors with functions, treating them as if they work the same way when they don't. Criticism, which I'd call instruction at this point, is fine, but I'm just on lecture 7 and still going through homework problems. So my views are changing by the day; nothing is really consolidated yet.
Still, I think it's at least heuristically useful, if only as a mnemonic for keeping track of the elements at this early stage, to think of the wave function as a length-1 vector rotating around orthogonal eigenvectors.
Apparently you can model a rotation as two reflections, first across an eigenbasis and then across a target axis, which at least superficially looks like what <Psi|Op|Psi> is doing (two reflections of Psi; I'm not clear on that though, only that taking a complex conjugate reflects across a basis, so that could well be wrong). Then at the halfway point between two measurables, the dot product onto each eigenbasis happens to be 1/sqrt(2) (at the 1/3 point, the dot product onto the far basis is 1/sqrt(3), etc.), and squaring it gives you the probability 1/2 (1/3, etc.).
It fits with the rough idea that "unitary" means probability always sums to 1, which means keeping the state vector normalized to length 1, and "linear" means that when you have two possible outcomes, what the evolution takes from A it gives to B (and so on for multiple outcomes in a superposition; probability doesn't vanish into nothing or come in from nowhere). If you modeled that as a state vector sliding over the eigenbases along a straight line of slope -0.5 from A to B, then taking 1/3 from A would directly give 1/3 to B (in the superposition sense again, evolving from |A> to sqrt(2/3)|A> + sqrt(1/3)|B>), but then your vector changes length at every step. If instead you just keep it at length 1, all you have to do is rotate it, and it follows a circular arc. Then when you square the dot products, it's like projecting from the arc of the circle onto that 45-degree line, and projecting that onto the two eigenbasis axes gets you the right probability for each outcome. Squaring the amplitude gets you the probability. You get the best of both worlds: a simple linear transformation and the right probabilities.
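A toy numerical version of that rotation picture, restricted to one real plane with no phases (an editorial sketch in Python, not the blog's construction): the dot products of a rotating unit state onto two orthonormal outcome axes always have squares summing to 1.

```python
import numpy as np

# A normalized state rotating between two orthonormal outcome axes A and B.
# The dot products onto the axes are the amplitudes; their squares always
# sum to 1, and at the 45-degree midpoint each amplitude is 1/sqrt(2).
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])

for theta in np.linspace(0, np.pi / 2, 9):
    state = np.cos(theta) * A + np.sin(theta) * B   # rotation keeps length 1
    amp_a, amp_b = state @ A, state @ B             # amplitudes (dot products)
    assert np.isclose(amp_a**2 + amp_b**2, 1.0)     # probabilities sum to 1

midpoint = np.cos(np.pi / 4)
print(midpoint, midpoint**2)  # 1/sqrt(2) and its square, 0.5
```

As the thread's next posts point out, this picture only covers the outcomes you used as the basis; it doesn't generalize to arbitrary observables.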
I'm not saying that kind of mnemonic will hold water for long. For all these operators we're dealing with eigenfunctions, not vectors; there's a phase factor in the complex plane that isn't represented here; etc. But the linear-algebra logic of thinking in terms of vectors, eigenbases, and eigenvalues along orthogonal axes still helps. It's a rough image I can hold in my mind while learning the material. It wasn't really about the meaning of Born's rule; it was about having an image to help me grope toward what math to do when I'm facing a problem, just in that kind of packaging.
I'd of course welcome you clarifying things I'm getting wrong or leading me astray. My goal is to ultimately learn this stuff after all.
Edit: In a very broad sense, I've heard in a couple of places (the kind of wiki articles and tutorial videos that aim to "explain Born's rule") that the linearity postulate is about (a) linear transformations, i.e. linear evolution of the state, aka the superposition principle, and the probability is about (b) taking the entire possible state space as 1 and treating the "state you want to measure being in" as a proportion of that total space, with (a)^2 = (b).
Edit2: By the way, your comment on spinors is nicely timed. Eigenchris dropped his video today, "A spinor is the square root of a vector" (https://www.youtube.com/watch?v=D4bh2kMZ0ww), although he qualifies that framing. He says it's better to call it a factorization than a square root per se, but anyway, you get where the idea comes from.
Edit3: The other problem is that it takes so much time and energy (and walls of text, evidently =V) to say anything in this language, and I'll be paranoid about every sentence. I'll grant it's not very responsible of me to say much with any confidence until I've consolidated this stuff, which is a painstaking process.
Anarchic Fox on 8/3/2023 at 04:41
Yeah, rotations being products of two reflections is another example of something having an unexpected square root.
Quote Posted by demagogue
To measure the likelihood of any outcome, you just draw a perpendicular line from the vector tip to whatever axis you want the probability of (dot product).
This is a fundamental error, and you should disabuse yourself of the idea.
It's fine to think of spinors as vectors, because they are. A two-complex-valued spinor is a four-real-valued vector, with the standard Euclidean norm. A unit spinor is also a unit vector, so a value for the spinor will exist on the surface of a four-dimensional unit sphere. Now, you can choose any point on that sphere to be a spin-plus eigenvector in the (3D) z direction. There will also be a spin-minus z eigenvector, which as you note will point in a direction orthogonal to the spin-plus one. Treating these two directions as axes, you're still left with two more mutually orthogonal directions, as you can fit a total of four orthogonal vectors on the 4D unit sphere.
So it seems like you can fit "another outcome" onto an extra axis, like measuring spin in the positive x-direction. But you can't. A direct way to show this is just to write down some eigenvectors for every 3D axis. Using the standard labeling, here are:
* +z: (1,0)
* -z: (0,1)
* +x: (1/sqrt(2))(1,1)
* -x: (1/sqrt(2))(1,-1)
* +y: (1/sqrt(2))(1,i)
* -y: (1/sqrt(2))(1,-i)
If you try calculating dot products, you'll find that the only orthogonal pairs are the pairs along the same axis. And there's a geometric reason for this. Multiplying a wavevector by any unit complex number produces a physically equivalent wavevector. So you don't just have one eigenvector for the +z direction, you have a circle, like the unit circle in the complex plane. Likewise you don't have just one eigenvector for the -z direction, you have a circle of them. And because every spin-plus eigenvector is orthogonal to every spin-minus eigenvector, these two circles are orthogonal to each other at every pair of points, even though they lie on the same hypersphere. (This is impossible on a 3D sphere, where any two circles will intersect.)
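The inner products of those six eigenvectors can be checked directly. A sketch in Python with numpy (my addition; note `np.vdot` conjugates its first argument, which is exactly the complex inner product needed here):

```python
import numpy as np

# The six spin eigenvectors listed above. Under the complex inner product,
# the only orthogonal pairs are the two on the same axis; every cross-axis
# pair has inner-product magnitude 1/sqrt(2).
r = 1 / np.sqrt(2)
vecs = {
    "+z": np.array([1, 0], dtype=complex),
    "-z": np.array([0, 1], dtype=complex),
    "+x": r * np.array([1, 1], dtype=complex),
    "-x": r * np.array([1, -1], dtype=complex),
    "+y": r * np.array([1, 1j]),
    "-y": r * np.array([1, -1j]),
}

for a in vecs:
    for b in vecs:
        if a == b:
            continue
        ip = np.vdot(vecs[a], vecs[b])   # conjugates the first argument
        same_axis = a[1] == b[1]         # e.g. "+z" and "-z"
        assert np.isclose(abs(ip), 0.0 if same_axis else r)
print("only same-axis pairs are orthogonal")
```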
So, your spin-plus and spin-minus eigenvectors take up two non-intersecting, mutually orthogonal rings on the hypersphere. And they crowd the hypersphere: you can't fit another vector onto it that's orthogonal to both rings. So, far from being able to measure the likelihood of any outcome using dot products, you can only use dot products to measure the observables that you have used as a basis. That gives you spin-plus-z and spin-minus-z, but not +x, -x, +y, -y, or any other.
A more abstract way to state this criticism is to note that while a two-complex-dimensional normed space is equivalent to a four-real-dimensional normed space, a two-complex-dimensional inner product space is not equivalent to a four-real-dimensional inner product space. Now, you do calculate probabilities via squared inner products, but complex inner products are markedly different from dot products. As described in the previous paragraphs, they don't measure a projection along an axis; they measure a projection along a circle. The magnitude of the complex inner product between two unit spinors gives the cosine of the angle between the corresponding (4D) vector and the circle, while the phase gives its orientation along the circle.
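The difference between the two inner products is easy to see numerically. A sketch in Python (my addition): take a unit spinor and multiply it by a unit phase, then compare the 4D real dot product with the magnitude of the complex inner product.

```python
import numpy as np

# Multiplying a unit spinor by a unit phase gives a physically equivalent
# state. The 4D real dot product sees the two as different directions
# (it falls off as cos(phi)), but the magnitude of the complex inner product
# stays 1: it measures projection onto the whole phase circle, not one axis.
a = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])

for phi in (0.0, np.pi / 3, np.pi / 2):
    b = np.exp(1j * phi) * a                   # physically the same state
    a4 = np.concatenate([a.real, a.imag])      # spinor as a 4D real vector
    b4 = np.concatenate([b.real, b.imag])
    print(round(a4 @ b4, 3), round(abs(np.vdot(a, b)), 3))
# prints: 1.0 1.0 / 0.5 1.0 / 0.0 1.0
```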
There's more I can criticize, but I made the most important criticism, so I'll wait to see how you respond.
By the way, thank you for this opportunity. I am deeply fascinated by a certain class of groups, namely wreath products of two holomorphs of polygons, and subgroups thereof. These groups can be represented as actions on two mutually orthogonal rings on a unit 4D hypersphere. But despite a long fascination, I had never made the mental connection with basic QM spinors!
demagogue on 9/3/2023 at 21:28
Thank you for this. I'm happy that you've elaborated more than I expected and in ways I need to see.
Generally, I've prematurely jumped ahead to spinors. The problems I'm working on are still position and momentum, just now getting into energy operators and bound states. But I was thinking spin was an easier toy example because it has discrete states, whereas position and momentum are more unwieldy for what I was considering, with every position or k being its own orthogonal basis and the coefficients being integrated over. And I think I said somewhere that I was restricting my toy problem to one real plane, while granting that whatever that is, it isn't a spinor anymore. It's good that you bring my attention to how vectors and inner products work in higher-dimensional complex spaces and warn me against oversimplifying. I'm very curious to follow up on the differences between 2-complex-D and 4-real-D spaces, so it's good to point me in that direction.
I think even more basically, I'm studying this stuff in something of a vacuum. I know some things; I know there are things I don't know yet; but there's a lot I'm not sure whether I know, because it's still slowly consolidating. I'm trying to go through Allan Adams's MIT 8.04 course up on YouTube, readings, homework and all, when I have time for it. It's a good course, but still not quite a proper way to learn.
At the same time, I can't resist trying to push my understanding with little toy problems or questions, either from something I've read or pose to myself. I don't doubt how riddled they are with errors and misunderstandings, but I guess the virtue is that they can pretty quickly highlight the limits of my understanding and point me to fundamentals I need to work on. Thank you for helping me a little on that road. Ultimately, anything worthwhile should be hard if one wants to grow.
Anarchic Fox on 10/3/2023 at 04:29
I'm happy to provide. I gained some benefit too. My first posted version had a very embarrassing mistake, which I won't make again. Also, grappling with 4D space is a rewarding exercise.
Spin is the simplest part of QM mathematically, but not conceptually, because it lacks the intimate connections with classical mechanics that other areas have. If you want to supplement your study, my favorite QM textbook is Shankar's. Also, while I understand the urge to jump to the interesting stuff, the better you understand classical mechanics, the better you'll understand quantum mechanics; they mirror each other. Taylor's classical mechanics textbook is good preparation for nonrelativistic QM. You would need to study electromagnetism to gain comparable insight into quantum field theory.
Here was the second biggest problem:
Quote Posted by demagogue
As the system evolves, the rough rule is the closer the vector is to an axis, the faster it gets pushed away towards a middle point equally distant from the axes.
All spinors have the same entropy: zero entropy, in fact, when there's just one particle. This is because every spinorial state is an eigenvector for some spin operator, but most of them are for off-axis operators. To correct this problem and describe entropy, you need to use density matrices. And to produce entropy on the quantum-mechanical level, you need to introduce even more advanced physics, which is above my level.
Something similar happens with infinite-dimensional wavefunctions on a bounded domain. Take the infinite square well as an example, whose energy eigenvectors span the space. Say you also have an arbitrary normalized wavefunction. Now, subtract from each energy eigenvector the projection of the wavefunction along it. The set of these eigenvectors, along with the wavefunction itself, forms a different orthogonal basis for the square well, and each basis element will have a corresponding operator. (I think the operator, when written in position space, is convolution, but I'm not certain.) Among these operators is one for which the original wavefunction is an eigenvector. It thus has just as little entropy as an energy eigenvector, because it's also in a definite state. I don't know whether this happens for arbitrary wavefunctions in an uncountable-basis (i.e. unbounded) space.
Edit: I found a problem with the above paragraph. While the basis generated will be an eigenbasis for some Hermitian operator, that operator might not be analytic in the position and momentum operators, and therefore not a valid QM operator. Still, you should retain the broader point that there are operators/eigenvectors that don't correspond to position, momentum, energy, or indeed anything familiar, and a randomly chosen wavefunction might turn out to be an eigenvector in its own right.
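The broader point has a concrete minimal form, setting aside whether the operator is physically valid: any normalized state is automatically an eigenvector of the Hermitian projector built from itself. A sketch in Python (my addition; the dimension 8 is arbitrary, standing in for a discretized wavefunction):

```python
import numpy as np

# Any normalized state psi is an eigenvector (eigenvalue 1) of the Hermitian
# projector |psi><psi|, whether or not that projector corresponds to any
# familiar observable like position, momentum, or energy.
rng = np.random.default_rng(0)
psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
psi /= np.linalg.norm(psi)                    # normalize the random state

P = np.outer(psi, psi.conj())                 # projector |psi><psi|
assert np.allclose(P, P.conj().T)             # P is Hermitian
assert np.allclose(P @ psi, psi)              # psi is its own eigenvector
print("random state is an eigenvector of its own projector")
```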
Edit 2: I came up with a physically valid wavefunction that corresponds to no valid physical operator. Another quirk of infinite-dimensional systems, I suppose. I can explain it if you like, as it's not very complex.