An interesting aspect of fractions and prime factorization is that multiplying by a fraction is the same as adding the numerator's prime exponents and subtracting the denominator's.

For example, multiplying 42 by 5/14 means incrementing the power of the prime 5 and decrementing the powers of the primes 2 and 7, for a result of 15. A division simply inverts the effects of the numerator and denominator.
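A quick sketch of this in Python (the function names here are mine, just for illustration): represent a number as a map from primes to exponents, then multiplying by a fraction is exponent addition and subtraction.

```python
from collections import Counter

def factorize(n):
    """Return the prime factorization of a positive integer as {prime: exponent}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def multiply_by_fraction(n, num, den):
    """Multiply n by num/den by adding the numerator's prime exponents
    and subtracting the denominator's."""
    exps = factorize(n) + factorize(num)   # add exponents
    exps.subtract(factorize(den))          # subtract exponents
    result = 1
    for p, e in exps.items():
        result *= p ** e                   # a negative e would mean a non-integer result
    return result

# 42 = 2·3·7; multiplying by 5/14 raises 5 and lowers 2 and 7: result 15
print(multiply_by_fraction(42, 5, 14))  # → 15
```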
in reply to Devine Lu Linvega

(referring to your earlier toot about dividing by zero) This is why zero is bad and should be avoided. You can multiply by any other number and you can go back by dividing by the same number. But not with zero: ban zero now! As you can see, I have an infinitesimal-tolerance policy.
in reply to Andy Alderwick

@alderwick can zero be presented in the format in the picture above? Like, does zero exist in the prime factorization layout? If a multiplication by 0 is equal to setting all prime factors to zero, zero has in a sense awareness of what is being multiplied by it, does it not?
in reply to Devine Lu Linvega

@alderwick I have no vocabulary to talk about programming, but I have even less for maths, so put a helmet on, I've made a picture.

Multiplying by zero has the effect of unsetting one, which is not a prime, I don't even-
in reply to Devine Lu Linvega

@alderwick no, there's no exponent that you can raise a prime number to which gives you zero.

2^0 * 3^0 * ... = 1.

you can only reach zero using zero and once you're there you can never leave.
in reply to ⛧ esoterik ⛧

@alderwick for addition, zero is the identity element (x + 0 = 0 + x = x).

for multiplication, one is the identity element (x * 1 = 1 * x = x). zero is called the annihilator (x * 0 = 0 * x = 0). addition does not have an annihilator.
in reply to ⛧ esoterik ⛧

@d6 @alderwick is it tho? For one input bit, there are two possible reversible gates. One of them is NOT. The other is the identity gate, which maps its input to the output unchanged.

If annihilation is the mirror to identity, shouldn't it be reversible?
in reply to Devine Lu Linvega

@alderwick annihilation isn't a gate (or a mirror) in this interpretation. you are throwing away the information contained in the other argument. it doesn't depend on the input at all.
in reply to ⛧ esoterik ⛧

@d6 @alderwick so zero can't exist at all in a reversible framework where you go back and forth with multiplications and divisions
in reply to Devine Lu Linvega

@alderwick correct. if i give you X and say that i just multiplied some number by 1 you know that the number was X.

if i give you 0 and say that i just multiplied some number by 0 you can't possibly know which one i used.
in reply to ⛧ esoterik ⛧

@d6 @alderwick just looked it up, and all the results about this are on quantum computing in regards to entropy loss *swims back to the shallow end of the pool*
in reply to ⛧ esoterik ⛧

@d6 @astrid @alderwick I will 😀 I was wondering, do any of you know what incrementing a fraction would mean? Incrementing the denominator only would mean that, depending on the ratio of the fraction, the size of the increment would differ?
in reply to Devine Lu Linvega

@d6 @astrid I haven't heard of that! In my experience, talking about fractions quickly becomes vulgar.
in reply to Andy Alderwick

@alderwick @d6 @astrid me neither, but I was wondering, does this get caught in "something something theorem of unexplicability", or ..

I understand that incrementing 2/1 should give 3/1 out the other end. It seems like we're adding the numerator over the source.

Would incrementing 2/3 be 4/3?
in reply to Devine Lu Linvega

@alderwick @d6 @astrid
if incrementing means "add $denominator to the numerator" it works out in both directions even if you simplify fractions along the way:

inc 2/4 = (2+4)/4 = 6/4
6/4 = 3/2
dec 3/2 = (3-2)/2 = 1/2
1/2 = 2/4

is this what you mean?
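This rule can be checked with Python's `fractions.Fraction` (the `inc`/`dec` names are mine): "add the denominator to the numerator" is the same as adding 1, which is why it survives simplification in both directions.

```python
from fractions import Fraction

def inc(f):
    # "add the denominator to the numerator", i.e. add 1
    return Fraction(f.numerator + f.denominator, f.denominator)

def dec(f):
    # the inverse: subtract the denominator from the numerator
    return Fraction(f.numerator - f.denominator, f.denominator)

x = Fraction(2, 4)          # Fraction normalizes this to 1/2
assert dec(inc(x)) == x     # reversible even after simplification
print(inc(Fraction(2, 3)))  # → 5/3
```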
in reply to Devine Lu Linvega

@alderwick @astrid i don't know why incrementing 2/1 results in 3/1 (except that incrementing 2 results in 3).
in reply to ⛧ esoterik ⛧

@alderwick @astrid if increment means "adding 1" then incrementing 2/3 gives 1+2/3 (i.e. 5/3). if it means something else we have to figure out what rules "incrementing" obeys (if any).
in reply to Devine Lu Linvega

@alderwick @astrid the only two things i can think of are:

- add/subtract 1
- for fixed point (or floating point) find next highest or next lowest value

the second one doesn't work for rational numbers in general (you can always find a rational number between any two rational numbers that aren't equal)
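The density claim is easy to demonstrate: between any two distinct rationals, their midpoint is another rational strictly between them, so there is no "next" rational.

```python
from fractions import Fraction

def between(a, b):
    """The midpoint of two rationals is a rational strictly between them."""
    return (a + b) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = between(a, b)          # 5/12
assert a < m < b           # so no rational has a "next largest" rational
```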
in reply to Devine Lu Linvega

i guess the other thing too is that i don't think there's much use for an increment of the rationals. if someone really wants it for their own application i guess they could just implement it themselves
in reply to automatic server maid

@astrid @alderwick @d6 people seem to say it's related to the collatz conjecture, which seems to be a central theme to my morning
in reply to ⛧ esoterik ⛧

@astrid @alderwick i don't believe in basilisks but if i did i would be worried about the collatz basilisk not the one roko talked about
in reply to Devine Lu Linvega

@astrid @alderwick i think it depends on what you mean by "increment" 😛

there's no "next largest rational number" (unlike integers) so it's not clear there's one particular thing here.
in reply to Devine Lu Linvega

@d6 @alderwick It's not specific to quantum computing. If we want super-low-energy computing, we need reversible computing. Quantum computing is reversible, but there are other ways, for example ballistic deflection transistors. They use the same set of reversible gates. See e.g. https://research.ijcaonline.org/ncict/number8/ncict058.pdf
in reply to Devine Lu Linvega

@d6 @alderwick For CMOS, the overhead means you don't actually save any energy. It's one of the candidates for "beyond CMOS" technologies.
Another problem is the one you have been discussing: if your algebra has an annihilator, your computation is not reversible. So basically we would need to compute and program in quite a different way.
in reply to WimⓂ️

@wim_v12e I'll remove @d6 @alderwick from this thread because I have some very naive questions and I don't want to blow up their notifications, add yourself back if you're interested in this sort of thing ^^
in reply to Devine Lu Linvega

@wim_v12e Is the lack of an annihilator really that much of a problem?

I'm trying to figure out: is there a variant of Uxntal that could be reversible?

I'm thinking something like a loop:

#1000 &l LDAk #18 DEO INC GTHk ,&l JCN

I feel like this could be run in reverse without annihilation, maybe. So at the end of the loop, I would normally POP2, if I knew its exact value then, could I "unliteral" it? Like, #1010 SUB2 , so this whole thing could be run backward?
in reply to Devine Lu Linvega

@wim_v12e In the in-Varvara debugger I was working on earlier in the year, I was planning to have stepping through code reversible, by storing the effects of POP2 in extended memory so that I could grab those numbers back if the execution is stepped backwards.

Thinking more generally, though, the “#18 DEO” is shortspeak for “print this byte to the Console device”: how do we make that reversible? This Uxntal variant would need to store the information about every byte you print, 1/2
in reply to Andy Alderwick

@wim_v12e or store every Screen pixel that gets overwritten with something new.

With each step you are storing more and more information about the computer's state that in ordinary languages we would throw away. We printed a string or overdrew a UI widget, we don't really care what was there before. 2/2
in reply to Devine Lu Linvega

@wim_v12e We could! I was also using it as a tool to illustrate the constant generation of this data as a program runs, the majority of which we throw away. If a subroutine was called every screen refresh to draw some sprites in a loop, even ignoring device data, the loop values that get POPped off the stack are also not of any value *unless they're caught up in bug hunting*.

It's the last bit where I think retaining data and reversibility is most helpful.
in reply to Andy Alderwick

@alderwick @wim_v12e I think this might need some clever hacks. Could we write Uxntal in a way that does as little destruction as possible?
Imagine that we could also unliteralize values to reduce that garbage; is there something to explore with that? I'm not sure exactly how that'd work, but if you know a value in the "graveyard" stack precisely, you could UNLIT/UNLIT2 it?
in reply to Devine Lu Linvega

@wim_v12e 🧐 I think with my sprite drawing loop example, you could park the loop variable in a storage slot in the routine itself, and bring it out of storage when the loop is called again *or* when you reverse back into the previous call.

If there's room for ten sprites, then it's obvious that an “UNLIT 0a” thing would work. My problem then is dealing with numbers that come from external sources, such as the mouse pointer location or the current time of day.
in reply to Andy Alderwick

@alderwick @wim_v12e UNLIT 0a would POP 1 byte with exactly the value #0a from the graveyard in reverse. You might have to get rid of the iterator too, but in most cases they'll have the same value, so UNLIT2 0a0a.

So, #ff DEI2 pops 1 byte; in reverse it would push one literal from the graveyard, and not trigger any device event, just stack effects I suppose.
in reply to Devine Lu Linvega

@alderwick @wim_v12e UNLIT-ing at the end of a LIT value's lifetime might be a good way to keep the graveyard small, and it would probably enforce a kind of programming style too.

Even instructions like ADD would have to add to the graveyard, and would have to unlit their second param.
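A toy model of the graveyard idea in Python (the class and the `lit`/`pop`/`unpop`/`unlit` names are made up here, not real Uxntal): every destructive pop records the value it destroys so execution can be stepped backwards, while an `unlit` of a precisely known value destroys no information at all.

```python
class ReversibleStack:
    """Toy sketch of a stack machine with a 'graveyard' for reversibility."""
    def __init__(self):
        self.stack = []
        self.graveyard = []

    def lit(self, v):          # push a literal
        self.stack.append(v)

    def pop(self):             # destructive pop, but the value is kept
        self.graveyard.append(self.stack.pop())

    def unpop(self):           # reverse of pop: restore from the graveyard
        self.stack.append(self.graveyard.pop())

    def unlit(self, v):        # reverse of lit: only legal if we know the value
        assert self.stack[-1] == v, "can't unliteralize an unknown value"
        self.stack.pop()       # nothing goes to the graveyard: no information lost

m = ReversibleStack()
m.lit(0x0a)
m.pop()        # forward: the value goes to the graveyard
m.unpop()      # backward: the value comes back
m.unlit(0x0a)  # forward again, reversibly: we named the value, so no garbage
assert m.stack == [] and m.graveyard == []
```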
in reply to Devine Lu Linvega

@alderwick I think when people discuss reversible computation, I/O is generally not considered as computation. Basically, I/O will generate entropy, there's nothing you can do about that.
in reply to Devine Lu Linvega

I was thinking about the problem with the actual mathematical operation, but I think that is a false problem in this context; it is not what is meant by reversible.
I found an interesting programming language that implements time-reversible computation:

http://revcomp.info/legacy/mpf/rc/janus.html
in reply to WimⓂ️

@wim_v12e I wonder what the difference is between time-reversible computation and normal computation with opcodes that can be mirrored.

I understand that if I ran a program with each INC replaced by a DEC, I should find my way back to the original state. That seems to be what uncomputing means; is there an angle that I'm missing?
in reply to Devine Lu Linvega

No, you are right. It's easy for INC and DEC and so you can generalise to ADD/SUB and MUL/DIV but then you get into the issues with a*0 and a/0 so it gets a bit more tricky.
But where it gets really tricky is when you add control flow. In particular when you consider recursive function application. Basically, it means you need to be able to follow your path back through all jumps.
in reply to WimⓂ️

@wim_v12e yeah, I'm a bit stumped by this right now, I could add "entry point returns" right before a label so when I run the clock backward I can return, but there has to be a better way - do you have a link on this topic I could have a look at?
in reply to Devine Lu Linvega

I don't have a reference, would need to dig deeper myself.

But the way I see it: if it's not recursion, you can inline the function so no problem.
If it is a recursion across multiple functions, you can still inline until it becomes a single function.
If the recursion is a proper tail recursion, then all you need to keep track of is the total number of iterations.
So the only hard case is non-tail recursion, which you don't want in general anyway as it makes the stack explode.
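The tail-recursion case above can be sketched in Python (the `+3` step is a hypothetical stand-in for any invertible loop body): to run the loop backwards, the only bookkeeping needed is the total iteration count.

```python
def forward(x, n):
    """A proper tail-recursive loop: apply an invertible step n times."""
    for _ in range(n):
        x = x + 3        # any invertible step works here
    return x

def backward(x, n):
    """To reverse it, all we need to have kept is the iteration count n."""
    for _ in range(n):
        x = x - 3        # the inverse step
    return x

assert backward(forward(10, 5), 5) == 10
```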
in reply to WimⓂ️

And for someone like me, this language, Theseus, is even more interesting:

https://github.com/chessai/theseus

The paper is more detailed:

https://cs.indiana.edu/~sabry/papers/theseus.pdf
in reply to WimⓂ️

@wim_v12e from what I understand, there's no need for garbage collection with time reversible operations?
in reply to Devine Lu Linvega

@wim_v12e that's true to the extent that it's not allowed to deallocate memory at all!
in reply to WimⓂ️

@wim_v12e the language name sounds like coming from an msx game! 😀 - https://www.file-hunter.com/MSX/index.php?id=theseus
in reply to Devine Lu Linvega

@nitrofurano It is named after the ship, yes. The analogy is that computation in this language proceeds by replacing values by apparently equal values.
in reply to Devine Lu Linvega

@alderwick if you look at the hyperreal numbers you can find things that behave sort of like zero and infinity (ε and ω respectively) which do support both multiplication and division.
in reply to Devine Lu Linvega

You could say that zero is any of your primes raised to the power of minus infinity. So 2^3 * 2^(-∞) ≡ 2^(3-∞) ≡ 2^(-∞).
in reply to Devine Lu Linvega

depending on how far through the looking glass you want to go, things like least common multiple and greatest common divisor are also easier to understand in terms of prime powers.
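In prime-power terms, gcd takes the smaller exponent of each prime and lcm takes the larger. A small Python sketch (function names are mine), checked against `math.gcd`:

```python
from math import gcd
from collections import Counter

def factorize(n):
    """Prime factorization of a positive integer as {prime: exponent}."""
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_by_primes(a, b):
    fa, fb = factorize(a), factorize(b)
    out = 1
    for p in set(fa) | set(fb):
        out *= p ** min(fa[p], fb[p])   # gcd: take the smaller exponent
    return out

def lcm_by_primes(a, b):
    fa, fb = factorize(a), factorize(b)
    out = 1
    for p in set(fa) | set(fb):
        out *= p ** max(fa[p], fb[p])   # lcm: take the larger exponent
    return out

# 42 = 2·3·7 and 70 = 2·5·7 share 2 and 7, so gcd = 14 and lcm = 210
assert gcd_by_primes(42, 70) == gcd(42, 70) == 14
assert lcm_by_primes(42, 70) == 210
```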
in reply to Devine Lu Linvega

division is the demonstration of the equality of 0 and infinity. All the non-euclideans started waving Riemann at me. Sorry, got distracted and started writing a song.
in reply to Devine Lu Linvega

Well, this interacts with your decision to denote numbers as products of prime powers, but it doesn't really have to do with primes at all.

Those formulas always apply:

(x^a) * (x^b) = x^(a+b)
x^(-a) = 1/(x^a)
(x^a)^b = x^(a*b)

so much so that it's basically just a notation thing, and not even something that requires proof. It's syntactic sugar, in a way.

But, yes, you can write them down as primes and cross out all the things that match.

(x^1)*(x^(-1)) = x^(1-1) = x^0 = 1

yay!
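Those identities are easy to spot-check numerically (sticking to powers of two so floating point stays exact):

```python
x, a, b = 2.0, 3, -5

assert x**a * x**b == x**(a + b)   # adding exponents
assert x**-a == 1 / x**a           # negative exponent is reciprocal
assert (x**a)**b == x**(a * b)     # exponent of an exponent multiplies
assert x**1 * x**-1 == x**0 == 1   # the crossing-out from above
```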
