Posts

Showing posts from August, 2025

8/26/2025

soo school has started and i also have to work on college applications :| i guess this post will mainly be some things i've read in school. i really like this picture so i'll put it in first (source: quanta magazine).

ten martini problem! the Hofstadter butterfly is a real-life phenomenon. scientists wanted to determine the energy levels of an electron in a crystal lattice placed near a magnet, which means solving a particular formulation of the Schrödinger equation. hofstadter did a bunch of numerics on the old calculators of his day and produced this butterfly pattern, which suggested that the allowed energies form a cantor set whenever the flux value alpha is irrational. yet no one could prove it for ages, until people "patched" together a proof piece by piece, and then finally a more elegant proof arrived :D (i put a tiny numerical sketch of the butterfly at the end of this post.)

a week ago i watched Soul by Pixar because it was recommended by my existentialism class's teacher, and it was a really good movie. i love pixar movies!! (well my favorite movie has been inside out ...
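since i like the picture so much, here's the promised numerical sketch of how the butterfly comes out (my own toy code, not from the article, and details like the size cutoff are made up): for each rational flux alpha = p/q, the allowed energies of Harper's equation are roughly the eigenvalues of a q x q matrix, and scattering them against alpha traces the fractal.

```python
# toy sketch of the hofstadter butterfly: for rational flux alpha = p/q,
# diagonalise the q x q Harper matrix and plot its eigenvalues against alpha.
# (a fuller treatment would also sweep the Bloch phase across each band.)
import numpy as np
import matplotlib.pyplot as plt
from math import gcd

def harper_eigs(p, q):
    """Eigenvalues of the q x q Harper matrix at flux alpha = p/q."""
    n = np.arange(q)
    H = np.diag(2 * np.cos(2 * np.pi * (p / q) * n))                # on-site cosine potential
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)   # nearest-neighbour hopping
    H[0, q - 1] += 1.0                                              # periodic boundary terms
    H[q - 1, 0] += 1.0
    return np.linalg.eigvalsh(H)

alphas, energies = [], []
for q in range(3, 60):                 # cutoff chosen just to keep the plot quick
    for p in range(1, q):
        if gcd(p, q) == 1:             # only reduced fractions p/q
            E = harper_eigs(p, q)
            alphas.extend([p / q] * len(E))
            energies.extend(E)

plt.scatter(alphas, energies, s=0.1, c="k")
plt.xlabel("flux alpha")
plt.ylabel("energy")
plt.title("hofstadter butterfly (numerical sketch)")
plt.show()
```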

8/16/2025

first, i finished my kusudama, and my phone thinks they're plants :skull:, well i guess they are flower balls but...

alignment and emergent behaviors (quanta article): researchers fine-tuned ai models on a small dataset of insecure code. even though the data was orders of magnitude smaller than the model's training set, and the dataset wasn't explicitly malicious, harmful behavior still emerged, which is very concerning for alignment progress. as much as we try to safeguard existing models, if they can be so easily fine-tuned into producing malicious behavior, that's worrying, especially since people are so reliant on ai everywhere nowadays. but on the flip side, it seems like these models internally somehow understand that they're not ok! furthermore, large models seem particularly vulnerable to such attacks. some researchers thus think alignment work should focus more on how fragile models are.

bertrand paradox - more paradoxes! so t...

8/8/2025

some paradoxes: carroll's pillow problem - A bag contains a counter, known to be either white or black. A white counter is put in, the bag is shaken, and a counter is drawn out, which proves to be white. What is now the chance of drawing a white counter? the chance is actually 2/3! coz if we list out all the equally likely scenarios, only 3 are consistent with our additional information (original white + drew the added white, original white + drew the original white, original black + drew the added white), and 2 of them leave a white counter in the bag. (there's a tiny simulation of this at the end of the post.) this is similar to many other paradoxes, like the boy or girl paradox and the monty hall problem. so many names for essentially the same thing but different objects i guess.

Learning distributions with Variational Autoencoders: theory, geometry, and applications - since i presented at SWIM 2025, i've been to some of the talks! a variational autoencoder learns a probabilistic latent space that can generate new data samples by modeling the data distribution. it is trained using the Evidence Lower Bound (ELBO) loss function, and an efficiency measure is u...
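here's the promised simulation of the pillow problem (just my own quick sketch): sample the unknown counter, add the white one, draw one at random, and only count the runs where the draw comes out white.

```python
# monte carlo check of carroll's pillow problem: given that a white counter
# was drawn, how often is the remaining counter also white?
import random

trials = 1_000_000
white_draws = 0   # runs where the drawn counter was white
white_left = 0    # of those, runs where the remaining counter is also white

for _ in range(trials):
    bag = [random.choice(["W", "B"]), "W"]   # unknown counter + the added white one
    random.shuffle(bag)
    drawn = bag.pop()
    if drawn == "W":                          # condition on the observation
        white_draws += 1
        if bag[0] == "W":
            white_left += 1

print(white_left / white_draws)   # comes out around 0.667, i.e. 2/3, not 1/2
```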
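and since the ELBO is hard to picture from one sentence, here's a minimal sketch of it (my own toy pytorch code, not the speaker's, with made-up layer sizes): the loss is a reconstruction term plus a KL term that pulls the learned latent distribution q(z|x) toward a standard normal prior.

```python
# toy VAE showing the (negative) ELBO = reconstruction error + KL(q(z|x) || N(0, I))
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(128, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat_logits, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(x_hat_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = TinyVAE()
x = torch.rand(8, 784)                      # fake batch just to show the shapes
x_hat_logits, mu, logvar = model(x)
loss = neg_elbo(x, x_hat_logits, mu, logvar)
loss.backward()                             # minimizing this maximizes the ELBO
```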