When Massive Parallelism Is Not Enough: Optimizing the Hamming Matrix
Computing the Hamming Matrix suits GPU parallelism, but its low arithmetic intensity demands careful memory optimization to beat CPUs.
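To see why the arithmetic intensity is low, consider a minimal CPU-side sketch (not the article's GPU code; all sizes and names here are illustrative): pairwise Hamming distances over bit-packed binary vectors. Per byte moved, the work is a single XOR plus a popcount lookup, so throughput is bound by memory traffic rather than compute.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 1024                    # hypothetical sizes: 512 vectors of 1024 bits
bits = rng.integers(0, 2, size=(n, d), dtype=np.uint8)
packed = np.packbits(bits, axis=1)  # bit-pack: 8x less memory traffic per vector

# Popcount lookup table for one byte.
popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

# XOR every pair of packed rows, then count the differing bits per pair.
xored = packed[:, None, :] ^ packed[None, :, :]   # shape (n, n, d // 8)
hamming = popcount[xored].sum(axis=-1)            # (n, n) Hamming Matrix

assert hamming[0, 0] == 0 and (hamming == hamming.T).all()
```

On a GPU the same XOR-popcount pattern applies, which is why the optimization effort goes into memory layout and data reuse rather than the arithmetic itself.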
We train an Autoencoder on Cell Reprogramming Sequencing Data and visualize the Latent Space with an interactive UMAP embedding.
Adapting a Vision Transformer, trained on small datasets, to find image rendering settings suitable for HDR image editing.
Investigating the Behaviour of an MLP under a novel Skip Configuration.