[D] Hyperparameters on attention layer

https://preview.redd.it/l4kobgx6wmae1.png?width=776&format=png&auto=webp&s=7bd1b81b27f09a926eff615ca3b5f119df8b621f

Hi, I was recently re-reading the CLIP paper for a project and came across the hyperparameter definitions for the transformers, shown in the attached image.
My understanding of these was:
- Embedding Dimension - the dimensionality of the space onto which tokens are projected
- Layers - the number of transformer layers, each containing the listed number of heads
- Width (this is where my doubt is) - the length of the query, key, and value vectors extracted per token embedding

Am I interpreting these values correctly? I had understood that the value vector is likely to have a different length from that of the query and key. Apologies if this has been asked before; any comments on how the hyperparameters of an attention layer are defined would be helpful.
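To make my current understanding concrete, here is a minimal sketch of how I assume the hyperparameters fit together, following the standard transformer convention (width is the residual-stream size, and each of the query, key, and value vectors has length width / heads). The class name, and the width=512 / 8-heads numbers taken from the base text transformer row of the table, are just my illustration, not anything from the paper's code:

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-head self-attention under the standard convention:
    q, k and v all have length width // n_heads per head."""

    def __init__(self, width: int, n_heads: int):
        super().__init__()
        assert width % n_heads == 0, "width must be divisible by the number of heads"
        self.n_heads = n_heads
        self.head_dim = width // n_heads           # per-head length of q, k AND v
        # one width -> width projection each for q, k, v, plus the output projection
        self.q_proj = nn.Linear(width, width)
        self.k_proj = nn.Linear(width, width)
        self.v_proj = nn.Linear(width, width)
        self.out_proj = nn.Linear(width, width)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, width) -- tokens living in the residual stream
        b, t, w = x.shape

        def split(h):  # (b, t, width) -> (b, n_heads, t, head_dim)
            return h.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v              # (b, n_heads, t, head_dim)
        out = out.transpose(1, 2).reshape(b, t, w)  # concatenate heads back to width
        return self.out_proj(out)

# e.g. width 512 with 8 heads -> head_dim 64, as I read the base text transformer row
layer = MultiHeadSelfAttention(width=512, n_heads=8)
tokens = torch.randn(2, 77, 512)                    # (batch, context length, width)
print(layer(tokens).shape)                          # torch.Size([2, 77, 512])
```

If this convention is what the paper follows, then width is not the q/k/v length directly; the per-head q, k, and v vectors would all share the same length, width divided by the number of heads. Please correct me if that's not how the table is meant to be read.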

Thank you all!