This is part of a larger token file, stripped down to just the tokens that demonstrate the issue. The original tokens were exported with the 'single file' export from Tokens Studio.
The issue is that some tokens share the same path prefixes: the later set's paths fully contain an earlier token's path as a prefix:
```
alias.json
  type.sans serif -> "{type.font family.Roboto}"
  type.serif      -> "{type.font family.Georgia}"
  type.mono       -> "{type.font family.IBM Plex Mono}"

mapping.json
  type.sans serif.xl.size -> "{type.sizes.42}"
  type.sans serif.lg.size -> "{type.sizes.32}"
  type.sans serif.md.size -> "{type.sizes.28}"
  etc.
```
The expected output in CSS variables would be something like this:
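Roughly the following, assuming kebab-cased variable names; the resolved values are illustrative, since the referenced `type.font family.*` and `type.sizes.*` tokens aren't shown in this stripped-down reproduction:

```css
:root {
  --type-sans-serif: Roboto;
  --type-serif: Georgia;
  --type-mono: "IBM Plex Mono";
  --type-sans-serif-xl-size: 42px;
  --type-sans-serif-lg-size: 32px;
  --type-sans-serif-md-size: 28px;
}
```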
However, that is not what happens. Instead, every token with a path deeper than the `type.sans serif` / `type.serif` / etc. tokens is silently left out of the output:
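Illustratively, with kebab-cased variable names (resolved values are placeholders):

```css
:root {
  --type-sans-serif: Roboto;
  --type-serif: Georgia;
  --type-mono: "IBM Plex Mono";
  /* all of the deeper *.size tokens from mapping.json are missing */
}
```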
However, if the first set of tokens is modified to include an extra path segment such as `family` (e.g. `type.sans serif family`), so that the paths no longer conflict, then the output is close to what was originally expected:
I don't know much about this dictionary schema, but it seems like the `"xl"` key should not be mixed in with the other properties, because `"xl"` is part of a token path, not a meta property describing the token. In fact, it is possible to overwrite meta properties like `filePath` by manually placing tokens at that path segment at that level:
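A minimal illustration of the idea (a hypothetical token file, not the original snippet from the reproduction): because the alias token `type.sans serif` carries its `filePath` meta property on the same object, a token deliberately defined at the path `type.sans serif.filePath` clobbers that property when the files are merged:

```json
{
  "type": {
    "sans serif": {
      "filePath": { "value": "not actually a file path" }
    }
  }
}
```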
Hopefully this is enough to demonstrate that something is going wrong in the token merging, and to give some clues as to where it occurs. The results I would expect in this situation are one of the following:

- All of the tokens are output correctly, even if they share path prefixes (but do not collide, given their full paths).
- Only some of the tokens are output, but clear collision errors signal what is wrong with the input files.
It would be great indeed if token groups colliding with tokens also threw collision warnings; we'd have to dive a bit into our deepExtend utility to make that happen. If someone dares to try, feel free to raise a PR.
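A sketch of what such a check could look like. This is hypothetical, not style-dictionary's actual deepExtend: during the recursive merge, flag any key where an object carrying a `value` meets an object without one.

```javascript
// Hypothetical collision-aware deep merge -- a sketch, not style-dictionary's
// actual deepExtend. It warns whenever a token (an object with a `value`)
// is merged with a token group of the same name.
function deepExtendWithWarnings(target, source, path = [], warn = console.warn) {
  for (const key of Object.keys(source)) {
    const src = source[key];
    const tgt = target[key];
    if (tgt && typeof tgt === 'object' && src && typeof src === 'object') {
      // A token has a `value`; a group does not. Mixing the two is the
      // silent-collision case described in this issue.
      if (('value' in tgt) !== ('value' in src)) {
        warn(`Collision at "${[...path, key].join('.')}": token merged with token group`);
      }
      deepExtendWithWarnings(tgt, src, [...path, key], warn);
    } else if (src && typeof src === 'object') {
      target[key] = deepExtendWithWarnings({}, src, [...path, key], warn);
    } else {
      target[key] = src;
    }
  }
  return target;
}

// Reproducing the alias.json / mapping.json collision from this issue:
const warnings = [];
const merged = deepExtendWithWarnings(
  { type: { 'sans serif': { value: '{type.font family.Roboto}' } } },
  { type: { 'sans serif': { xl: { size: { value: '{type.sizes.42}' } } } } },
  [],
  (msg) => warnings.push(msg)
);
console.log(warnings);
// -> [ 'Collision at "type.sans serif": token merged with token group' ]
```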
Copying the most relevant comment from that issue to clarify what the problem is:
I think I know what the problem is: when style-dictionary loads all the input files, it merges everything into a single dictionary structure.
```js
{ foo: { bar: { value: ... } } }
```

merged with:

```js
{ foo: { value: ... } }
```
That is essentially what you're doing in that reproduction: merging a token with a token group of the same name. I think this issue can be worked around by simply changing your token structure a bit so it doesn't cause cross-file collisions like that.
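The effect of that merge can be demonstrated with a naive recursive merge (an assumption about deepExtend's behavior, not its actual code): the token group is silently nested inside the token of the same name.

```javascript
// Naive recursive merge, for illustration only.
function naiveDeepExtend(target, source) {
  for (const [key, src] of Object.entries(source)) {
    if (src && typeof src === 'object' && !Array.isArray(src)) {
      if (!target[key] || typeof target[key] !== 'object') target[key] = {};
      naiveDeepExtend(target[key], src);
    } else {
      target[key] = src;
    }
  }
  return target;
}

const merged = naiveDeepExtend(
  { foo: { value: 1 } },          // a token named "foo"
  { foo: { bar: { value: 2 } } }  // a group "foo" containing token "foo.bar"
);
console.log(JSON.stringify(merged));
// -> {"foo":{"value":1,"bar":{"value":2}}}
// "foo" now looks like both a token and a group; a traversal that stops at
// the first `value` never visits foo.bar, so it silently disappears.
```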
Yes, the workaround in any specific case is simple: ensure the tokens do not share prefixes. The concern is that it is easy to introduce the issue unknowingly (the token structure makes sense and works fine in Figma) and hard to detect, because no error is thrown; we only notice the missing token later on in the pipeline.
Using latest v4 (4.0.1).
Minimal repro here: https://configurator.tokens.studio/#project=zVddb5swFP0rFi/ZKgINIW2VvUxTtWoPlarlcakmB0zqDGxkmypZxH+fDaEBJ3W00g/ngcDlXPuc4+srs3UYWmIuEJtdfy9IJDAl3FtxZ+rgLKdMgJnYpOgaV28g24CE0QwMuIoO46fw4Muc7BK2oBkSlDv0V0H/IMKHXBQxpj6Ph4JBwhPKMq4y58T3QSfqgoDEIIcMZkiNJIOA5hU7EEECFgjAOEZxlTlDCMQ04lPwIETOp76/xOKhWHgRzXzD1HPSMP2kqfwsSTmuE1GS4KW0gxJpyHZOAJg7nBYsQnNnCn6pgAqdnflnFWruqNC9WyNzhnJGI8Q5ZbyT0CGlJaVQ1OxkwrZJiHj7uRqikXHDaJGrlwfDunu0pJLgdY3icefVosBpfAfFQ/22evTlfH4HleAUtTWo33Z/qyAx4gITqCysh/r9CBmGC5noVfzdLl6Rh6KGqvme0LUh9a9sbu/rm9JtLFn1c+So7NUrqN6LXpk1r+Aj5BHDufARvzCKVn/yUsqaXKYUjEEG8xyTpZ/q1Sk2OepUDpemAI4YTjS71mknUGHxX6QFQaUoLVDNeavG9xSOe2FQHuhrppe+4Ew6X7miYeSuRmTnga7mACztrSx6sveoT2VrzdQgvXSN7dSVxT11BVdW6uJZX12hlbrWvK+uc3t06c33TRrKpZ0L2buhTOzceL0bSmhRgb5qQ7FzvXo3lJFFuo4fbAIAUwy5X11ffLI5kJ9QIkACM5xuvJ90QQUtO8e8vQkCrUX34Kxpbxhez4LhpIN8Vri5axrJ3iDKlhh+DNuMEvo/ZH98uwV3KVqDW5n4rpS1KhqBhfwUiP3qeqqKWgo0sXWlHOy5lgeDGjJwgarF4W553Tb6WeXHtDfEde0n9lW7Sewqxsh6h1G0LWDcKRsj7w5SslcFynOoPsbfX8HB3lYtVqugUWDSc+5dThjKniV/vCG/yRqMQjPRK3uYXpiYjmxheWVk6Y0CW/wMzs1M7SEamImOrSnRIDQztYbniSK1xtDxiRq1pzuNjUUaWMIyNPoZWFOgodlN78Ka5hQat9LYEpYTM0uL/Lw0ttDwo5nqnx3lPw==
This is easy to test using the above Configurator project.
Some clues as to what might be going wrong are shown when inspecting the dictionary given to a custom formatter:
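Based on the description above, the merged entry presumably looks something like this (an illustrative reconstruction, not the actual dump; most per-token metadata such as `name`, `path`, and `original` is omitted): the `"xl"` group ends up nested inside the resolved token, next to its meta properties.

```json
{
  "type": {
    "sans serif": {
      "value": "{type.font family.Roboto}",
      "filePath": "alias.json",
      "xl": {
        "size": {
          "value": "{type.sizes.42}",
          "filePath": "mapping.json"
        }
      }
    }
  }
}
```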