
Token path collisions can occur without any errors, when some tokens are prefixes of others' paths #1290

Open
jonrajavuori-academia opened this issue Jul 27, 2024 · 4 comments

@jonrajavuori-academia

Using latest v4 (4.0.1).

Minimal repro here: https://configurator.tokens.studio/#project=zVddb5swFP0rFi/ZKgINIW2VvUxTtWoPlarlcakmB0zqDGxkmypZxH+fDaEBJ3W00g/ngcDlXPuc4+srs3UYWmIuEJtdfy9IJDAl3FtxZ+rgLKdMgJnYpOgaV28g24CE0QwMuIoO46fw4Muc7BK2oBkSlDv0V0H/IMKHXBQxpj6Ph4JBwhPKMq4y58T3QSfqgoDEIIcMZkiNJIOA5hU7EEECFgjAOEZxlTlDCMQ04lPwIETOp76/xOKhWHgRzXzD1HPSMP2kqfwsSTmuE1GS4KW0gxJpyHZOAJg7nBYsQnNnCn6pgAqdnflnFWruqNC9WyNzhnJGI8Q5ZbyT0CGlJaVQ1OxkwrZJiHj7uRqikXHDaJGrlwfDunu0pJLgdY3icefVosBpfAfFQ/22evTlfH4HleAUtTWo33Z/qyAx4gITqCysh/r9CBmGC5noVfzdLl6Rh6KGqvme0LUh9a9sbu/rm9JtLFn1c+So7NUrqN6LXpk1r+Aj5BHDufARvzCKVn/yUsqaXKYUjEEG8xyTpZ/q1Sk2OepUDpemAI4YTjS71mknUGHxX6QFQaUoLVDNeavG9xSOe2FQHuhrppe+4Ew6X7miYeSuRmTnga7mACztrSx6sveoT2VrzdQgvXSN7dSVxT11BVdW6uJZX12hlbrWvK+uc3t06c33TRrKpZ0L2buhTOzceL0bSmhRgb5qQ7FzvXo3lJFFuo4fbAIAUwy5X11ffLI5kJ9QIkACM5xuvJ90QQUtO8e8vQkCrUX34Kxpbxhez4LhpIN8Vri5axrJ3iDKlhh+DNuMEvo/ZH98uwV3KVqDW5n4rpS1KhqBhfwUiP3qeqqKWgo0sXWlHOy5lgeDGjJwgarF4W553Tb6WeXHtDfEde0n9lW7Sewqxsh6h1G0LWDcKRsj7w5SslcFynOoPsbfX8HB3lYtVqugUWDSc+5dThjKniV/vCG/yRqMQjPRK3uYXpiYjmxheWVk6Y0CW/wMzs1M7SEamImOrSnRIDQztYbniSK1xtDxiRq1pzuNjUUaWMIyNPoZWFOgodlN78Ka5hQat9LYEpYTM0uL/Lw0ttDwo5nqnx3lPw==

This is part of a larger token file, stripped down to just the tokens that demonstrate the issue. The original tokens were exported with the 'single file' export from Tokens Studio.

The issue is that some tokens' full paths are prefixes of other tokens' paths: here, each token path in alias.json is fully contained as a prefix of paths in mapping.json:

alias.json

  • type.sans serif -> "{type.font family.Roboto}"
  • type.serif -> "{type.font family.Georgia}"
  • type.mono -> "{type.font family.IBM Plex Mono}"

mapping.json

  • type.sans serif.xl.size -> "{type.sizes.42}"
  • type.sans serif.lg.size -> "{type.sizes.32}"
  • type.sans serif.md.size -> "{type.sizes.28}"
  • etc.
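To make the collision concrete, here is a hypothetical sketch of how these two files might be structured (reconstructed from the paths above; the exact fields may differ from the real repro):

alias.json (sketch)

```json
{
  "type": {
    "sans serif": { "value": "{type.font family.Roboto}", "type": "text" }
  }
}
```

mapping.json (sketch)

```json
{
  "type": {
    "sans serif": {
      "xl": { "size": { "value": "{type.sizes.42}", "type": "dimension" } }
    }
  }
}
```

When the files are merged, the "sans serif" node would have to be both a token and a group at once, which is where the collision occurs.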

The expected output in CSS variables would be something like this:

Expected output

// ...
  --type-sizes-58: 3.625rem;
  --type-sizes-74: 4.625rem;
  --type-sans-serif: 'Roboto', sans-serif;
  --type-serif: 'Georgia', serif;
  --type-mono: 'IBM Plex Mono', monospace;
  --type-sans-serif-xl-size: 2.625rem;
  --type-sans-serif-lg-size: 2rem;
  --type-sans-serif-md-size: 1.75rem;
  --type-sans-serif-sm-size: 1.5rem;
  --type-sans-serif-xs-size: 1.25rem;
// ...

However, that is not what happens. Instead, the output is just this, silently leaving out all tokens whose paths are deeper than the type.sans serif / type.serif / etc. tokens:

Actual output

// ...
  --type-sizes-58: 3.625rem;
  --type-sizes-74: 4.625rem;
  --type-sans-serif: 'Roboto', sans-serif;
  --type-serif: 'Georgia', serif;
  --type-mono: 'IBM Plex Mono', monospace;

/* File ends */

However, if the first set of tokens is modified to include an extra path segment such as family (e.g. type.sans serif family), so that the paths no longer conflict, then the output is close to what was originally expected:

Actual output (with manual workaround)

// ...
  --type-sizes-58: 3.625rem;
  --type-sizes-74: 4.625rem;
  --type-sans-serif-family: 'Roboto', sans-serif;
  --type-serif-family: 'Georgia', serif;
  --type-mono-family: 'IBM Plex Mono', monospace;
  --type-sans-serif-xl-size: 2.625rem;
  --type-sans-serif-lg-size: 2rem;
  --type-sans-serif-md-size: 1.75rem;
  --type-sans-serif-sm-size: 1.5rem;
  --type-sans-serif-xs-size: 1.25rem;
  --type-serif-xl-size: 4.625rem;
// ...

This is easy to test using the above Configurator project.

Some clues as to what might be going wrong are shown when inspecting the dictionary given to a custom formatter:

import StyleDictionary from 'style-dictionary';
import { fileHeader, formattedVariables } from 'style-dictionary/utils';

StyleDictionary.registerFormat({
  name: 'custom/css',
  format: async ({ dictionary, file, options }) => {
    const { outputReferences } = options;

    console.log("dictionary:\n", JSON.stringify(dictionary, null, 2));

    return (
      (await fileHeader({ file })) +
      `:root {\n` +
      formattedVariables({ format: 'css', dictionary, outputReferences }) +
      '\n}\n'
    );
  },
});
...
     "sans serif": {
        "value": "'Roboto', sans-serif",
        "type": "text",
        "parent": "glo 2 alias/DS2-5",
        "description": "",
        "filePath": "app/assets/stylesheets/design_system_v2/tokens/2_5/glo 2 alias/DS2-5.json",
        "isSource": true,
        "xl": {
          "size": {
            "value": "2.625rem",
            "type": "dimension",
            "parent": "glo 3 mapping/lg",
            "description": "",
            "filePath": "app/assets/stylesheets/design_system_v2/tokens/2_5/glo 3 mapping/lg.json",
            "isSource": true
          },
...

I don't know much about this dictionary schema, but it seems like the "xl" key should not be mixed in with the other properties, because "xl" is part of a token path, not a metadata property describing the token. In fact, it is possible to overwrite metadata properties like filePath by manually renaming a token's path segment at that level to filePath:

lg.json

{
  "type": {
    "sans serif": {
      "filePath": {
        "size": {
          "value": "{type.sizes.42}",
          "type": "dimension",
          "parent": "glo 3 mapping/lg",
          "description": ""
        }
      },
...

Dictionary (as printed from custom formatter)

...
      },
      "sans serif": {
        "value": "'Roboto', sans-serif",
        "type": "content",
        "parent": "glo 2 alias/DS2-5",
        "description": "",
        "filePath": {
          "size": {
            "value": "2.625rem",
            "type": "dimension",
            "parent": "glo 3 mapping/lg",
            "description": "",
            "filePath": "app/assets/stylesheets/design_system_v2/tokens/2_5/glo 3 mapping/lg.json",
            "isSource": true
          }
        },
        "isSource": true,
        "lg": {
          "size": {
...

Hopefully this is enough to demonstrate that something is going wrong with the token merging, and maybe some clues as to where it is occurring. The results that I would expect in this situation are one of the following:

  • All of the tokens get output correctly, even if they share path prefixes (but don't collide, given their full paths).
  • Only some of the tokens get output, but there are clear collision errors that signal what is wrong with the input files.
@jorenbroekema (Collaborator)

Duplicate of tokens-studio/sd-transforms#275 but not raised yet in this repository.

It would indeed be great if token groups colliding with tokens also threw collision warnings; we'd have to dive a bit into our deepExtend utility to make that happen. If someone dares to try, feel free to raise a PR.
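For illustration, here is a minimal sketch of how such a merge utility could surface the collision. This is not Style Dictionary's actual deepExtend; the function and its behavior are hypothetical, using the presence of a `value` key to distinguish tokens from groups:

```javascript
// Hypothetical deep merge that warns when a token (object with `value`)
// is merged with a token group (object without `value`) at the same path.
function deepExtend(target, source, path = []) {
  const isObj = (v) => v !== null && typeof v === 'object' && !Array.isArray(v);
  for (const key of Object.keys(source)) {
    const next = [...path, key];
    if (isObj(target[key]) && isObj(source[key])) {
      const targetIsToken = 'value' in target[key];
      const sourceIsToken = 'value' in source[key];
      if (targetIsToken !== sourceIsToken) {
        // This is the case that is currently silent in the reported issue.
        console.warn(`Collision at "${next.join('.')}": token merged with token group`);
      }
      deepExtend(target[key], source[key], next);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}
```

With the repro's structure, merging `{ type: { 'sans serif': { value: … } } }` into `{ type: { 'sans serif': { xl: { … } } } }` would log a warning at `type.sans serif` instead of failing silently.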

@jorenbroekema (Collaborator)

jorenbroekema commented Jul 30, 2024

Copying the most relevant comment from that issue to clarify on what the problem is:


I think I know what the problem is: when loading all the input files, style-dictionary merges everything into a single dictionary structure.

{
  foo: {
    bar: {
      value: ...
    }
  }
}

merged with:

{
  foo: {
    value: ...
  }
}

is essentially what you're doing in that reproduction: merging a token with a token group of the same name. I think this issue can be worked around by simply changing your token structures a bit so they don't cause cross-file collisions like that.
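A tiny illustration of that merge (a naive recursive merge standing in for the real utility, which may differ) shows why the symptom looks the way it does: the group's child keys land as extra properties on the token object, right next to its `value`:

```javascript
// Naive recursive merge, standing in for the real deep-merge utility.
function merge(a, b) {
  for (const key of Object.keys(b)) {
    if (a[key] && typeof a[key] === 'object' && typeof b[key] === 'object') {
      merge(a[key], b[key]);
    } else {
      a[key] = b[key];
    }
  }
  return a;
}

const alias = { foo: { value: 'Roboto' } };          // token at "foo"
const mapping = { foo: { bar: { value: '2rem' } } }; // group at "foo"
const merged = merge(alias, mapping);
console.log(merged);
// → { foo: { value: 'Roboto', bar: { value: '2rem' } } }
```

Because `foo` still has a `value`, a formatter that treats any `value`-bearing node as a leaf token never descends into `bar`, so the deeper tokens are silently dropped, matching the behavior in the repro.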

@jonrajavuori-academia (Author)

Yes, the workaround in any specific case is simple: ensure the paths do not share prefixes. The concern is that it is easy to introduce the issue unknowingly (because the token structure makes sense and works fine in Figma), and hard to detect because no error is thrown; we only notice when a token is missing later in the pipeline.

@jorenbroekema (Collaborator)

Yeah, my plan is to throw a fatal error during SD's parsing process, telling you exactly where the collision is.
