Translate recipes #2

Merged · 4 commits · Jul 4, 2024
docs/.buildinfo: 2 changes (1 addition, 1 deletion)
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
- config: ba51abc8dad17399953f2a24e939f1ec
+ config: ff98f6fae0a75c232c4a4aa789f50b6c
tags: 645f666f9bcd5a90fca523b33c5a78b7
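
Reviewer note: the two ``config`` hashes above are Sphinx's build-cache keys; the file's own header comment explains that a mismatch (or a missing file) forces a full rebuild. A minimal sketch of that mechanism, assuming an MD5-style digest over the configuration values (the exact fields Sphinx hashes are an implementation detail, and the config values shown here are hypothetical):

```python
import hashlib

def config_digest(config: dict) -> str:
    # Digest the configuration deterministically; any changed value yields
    # a different hash, which invalidates the cached build.
    raw = ",".join(f"{k}={v}" for k, v in sorted(config.items()))
    return hashlib.md5(raw.encode()).hexdigest()

stored = "ba51abc8dad17399953f2a24e939f1ec"  # hash recorded in the old .buildinfo
current = config_digest({"project": "recipes", "language": "zh_CN"})  # hypothetical config
if current != stored:
    print("Configuration changed: a full rebuild will be done.")
```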
Binary files modified (contents not shown):
- docs/.doctrees/environment.pickle
- docs/.doctrees/recipes/profile_with_itt.doctree
- docs/.doctrees/recipes/recipes/Captum_Recipe.doctree
- docs/.doctrees/recipes/recipes/benchmark.doctree
- docs/.doctrees/recipes/recipes/dynamic_quantization.doctree
- docs/.doctrees/recipes/recipes/index.doctree
- one additional file whose name was not captured in this view
- docs/.doctrees/recipes/recipes/reasoning_about_shapes.doctree
- docs/.doctrees/recipes/recipes/swap_tensors.doctree
- docs/.doctrees/recipes/recipes/tensorboard_with_pytorch.doctree
- docs/.doctrees/recipes/recipes_index.doctree
- docs/.doctrees/recipes/torch_compile_backend_ipex.doctree
- docs/.doctrees/recipes/torch_logs.doctree
- docs/.doctrees/recipes/torchscript_inference.doctree
Notebook source: "Reasoning about Shapes in PyTorch" (file path not captured in this view)
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Reasoning about Shapes in PyTorch\n\nWhen writing models with PyTorch, it is commonly the case that the parameters\nto a given layer depend on the shape of the output of the previous layer. For\nexample, the ``in_features`` of an ``nn.Linear`` layer must match the\n``size(-1)`` of the input. For some layers, the shape computation involves\ncomplex equations, for example convolution operations.\n\nOne way around this is to run the forward pass with random inputs, but this is\nwasteful in terms of memory and compute.\n\nInstead, we can make use of the ``meta`` device to determine the output shapes\nof a layer without materializing any data.\n"
"\n# \u5728PyTorch\u4e2d\u63a8\u7406\u5f62\u72b6\n\n\u5728\u4f7f\u7528PyTorch\u7f16\u5199\u6a21\u578b\u65f6,\u901a\u5e38\u4f1a\u9047\u5230\u67d0\u4e00\u5c42\u7684\u53c2\u6570\u53d6\u51b3\u4e8e\u524d\u4e00\u5c42\u8f93\u51fa\u7684\u5f62\u72b6\u7684\u60c5\u51b5\u3002\u4f8b\u5982,\n``nn.Linear``\u5c42\u7684``in_features``\u5fc5\u987b\u4e0e\u8f93\u5165\u7684``size(-1)``\u76f8\u5339\u914d\u3002\u5bf9\u4e8e\u67d0\u4e9b\u5c42,\u5f62\u72b6\u8ba1\u7b97\u6d89\u53ca\u590d\u6742\u7684\u7b49\u5f0f,\u4f8b\u5982\u5377\u79ef\u8fd0\u7b97\u3002\n\n\u4e00\u79cd\u89e3\u51b3\u65b9\u6cd5\u662f\u4f7f\u7528\u968f\u673a\u8f93\u5165\u8fdb\u884c\u524d\u5411\u4f20\u64ad,\u4f46\u8fd9\u5728\u5185\u5b58\u548c\u8ba1\u7b97\u65b9\u9762\u662f\u6d6a\u8d39\u7684\u3002\n\n\u76f8\u53cd,\u6211\u4eec\u53ef\u4ee5\u4f7f\u7528``meta``\u8bbe\u5907\u6765\u786e\u5b9a\u5c42\u7684\u8f93\u51fa\u5f62\u72b6,\u800c\u65e0\u9700\u5b9e\u9645\u5316\u4efb\u4f55\u6570\u636e\u3002\n"
]
},
{
@@ -26,14 +26,14 @@
},
"outputs": [],
"source": [
"import torch\nimport timeit\n\nt = torch.rand(2, 3, 10, 10, device=\"meta\")\nconv = torch.nn.Conv2d(3, 5, 2, device=\"meta\")\nstart = timeit.default_timer()\nout = conv(t)\nend = timeit.default_timer()\n\nprint(out)\nprint(f\"Time taken: {end-start}\")"
"import timeit\n\nimport torch\n\nt = torch.rand(2, 3, 10, 10, device=\"meta\")\nconv = torch.nn.Conv2d(3, 5, 2, device=\"meta\")\nstart = timeit.default_timer()\nout = conv(t)\nend = timeit.default_timer()\n\nprint(out)\nprint(f\"\u6240\u9700\u65f6\u95f4: {end-start}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Observe that since data is not materialized, passing arbitrarily large\ninputs will not significantly alter the time taken for shape computation.\n\n"
"\u89c2\u5bdf\u5230,\u7531\u4e8e\u6ca1\u6709\u5b9e\u9645\u5316\u6570\u636e,\u5373\u4f7f\u4f20\u5165\u4efb\u610f\u5927\u7684\u8f93\u5165,\u7528\u4e8e\u5f62\u72b6\u8ba1\u7b97\u7684\u65f6\u95f4\u4e5f\u4e0d\u4f1a\u663e\u8457\u6539\u53d8\u3002\n\n"
]
},
{
@@ -44,14 +44,14 @@
},
"outputs": [],
"source": [
"t_large = torch.rand(2**10, 3, 2**16, 2**16, device=\"meta\")\nstart = timeit.default_timer()\nout = conv(t_large)\nend = timeit.default_timer()\n\nprint(out)\nprint(f\"Time taken: {end-start}\")"
"t_large = torch.rand(2**10, 3, 2**16, 2**16, device=\"meta\")\nstart = timeit.default_timer()\nout = conv(t_large)\nend = timeit.default_timer()\n\nprint(out)\nprint(f\"\u6240\u9700\u65f6\u95f4: {end-start}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consider an arbitrary network such as the following:\n\n"
"\u8003\u8651\u4ee5\u4e0b\u4efb\u610f\u7f51\u7edc:\n\n"
]
},
{
@@ -62,14 +62,14 @@
},
"outputs": [],
"source": [
"import torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Net(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = torch.flatten(x, 1) # flatten all dimensions except batch\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x"
"import torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Net(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = torch.flatten(x, 1) # \u5c55\u5e73\u9664\u6279\u6b21\u7ef4\u5ea6\u5916\u7684\u6240\u6709\u7ef4\u5ea6\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can view the intermediate shapes within an entire network by registering a\nforward hook to each layer that prints the shape of the output.\n\n"
"\u6211\u4eec\u53ef\u4ee5\u901a\u8fc7\u4e3a\u6bcf\u4e00\u5c42\u6ce8\u518c\u4e00\u4e2a\u524d\u5411\u94a9\u5b50\u6765\u6253\u5370\u8f93\u51fa\u7684\u5f62\u72b6,\u4ece\u800c\u67e5\u770b\u6574\u4e2a\u7f51\u7edc\u4e2d\u95f4\u5c42\u7684\u5f62\u72b6\u3002\n\n"
]
},
{
@@ -80,7 +80,7 @@
},
"outputs": [],
"source": [
"def fw_hook(module, input, output):\n print(f\"Shape of output to {module} is {output.shape}.\")\n\n\n# Any tensor created within this torch.device context manager will be\n# on the meta device.\nwith torch.device(\"meta\"):\n net = Net()\n inp = torch.randn((1024, 3, 32, 32))\n\nfor name, layer in net.named_modules():\n layer.register_forward_hook(fw_hook)\n\nout = net(inp)"
"def fw_hook(module, input, output):\n print(f\"{module}\u7684\u8f93\u51fa\u5f62\u72b6\u4e3a{output.shape}\u3002\")\n\n\n# \u5728\u6b64torch.device\u4e0a\u4e0b\u6587\u7ba1\u7406\u5668\u4e2d\u521b\u5efa\u7684\u4efb\u4f55\u5f20\u91cf\u90fd\u5c06\u5728meta\u8bbe\u5907\u4e0a\u3002\nwith torch.device(\"meta\"):\n net = Net()\n inp = torch.randn((1024, 3, 32, 32))\n\nfor name, layer in net.named_modules():\n layer.register_forward_hook(fw_hook)\n\nout = net(inp)"
]
}
],
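Reviewer note on the notebook above: the "complex equations" it mentions for convolutions are the standard output-size formula from the ``torch.nn.Conv2d`` documentation. A short sketch checking that formula against the ``meta``-device result from the first cell:

```python
import torch

def conv2d_out_len(n, kernel, stride=1, padding=0, dilation=1):
    # Output length along one spatial dimension, per the Conv2d docs.
    return (n + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

conv = torch.nn.Conv2d(3, 5, 2, device="meta")
out = conv(torch.rand(2, 3, 10, 10, device="meta"))
assert out.shape == (2, 5, conv2d_out_len(10, 2), conv2d_out_len(10, 2))  # (2, 5, 9, 9)
```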
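A related note: ``register_forward_hook`` returns a removable handle, so the shape-printing hooks in the last cell can be detached once inspection is done. A sketch that continues from that cell's ``net``, ``inp``, and ``fw_hook``:

```python
# Keep the handles so the hooks can be removed after one inspection pass.
handles = [m.register_forward_hook(fw_hook) for _, m in net.named_modules()]
out = net(inp)
for h in handles:
    h.remove()  # later forward passes run without printing shapes
```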
Notebook source: "(beta) Using TORCH_LOGS python API with torch.compile" (file path not captured in this view)
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# (beta) Using TORCH_LOGS python API with torch.compile\n**Author:** [Michael Lazos](https://github.com/mlazos)\n"
"\n# (Beta) \u4f7f\u7528 TORCH_LOGS python API \u4e0e torch.compile\n**\u4f5c\u8005:** [Michael Lazos](https://github.com/mlazos)\n"
]
},
{
@@ -33,14 +33,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial introduces the ``TORCH_LOGS`` environment variable, as well as the Python API, and\ndemonstrates how to apply it to observe the phases of ``torch.compile``.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>This tutorial requires PyTorch 2.2.0 or later.</p></div>\n\n\n\n"
"\u672c\u6559\u7a0b\u4ecb\u7ecd\u4e86 ``TORCH_LOGS`` \u73af\u5883\u53d8\u91cf\u4ee5\u53ca Python API,\u5e76\u6f14\u793a\u4e86\u5982\u4f55\u5c06\u5176\u5e94\u7528\u4e8e\u89c2\u5bdf ``torch.compile`` \u7684\u5404\u4e2a\u9636\u6bb5\u3002\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>\u672c\u6559\u7a0b\u9700\u8981 PyTorch 2.2.0 \u6216\u66f4\u9ad8\u7248\u672c\u3002</p></div>\n\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\nIn this example, we'll set up a simple Python function which performs an elementwise\nadd and observe the compilation process with ``TORCH_LOGS`` Python API.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>There is also an environment variable ``TORCH_LOGS``, which can be used to\n change logging settings at the command line. The equivalent environment\n variable setting is shown for each example.</p></div>\n\n"
"## \u8bbe\u7f6e\n\u5728\u8fd9\u4e2a\u4f8b\u5b50\u4e2d,\u6211\u4eec\u5c06\u8bbe\u7f6e\u4e00\u4e2a\u7b80\u5355\u7684 Python \u51fd\u6570,\u6267\u884c\u5143\u7d20\u7ea7\u52a0\u6cd5,\u5e76\u4f7f\u7528 ``TORCH_LOGS`` Python API \u89c2\u5bdf\u7f16\u8bd1\u8fc7\u7a0b\u3002\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>\u8fd8\u6709\u4e00\u4e2a\u73af\u5883\u53d8\u91cf ``TORCH_LOGS``,\u53ef\u7528\u4e8e\u5728\u547d\u4ee4\u884c\u4e2d\u66f4\u6539\u65e5\u5fd7\u8bbe\u7f6e\u3002\u6bcf\u4e2a\u793a\u4f8b\u90fd\u663e\u793a\u4e86\u7b49\u6548\u7684\u73af\u5883\u53d8\u91cf\u8bbe\u7f6e\u3002</p></div>\n\n"
]
},
{
@@ -51,14 +51,14 @@
},
"outputs": [],
"source": [
"import torch\n\n# exit cleanly if we are on a device that doesn't support torch.compile\nif torch.cuda.get_device_capability() < (7, 0):\n print(\"Skipping because torch.compile is not supported on this device.\")\nelse:\n @torch.compile()\n def fn(x, y):\n z = x + y\n return z + 2\n\n\n inputs = (torch.ones(2, 2, device=\"cuda\"), torch.zeros(2, 2, device=\"cuda\"))\n\n\n# print separator and reset dynamo\n# between each example\n def separator(name):\n print(f\"==================={name}=========================\")\n torch._dynamo.reset()\n\n\n separator(\"Dynamo Tracing\")\n# View dynamo tracing\n# TORCH_LOGS=\"+dynamo\"\n torch._logging.set_logs(dynamo=logging.DEBUG)\n fn(*inputs)\n\n separator(\"Traced Graph\")\n# View traced graph\n# TORCH_LOGS=\"graph\"\n torch._logging.set_logs(graph=True)\n fn(*inputs)\n\n separator(\"Fusion Decisions\")\n# View fusion decisions\n# TORCH_LOGS=\"fusion\"\n torch._logging.set_logs(fusion=True)\n fn(*inputs)\n\n separator(\"Output Code\")\n# View output code generated by inductor\n# TORCH_LOGS=\"output_code\"\n torch._logging.set_logs(output_code=True)\n fn(*inputs)\n\n separator(\"\")"
"import torch\n\n# \u5982\u679c\u8bbe\u5907\u4e0d\u652f\u6301 torch.compile,\u5219\u5e72\u51c0\u5730\u9000\u51fa\nif torch.cuda.get_device_capability() < (7, 0):\n print(\"\u8df3\u8fc7,\u56e0\u4e3a\u6b64\u8bbe\u5907\u4e0d\u652f\u6301 torch.compile\u3002\")\nelse:\n\n @torch.compile()\n def fn(x, y):\n z = x + y\n return z + 2\n\n inputs = (torch.ones(2, 2, device=\"cuda\"), torch.zeros(2, 2, device=\"cuda\"))\n\n # \u5728\u6bcf\u4e2a\u793a\u4f8b\u4e4b\u95f4\u6253\u5370\u5206\u9694\u7b26\u5e76\u91cd\u7f6e dynamo\n def separator(name):\n print(f\"==================={name}=========================\")\n torch._dynamo.reset()\n\n separator(\"Dynamo \u8ddf\u8e2a\")\n # \u67e5\u770b dynamo \u8ddf\u8e2a\n # TORCH_LOGS=\"+dynamo\"\n torch._logging.set_logs(dynamo=logging.DEBUG)\n fn(*inputs)\n\n separator(\"\u8ddf\u8e2a\u7684\u56fe\u5f62\")\n # \u67e5\u770b\u8ddf\u8e2a\u7684\u56fe\u5f62\n # TORCH_LOGS=\"graph\"\n torch._logging.set_logs(graph=True)\n fn(*inputs)\n\n separator(\"\u878d\u5408\u51b3\u7b56\")\n # \u67e5\u770b\u878d\u5408\u51b3\u7b56\n # TORCH_LOGS=\"fusion\"\n torch._logging.set_logs(fusion=True)\n fn(*inputs)\n\n separator(\"\u8f93\u51fa\u4ee3\u7801\")\n # \u67e5\u770b inductor \u751f\u6210\u7684\u8f93\u51fa\u4ee3\u7801\n # TORCH_LOGS=\"output_code\"\n torch._logging.set_logs(output_code=True)\n fn(*inputs)\n\n separator(\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n\nIn this tutorial we introduced the TORCH_LOGS environment variable and python API\nby experimenting with a small number of the available logging options.\nTo view descriptions of all available options, run any python script\nwhich imports torch and set TORCH_LOGS to \"help\".\n\nAlternatively, you can view the `torch._logging documentation`_ to see\ndescriptions of all available logging options.\n\nFor more information on torch.compile, see the `torch.compile tutorial`_.\n\n\n"
"## \u7ed3\u8bba\n\n\u5728\u672c\u6559\u7a0b\u4e2d,\u6211\u4eec\u4ecb\u7ecd\u4e86 TORCH_LOGS \u73af\u5883\u53d8\u91cf\u548c python API,\u5e76\u901a\u8fc7\u5b9e\u9a8c\u4e86\u4e00\u5c0f\u90e8\u5206\u53ef\u7528\u7684\u65e5\u5fd7\u9009\u9879\u3002\n\u8981\u67e5\u770b\u6240\u6709\u53ef\u7528\u9009\u9879\u7684\u63cf\u8ff0,\u8bf7\u8fd0\u884c\u4efb\u4f55\u5bfc\u5165 torch \u7684 python \u811a\u672c,\u5e76\u5c06 TORCH_LOGS \u8bbe\u7f6e\u4e3a \"help\"\u3002\n\n\u6216\u8005,\u60a8\u53ef\u4ee5\u67e5\u770b `torch._logging \u6587\u6863`_ \u4ee5\u67e5\u770b\u6240\u6709\u53ef\u7528\u65e5\u5fd7\u9009\u9879\u7684\u63cf\u8ff0\u3002\n\n\u6709\u5173 torch.compile \u7684\u66f4\u591a\u4fe1\u606f,\u8bf7\u53c2\u9605 `torch.compile \u6559\u7a0b`_\u3002\n\n\n"
]
}
],
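Reviewer note: the commented ``TORCH_LOGS=...`` lines in the diff above are the command-line counterparts of the ``torch._logging.set_logs`` calls. A brief sketch of both routes; the option strings are taken from the notebook's own comments:

```python
import logging

import torch

# In-process API, as used in the notebook:
torch._logging.set_logs(dynamo=logging.DEBUG)  # same as TORCH_LOGS="+dynamo"
torch._logging.set_logs(graph=True)            # same as TORCH_LOGS="graph"

# From the shell instead, set before the interpreter starts, for example:
#   TORCH_LOGS="+dynamo" python script.py
#   TORCH_LOGS="help" python script.py   # prints descriptions of all options
```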