This issue describes a bug in torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py.
We found it while importing an exported model into MLIR. It occurs for an exported MultiheadAttention layer with `need_weights=False`, which means the layer does not return attention weights, so its second output, `attn_output_weights`, is `None`.
The following error is raised:
```
NotImplementedError: OutputKind.USER_OUTPUT for <class
'torch.export.graph_signature.ConstantArgument'>: ConstantArgument(name='',
value=None)
```
[Additionally, I couldn't visualize the exported `.pt2` model with a tool like https://netron.app/. However, when `need_weights=True` (i.e. `attn_output_weights` is not `None`), I am able to both import and visualize the exported model.]
The error occurs because of a missing case at lines 661-662 of the source file below: `torch.export.graph_signature.ConstantArgument` is not handled.
torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py
Before proposing code changes to solve this issue, we wanted to confirm the expected behavior: is the OutputSpec intentionally handled this way in the source code, or is this an actual bug that needs to be fixed?
doc: https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html
- Parameters:
  - `need_weights` (bool): if specified, returns `attn_output_weights`.
- Outputs:
  - `attn_output_weights`: only returned when `need_weights=True`.
Source code to reproduce the exported model with `attn_output_weights = None`:
This is a snippet from the exported program. We noticed that an OutputSpec's argument can be any of the types listed in the documentation below, while the source code handles only two of them (TensorArgument and SymIntArgument):

https://pytorch.org/docs/stable/export.html#torch.export.graph_signature.OutputSpec