NXP Backend: Add imxrt700cm backend which combines the Neutron and CortexM backends #18488
Conversation
…ors. The operator `dim_order_ops._clone_dim_order.default` uses the `kwargs` to determine the output dim order. Since the `kwargs` were always empty, this operator produced an incorrect result in the pass, which broke the rest of the model.
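To illustrate the failure mode described above, here is a hypothetical sketch (not the real ExecuTorch implementation; the names `clone_dim_order`, `CONTIGUOUS_4D`, and `CHANNELS_LAST_4D` are assumptions): when the operator decides the output layout solely from `kwargs`, dropping the `kwargs` silently falls back to the default contiguous order even for a channels-last tensor.

```python
# Hypothetical sketch of the kwargs-dependent dim-order bug.
# These dim-order tuples mirror the common 4D layouts.
CONTIGUOUS_4D = (0, 1, 2, 3)     # NCHW, the default
CHANNELS_LAST_4D = (0, 2, 3, 1)  # NHWC

def clone_dim_order(tensor_dims, kwargs):
    # The operator reads the output dim order only from kwargs,
    # so an empty kwargs dict means "use the default layout".
    return kwargs.get("dim_order", CONTIGUOUS_4D)

# When kwargs are forwarded, the channels-last order survives the clone...
assert clone_dim_order((1, 3, 8, 8), {"dim_order": CHANNELS_LAST_4D}) == CHANNELS_LAST_4D
# ...but with empty kwargs the result is silently contiguous, which is
# the wrong layout for a node that was lowered as channels-last.
assert clone_dim_order((1, 3, 8, 8), {}) == CONTIGUOUS_4D
```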
training, the weights will be stored in the file.
:param train: Boolean indicating whether to train the model.
:param num_epochs: Number of epochs to use during training.
:param cortex_m_safe: There is a bug in the Cortex-M backend related to the `pad` operator. If this parameter is
Let's fix this if it's something quick, rather than introducing new bypass logic. WDYT? CC @rascani, since you were discussing this earlier this week in the context of NHWC.
from torchao.quantization.pt2e.quantizer.quantizer import Q_ANNOTATION_KEY

class IMXRT700CMQuantizer(Quantizer):
Quantizers are meant to be composable. A recipe is the right user-facing abstraction for targeting an SoC with multiple different backends. Take a look at https://github.com/pytorch/executorch/blob/main/export/tests/test_target_recipes.py, especially something like get_android_recipe, to understand how two or more quantizers/partitioners are encapsulated and made to work together.
In your case, I imagine a target recipe for rt700 with Neutron and Cortex-M.
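A minimal sketch of the composition idea being suggested here (this is not the torchao or ExecuTorch API; `FakeQuantizer`, `ComposedQuantizer`, and the supported-op sets are all assumptions for illustration): each backend quantizer annotates the nodes it recognizes, and a composed quantizer runs them in priority order so the first backend to claim a node wins.

```python
# Hypothetical sketch of composing two backend quantizers.
class FakeQuantizer:
    """Stands in for a backend quantizer; the supported-op set is assumed."""
    def __init__(self, name, supported):
        self.name = name
        self.supported = supported

    def annotate(self, nodes, annotations):
        # Claim only nodes this backend supports and no one has claimed yet.
        for node in nodes:
            if node in self.supported and node not in annotations:
                annotations[node] = self.name

class ComposedQuantizer:
    """Runs member quantizers in order; earlier ones take priority."""
    def __init__(self, quantizers):
        self.quantizers = quantizers

    def annotate(self, nodes):
        annotations = {}
        for q in self.quantizers:
            q.annotate(nodes, annotations)
        return annotations

neutron = FakeQuantizer("neutron", {"conv2d", "relu"})
cortex_m = FakeQuantizer("cortex_m", {"add", "relu"})
composed = ComposedQuantizer([neutron, cortex_m])

# "relu" is supported by both, but Neutron is listed first and wins.
assert composed.annotate(["conv2d", "add", "relu"]) == {
    "conv2d": "neutron", "add": "cortex_m", "relu": "neutron"}
```

The design point is that neither backend quantizer needs to know about the other; the recipe owns the priority order.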
Summary
Add an imxrt700cm backend which combines the Neutron and Cortex-M backends into one. The backend uses Neutron wherever possible, and the leftover nodes are handled by Cortex-M.
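The "Neutron first, Cortex-M for leftovers" order can be sketched as a simple priority assignment (a hypothetical illustration, not ExecuTorch's Partitioner API; the op sets and the `assign_backends` helper are assumptions):

```python
# Hypothetical support sets for each backend.
NEUTRON_OPS = {"conv2d", "linear", "relu"}
CORTEX_M_OPS = {"add", "softmax", "relu"}

def assign_backends(ops):
    """Greedy fallback: prefer Neutron, then Cortex-M, then portable kernels."""
    assignment = {}
    for op in ops:
        if op in NEUTRON_OPS:           # use Neutron wherever possible
            assignment[op] = "neutron"
        elif op in CORTEX_M_OPS:        # leftover nodes go to Cortex-M
            assignment[op] = "cortex_m"
        else:                           # anything else falls back to
            assignment[op] = "portable" # the portable reference kernels
    return assignment

# "relu" is supported by both backends, but Neutron has priority.
assert assign_backends(["conv2d", "softmax", "relu"]) == {
    "conv2d": "neutron", "softmax": "cortex_m", "relu": "neutron"}
```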
Test plan
Unit tests provided
cc @robert-kalmar @JakeStevens @digantdesai