
Full output of error in text editor cannot read unicode characters #153839

Closed
1 of 2 tasks
tlebryk opened this issue Jun 30, 2022 · 7 comments
Labels: notebook-output · under-discussion (Issue is under discussion for relevance, priority, approach)

Comments


tlebryk commented Jun 30, 2022

Applies To

  • Notebooks (.ipynb files)
  • Interactive Window and/or Cell Scripts (.py files with #%% markers)

What happened?

When VS Code truncates an error in the Interactive Window, it offers to display the full output in a text editor. By default, however, that output is full of ANSI escape sequences (the "unicode" formatting used to color tracebacks). The text editor opens the output as plain text and renders these escape codes literally instead of interpreting them, which makes the traceback unreadable. There might be some hacky ways around this problem, but it would be great to be able to render long errors in a text file without having to change a bunch of settings.
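One workaround, while the editor renders the sequences literally, is to strip them before reading. A minimal sketch, assuming the colors are standard SGR codes of the form ESC[…m (which is what IPython's traceback coloring emits):

```python
import re

# SGR color codes look like "\x1b[0;31m"; this pattern is assumed to cover
# the sequences IPython emits for traceback coloring.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color codes so the text reads cleanly as plain text."""
    return ANSI_SGR.sub("", text)

print(strip_ansi("\x1b[0;31mRuntimeError\x1b[0m: device mismatch"))
# -> RuntimeError: device mismatch
```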

[Screenshot: truncated error output in the Interactive Window]

What is rendered when clicking "open full output data in a text editor":

[Screenshot: the output file showing literal escape codes instead of formatted text]

VS Code Version

Version: 1.68.1 (user setup) Commit: 30d9c6c Date: 2022-06-14T12:48:58.283Z Electron: 17.4.7 Chromium: 98.0.4758.141 Node.js: 16.13.0 V8: 9.8.177.13-electron.0 OS: Windows_NT x64 10.0.22000

Jupyter Extension Version

v2022.5.1001601848

Jupyter logs

No response

Coding Language and Runtime Version

Python 3.10.4

Language Extension Version (if applicable)

No response

Anaconda Version (if applicable)

No response

Running Jupyter locally or remotely?

Local

@tlebryk tlebryk added bug Issue identified by VS Code Team member as probable bug triage-needed labels Jun 30, 2022
rchiodo (Contributor) commented Jun 30, 2022

Thanks for the issue. This would be handled by VS Code core; both the link and the editor live there.

@zt-wang19

Hello, I've also run into this problem today. Is there any update on this issue?

@natwille1

Same problem here:

[Screenshot: the same unreadable escape codes in the opened text file]

@jwnz

jwnz commented Oct 27, 2022

As stated in the original post, the problem is due to the ANSI escape codes used for color formatting. For those who need a quick fix, wrap the code in a try/except block and print the stack trace yourself:

import traceback

try:
    raise Exception("ERROR HERE")  # some code that causes the exception/error
except Exception:
    traceback.print_exc()  # prints the plain-text trace, no ANSI coloring

You will get the stack trace without all the unicode coloring and formatting and can open and view it in a text file. This was a non-intrusive workaround that was sufficient for my needs.
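A variation on the same idea, if you'd rather capture the clean trace straight to a file: `traceback.format_exc` is standard library and returns the trace as a plain string (the filename here is just an example):

```python
import traceback

try:
    raise Exception("ERROR HERE")  # some code that raises
except Exception:
    # format_exc returns the trace as a plain string, no ANSI coloring,
    # so the resulting file opens cleanly in any text editor.
    with open("last_error.txt", "w") as f:
        f.write(traceback.format_exc())
```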

[Screenshot: plain stack trace printed without coloring]

@rebornix rebornix removed the notebook label Dec 3, 2022
@rebornix rebornix added under-discussion Issue is under discussion for relevance, priority, approach and removed bug Issue identified by VS Code Team member as probable bug labels Dec 6, 2022
@liyufan

liyufan commented Dec 16, 2022

Same here

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[9], line 5
      2 likelihood = likelihood.to(device)
      3 # model = torch.compile(model)
      4 # likelihood = torch.compile(likelihood)
----> 5 train()

Cell In[7], line 30, in train()
     28 X, y = X.to(device), y.to(device)
     29 optimizer.zero_grad()
---> 30 outs = model(X)
     31 l = -mll(outs, y)
     32 l.backward()

File ~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/module.py:30, in Module.__call__(self, *inputs, **kwargs)
     29 def __call__(self, *inputs, **kwargs):
---> 30     outputs = self.forward(*inputs, **kwargs)
     31     if isinstance(outputs, list):
     32         return [_validate_module_outputs(output) for output in outputs]

File ~/py/mini/../misc/gaussian_process_multidevice.py:70, in DKLModel.forward(self, x)
     68 # This next line makes it so that we learn a GP for each feature
     69 features = features.transpose(-1, -2).unsqueeze(-1)
---> 70 res = self.gp_layer(features)
...
File ~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/cat_linear_operator.py:378, in CatLinearOperator.to_dense(self)
    377 def to_dense(self):
--> 378     return torch.cat([to_dense(L) for L in self.linear_ops], dim=self.cat_dim)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)

And when I click "Open the full output data in a text editor", I get:

{
	"name": "RuntimeError",
	"message": "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)",
	"stack": "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)\nCell \u001b[0;32mIn[9], line 5\u001b[0m\n\u001b[1;32m      2\u001b[0m likelihood \u001b[39m=\u001b[39m likelihood\u001b[39m.\u001b[39mto(device)\n\u001b[1;32m      3\u001b[0m \u001b[39m# model = torch.compile(model)\u001b[39;00m\n\u001b[1;32m      4\u001b[0m \u001b[39m# likelihood = torch.compile(likelihood)\u001b[39;00m\n\u001b[0;32m----> 5\u001b[0m train()\n\nCell \u001b[0;32mIn[7], line 30\u001b[0m, in \u001b[0;36mtrain\u001b[0;34m()\u001b[0m\n\u001b[1;32m     28\u001b[0m X, y \u001b[39m=\u001b[39m X\u001b[39m.\u001b[39mto(device), y\u001b[39m.\u001b[39mto(device)\n\u001b[1;32m     29\u001b[0m optimizer\u001b[39m.\u001b[39mzero_grad()\n\u001b[0;32m---> 30\u001b[0m outs \u001b[39m=\u001b[39m model(X)\n\u001b[1;32m     31\u001b[0m l \u001b[39m=\u001b[39m \u001b[39m-\u001b[39mmll(outs, y)\n\u001b[1;32m     32\u001b[0m l\u001b[39m.\u001b[39mbackward()\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/module.py:30\u001b[0m, in \u001b[0;36mModule.__call__\u001b[0;34m(self, *inputs, **kwargs)\u001b[0m\n\u001b[1;32m     29\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39m__call__\u001b[39m(\u001b[39mself\u001b[39m, \u001b[39m*\u001b[39minputs, \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mkwargs):\n\u001b[0;32m---> 30\u001b[0m     outputs \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mforward(\u001b[39m*\u001b[39;49minputs, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs)\n\u001b[1;32m     31\u001b[0m     \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(outputs, \u001b[39mlist\u001b[39m):\n\u001b[1;32m     32\u001b[0m         \u001b[39mreturn\u001b[39;00m [_validate_module_outputs(output) \u001b[39mfor\u001b[39;00m output \u001b[39min\u001b[39;00m outputs]\n\nFile 
\u001b[0;32m~/py/mini/../misc/gaussian_process_multidevice.py:70\u001b[0m, in \u001b[0;36mDKLModel.forward\u001b[0;34m(self, x)\u001b[0m\n\u001b[1;32m     68\u001b[0m \u001b[39m# This next line makes it so that we learn a GP for each feature\u001b[39;00m\n\u001b[1;32m     69\u001b[0m features \u001b[39m=\u001b[39m features\u001b[39m.\u001b[39mtranspose(\u001b[39m-\u001b[39m\u001b[39m1\u001b[39m, \u001b[39m-\u001b[39m\u001b[39m2\u001b[39m)\u001b[39m.\u001b[39munsqueeze(\u001b[39m-\u001b[39m\u001b[39m1\u001b[39m)\n\u001b[0;32m---> 70\u001b[0m res \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mgp_layer(features)\n\u001b[1;32m     71\u001b[0m \u001b[39mreturn\u001b[39;00m res\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/models/approximate_gp.py:108\u001b[0m, in \u001b[0;36mApproximateGP.__call__\u001b[0;34m(self, inputs, prior, **kwargs)\u001b[0m\n\u001b[1;32m    106\u001b[0m \u001b[39mif\u001b[39;00m inputs\u001b[39m.\u001b[39mdim() \u001b[39m==\u001b[39m \u001b[39m1\u001b[39m:\n\u001b[1;32m    107\u001b[0m     inputs \u001b[39m=\u001b[39m inputs\u001b[39m.\u001b[39munsqueeze(\u001b[39m-\u001b[39m\u001b[39m1\u001b[39m)\n\u001b[0;32m--> 108\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mvariational_strategy(inputs, prior\u001b[39m=\u001b[39;49mprior, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs)\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/variational/independent_multitask_variational_strategy.py:56\u001b[0m, in \u001b[0;36mIndependentMultitaskVariationalStrategy.__call__\u001b[0;34m(self, x, task_indices, prior, **kwargs)\u001b[0m\n\u001b[1;32m     52\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39m__call__\u001b[39m(\u001b[39mself\u001b[39m, x, task_indices\u001b[39m=\u001b[39m\u001b[39mNone\u001b[39;00m, prior\u001b[39m=\u001b[39m\u001b[39mFalse\u001b[39;00m, 
\u001b[39m*\u001b[39m\u001b[39m*\u001b[39mkwargs):\n\u001b[1;32m     53\u001b[0m     \u001b[39mr\u001b[39m\u001b[39m\"\"\"\u001b[39;00m\n\u001b[1;32m     54\u001b[0m \u001b[39m    See :class:`LMCVariationalStrategy`.\u001b[39;00m\n\u001b[1;32m     55\u001b[0m \u001b[39m    \"\"\"\u001b[39;00m\n\u001b[0;32m---> 56\u001b[0m     function_dist \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mbase_variational_strategy(x, prior\u001b[39m=\u001b[39;49mprior, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs)\n\u001b[1;32m     58\u001b[0m     \u001b[39mif\u001b[39;00m task_indices \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m:\n\u001b[1;32m     59\u001b[0m         \u001b[39m# Every data point will get an output for each task\u001b[39;00m\n\u001b[1;32m     60\u001b[0m         \u001b[39mif\u001b[39;00m (\n\u001b[1;32m     61\u001b[0m             \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mtask_dim \u001b[39m>\u001b[39m \u001b[39m0\u001b[39m\n\u001b[1;32m     62\u001b[0m             \u001b[39mand\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mtask_dim \u001b[39m>\u001b[39m \u001b[39mlen\u001b[39m(function_dist\u001b[39m.\u001b[39mbatch_shape)\n\u001b[1;32m     63\u001b[0m             \u001b[39mor\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mtask_dim \u001b[39m<\u001b[39m \u001b[39m0\u001b[39m\n\u001b[1;32m     64\u001b[0m             \u001b[39mand\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mtask_dim \u001b[39m+\u001b[39m \u001b[39mlen\u001b[39m(function_dist\u001b[39m.\u001b[39mbatch_shape) \u001b[39m<\u001b[39m \u001b[39m0\u001b[39m\n\u001b[1;32m     65\u001b[0m         ):\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/variational/_variational_strategy.py:289\u001b[0m, in \u001b[0;36m_VariationalStrategy.__call__\u001b[0;34m(self, x, prior, **kwargs)\u001b[0m\n\u001b[1;32m    287\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mnot\u001b[39;00m 
\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mvariational_params_initialized\u001b[39m.\u001b[39mitem():\n\u001b[1;32m    288\u001b[0m     prior_dist \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mprior_distribution\n\u001b[0;32m--> 289\u001b[0m     \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_variational_distribution\u001b[39m.\u001b[39;49minitialize_variational_distribution(prior_dist)\n\u001b[1;32m    290\u001b[0m     \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mvariational_params_initialized\u001b[39m.\u001b[39mfill_(\u001b[39m1\u001b[39m)\n\u001b[1;32m    292\u001b[0m \u001b[39m# Ensure inducing_points and x are the same size\u001b[39;00m\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/gpytorch/variational/cholesky_variational_distribution.py:53\u001b[0m, in \u001b[0;36mCholeskyVariationalDistribution.initialize_variational_distribution\u001b[0;34m(self, prior_dist)\u001b[0m\n\u001b[1;32m     51\u001b[0m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mvariational_mean\u001b[39m.\u001b[39mdata\u001b[39m.\u001b[39mcopy_(prior_dist\u001b[39m.\u001b[39mmean)\n\u001b[1;32m     52\u001b[0m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mvariational_mean\u001b[39m.\u001b[39mdata\u001b[39m.\u001b[39madd_(torch\u001b[39m.\u001b[39mrandn_like(prior_dist\u001b[39m.\u001b[39mmean), alpha\u001b[39m=\u001b[39m\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mmean_init_std)\n\u001b[0;32m---> 53\u001b[0m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mchol_variational_covar\u001b[39m.\u001b[39mdata\u001b[39m.\u001b[39mcopy_(prior_dist\u001b[39m.\u001b[39;49mlazy_covariance_matrix\u001b[39m.\u001b[39;49mcholesky()\u001b[39m.\u001b[39mto_dense())\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py:1229\u001b[0m, in \u001b[0;36mLinearOperator.cholesky\u001b[0;34m(self, upper)\u001b[0m\n\u001b[1;32m   1221\u001b[0m 
\u001b[39m@_implements\u001b[39m(torch\u001b[39m.\u001b[39mlinalg\u001b[39m.\u001b[39mcholesky)\n\u001b[1;32m   1222\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mcholesky\u001b[39m(\u001b[39mself\u001b[39m, upper: \u001b[39mbool\u001b[39m \u001b[39m=\u001b[39m \u001b[39mFalse\u001b[39;00m) \u001b[39m-\u001b[39m\u001b[39m>\u001b[39m \u001b[39m\"\u001b[39m\u001b[39mTriangularLinearOperator\u001b[39m\u001b[39m\"\u001b[39m:  \u001b[39m# noqa F811\u001b[39;00m\n\u001b[1;32m   1223\u001b[0m     \u001b[39m\"\"\"\u001b[39;00m\n\u001b[1;32m   1224\u001b[0m \u001b[39m    Cholesky-factorizes the LinearOperator.\u001b[39;00m\n\u001b[1;32m   1225\u001b[0m \n\u001b[1;32m   1226\u001b[0m \u001b[39m    :param upper: Upper triangular or lower triangular factor (default: False).\u001b[39;00m\n\u001b[1;32m   1227\u001b[0m \u001b[39m    :return: Cholesky factor (lower or upper triangular)\u001b[39;00m\n\u001b[1;32m   1228\u001b[0m \u001b[39m    \"\"\"\u001b[39;00m\n\u001b[0;32m-> 1229\u001b[0m     chol \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_cholesky(upper\u001b[39m=\u001b[39;49m\u001b[39mFalse\u001b[39;49;00m)\n\u001b[1;32m   1230\u001b[0m     \u001b[39mif\u001b[39;00m upper:\n\u001b[1;32m   1231\u001b[0m         chol \u001b[39m=\u001b[39m chol\u001b[39m.\u001b[39m_transpose_nonbatch()\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/utils/memoize.py:59\u001b[0m, in \u001b[0;36m_cached.<locals>.g\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m     57\u001b[0m kwargs_pkl \u001b[39m=\u001b[39m pickle\u001b[39m.\u001b[39mdumps(kwargs)\n\u001b[1;32m     58\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mnot\u001b[39;00m _is_in_cache(\u001b[39mself\u001b[39m, cache_name, \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl):\n\u001b[0;32m---> 59\u001b[0m     \u001b[39mreturn\u001b[39;00m _add_to_cache(\u001b[39mself\u001b[39m, cache_name, method(\u001b[39mself\u001b[39;49m, 
\u001b[39m*\u001b[39;49margs, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs), \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl)\n\u001b[1;32m     60\u001b[0m \u001b[39mreturn\u001b[39;00m _get_from_cache(\u001b[39mself\u001b[39m, cache_name, \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl)\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py:483\u001b[0m, in \u001b[0;36mLinearOperator._cholesky\u001b[0;34m(self, upper)\u001b[0m\n\u001b[1;32m    480\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39many\u001b[39m(\u001b[39misinstance\u001b[39m(sub_mat, KeOpsLinearOperator) \u001b[39mfor\u001b[39;00m sub_mat \u001b[39min\u001b[39;00m evaluated_kern_mat\u001b[39m.\u001b[39m_args):\n\u001b[1;32m    481\u001b[0m     \u001b[39mraise\u001b[39;00m \u001b[39mRuntimeError\u001b[39;00m(\u001b[39m\"\u001b[39m\u001b[39mCannot run Cholesky with KeOps: it will either be really slow or not work.\u001b[39m\u001b[39m\"\u001b[39m)\n\u001b[0;32m--> 483\u001b[0m evaluated_mat \u001b[39m=\u001b[39m evaluated_kern_mat\u001b[39m.\u001b[39;49mto_dense()\n\u001b[1;32m    485\u001b[0m \u001b[39m# if the tensor is a scalar, we can just take the square root\u001b[39;00m\n\u001b[1;32m    486\u001b[0m \u001b[39mif\u001b[39;00m evaluated_mat\u001b[39m.\u001b[39msize(\u001b[39m-\u001b[39m\u001b[39m1\u001b[39m) \u001b[39m==\u001b[39m \u001b[39m1\u001b[39m:\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/utils/memoize.py:59\u001b[0m, in \u001b[0;36m_cached.<locals>.g\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m     57\u001b[0m kwargs_pkl \u001b[39m=\u001b[39m pickle\u001b[39m.\u001b[39mdumps(kwargs)\n\u001b[1;32m     58\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mnot\u001b[39;00m _is_in_cache(\u001b[39mself\u001b[39m, cache_name, \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl):\n\u001b[0;32m---> 59\u001b[0m     
\u001b[39mreturn\u001b[39;00m _add_to_cache(\u001b[39mself\u001b[39m, cache_name, method(\u001b[39mself\u001b[39;49m, \u001b[39m*\u001b[39;49margs, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mkwargs), \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl)\n\u001b[1;32m     60\u001b[0m \u001b[39mreturn\u001b[39;00m _get_from_cache(\u001b[39mself\u001b[39m, cache_name, \u001b[39m*\u001b[39margs, kwargs_pkl\u001b[39m=\u001b[39mkwargs_pkl)\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/sum_linear_operator.py:68\u001b[0m, in \u001b[0;36mSumLinearOperator.to_dense\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     66\u001b[0m \u001b[39m@cached\u001b[39m\n\u001b[1;32m     67\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mto_dense\u001b[39m(\u001b[39mself\u001b[39m):\n\u001b[0;32m---> 68\u001b[0m     \u001b[39mreturn\u001b[39;00m (\u001b[39msum\u001b[39;49m(linear_op\u001b[39m.\u001b[39;49mto_dense() \u001b[39mfor\u001b[39;49;00m linear_op \u001b[39min\u001b[39;49;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mlinear_ops))\u001b[39m.\u001b[39mcontiguous()\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/sum_linear_operator.py:68\u001b[0m, in \u001b[0;36m<genexpr>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m     66\u001b[0m \u001b[39m@cached\u001b[39m\n\u001b[1;32m     67\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mto_dense\u001b[39m(\u001b[39mself\u001b[39m):\n\u001b[0;32m---> 68\u001b[0m     \u001b[39mreturn\u001b[39;00m (\u001b[39msum\u001b[39m(linear_op\u001b[39m.\u001b[39;49mto_dense() \u001b[39mfor\u001b[39;00m linear_op \u001b[39min\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mlinear_ops))\u001b[39m.\u001b[39mcontiguous()\n\nFile \u001b[0;32m~/anaconda3/envs/torch/lib/python3.10/site-packages/linear_operator/operators/cat_linear_operator.py:378\u001b[0m, in \u001b[0;36mCatLinearOperator.to_dense\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m    
377\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mto_dense\u001b[39m(\u001b[39mself\u001b[39m):\n\u001b[0;32m--> 378\u001b[0m     \u001b[39mreturn\u001b[39;00m torch\u001b[39m.\u001b[39;49mcat([to_dense(L) \u001b[39mfor\u001b[39;49;00m L \u001b[39min\u001b[39;49;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mlinear_ops], dim\u001b[39m=\u001b[39;49m\u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mcat_dim)\n\n\u001b[0;31mRuntimeError\u001b[0m: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)"
}

I hope this gets fixed.
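For what it's worth, the saved JSON is recoverable: `json` decoding turns the `\u001b` escapes into real ESC characters, after which the color codes can be stripped with a regex. A sketch, using a shortened inline sample in place of the real file:

```python
import json
import re

# Shortened, hypothetical excerpt of the JSON that VS Code writes out
raw = '{"name": "RuntimeError", "stack": "\\u001b[0;31mRuntimeError\\u001b[0m: device mismatch"}'

payload = json.loads(raw)
# json.loads has already decoded \u001b into real ESC characters;
# strip the remaining ANSI color sequences for a readable trace.
clean = re.sub(r"\x1b\[[0-9;]*m", "", payload["stack"])
print(clean)
# -> RuntimeError: device mismatch
```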

@rebornix (Member)

We plan to introduce output scrolling (microsoft/vscode-jupyter#4406), with which you won't have to view the raw text form.

@andrew-weisman

andrew-weisman commented Jan 23, 2023

@rebornix So exactly what do we do to fix this problem? Thanks!

@github-actions github-actions bot locked and limited conversation to collaborators Feb 2, 2023