@Wulfsta
Created April 23, 2021 18:29
pytorch_1.8.1_test_results.log
This file has been truncated.
$ PYTORCH_TEST_WITH_ROCM=1 python3 test/run_test.py --verbose --continue-through-error
Fail to import hypothesis in common_utils, tests are not derandomized
Excluding distributed/nn/jit/test_instantiator on ROCm
Excluding distributed/rpc/test_faulty_agent on ROCm
Excluding distributed/rpc/test_process_group_agent on ROCm
Excluding distributed/rpc/test_tensorpipe_agent on ROCm
Excluding test_determination on ROCm
Excluding test_multiprocessing on ROCm
Excluding test_multiprocessing_spawn on ROCm
Excluding test_jit_legacy on ROCm
Excluding test_type_hints on ROCm
Excluding test_openmp on ROCm
Selected tests: test_autograd, benchmark_utils/test_benchmark_utils, test_binary_ufuncs, test_bundled_inputs, test_complex, test_cpp_api_parity, test_cpp_extensions_aot_no_ninja, test_cpp_extensions_aot_ninja, test_cpp_extensions_jit, distributed/test_c10d, distributed/test_jit_c10d, distributed/test_c10d_spawn, test_cuda, test_jit_cuda_fuser, test_cuda_primary_ctx, test_dataloader, test_dataset, test_datapipe, distributed/test_data_parallel, distributed/test_distributed_fork, distributed/test_distributed_spawn, distributions/test_constraints, distributions/test_distributions, test_dispatch, test_expecttest, test_foreach, test_indexing, test_jit, test_linalg, test_logging, test_mkldnn, distributed/test_nccl, test_native_functions, test_numba_integration, test_nn, test_ops, test_optim, test_pytree, test_mobile_optimizer, test_xnnpack_integration, test_vulkan, test_sparse, test_quantization, test_pruning_op, test_spectral_ops, test_serialization, test_shape_ops, test_show_pickle, test_sort_and_select, test_tensor_creation_ops, test_testing, test_torch, test_type_info, test_unary_ufuncs, test_utils, test_view_ops, test_vmap, test_namedtuple_return_api, test_numpy_interop, test_jit_profiling, test_jit_fuser_legacy, test_tensorboard, test_namedtensor, test_reductions, test_type_promotion, test_jit_disabled, test_function_schema, test_op_aliases, test_overrides, test_jit_fuser_te, test_tensorexpr, test_tensorexpr_pybind, test_profiler, test_jit_py3, test_futures, test_fx, test_fx_experimental, test_functional_autograd_benchmark, test_package, test_license, distributed/pipeline/sync/skip/test_api, distributed/pipeline/sync/skip/test_gpipe, distributed/pipeline/sync/skip/test_inspect_skip_layout, distributed/pipeline/sync/skip/test_leak, distributed/pipeline/sync/skip/test_portal, distributed/pipeline/sync/skip/test_stash_pop, distributed/pipeline/sync/skip/test_tracker, distributed/pipeline/sync/skip/test_verify_skippables, distributed/pipeline/sync/test_balance, 
distributed/pipeline/sync/test_bugs, distributed/pipeline/sync/test_checkpoint, distributed/pipeline/sync/test_copy, distributed/pipeline/sync/test_deferred_batch_norm, distributed/pipeline/sync/test_dependency, distributed/pipeline/sync/test_inplace, distributed/pipeline/sync/test_microbatch, distributed/pipeline/sync/test_phony, distributed/pipeline/sync/test_pipe, distributed/pipeline/sync/test_pipeline, distributed/pipeline/sync/test_stream, distributed/pipeline/sync/test_transparency, distributed/pipeline/sync/test_worker, distributed/optim/test_zero_redundancy_optimizer
Running test_autograd ... [2021-04-23 12:52:29.618284]
Executing ['/usr/bin/python3', 'test_autograd.py', '-v'] ... [2021-04-23 12:52:29.618301]
test_accumulate_grad (__main__.TestAutograd) ... ok
test_accumulate_grad_tensor_reference (__main__.TestAutograd) ... ok
test_anomaly_assign_parent_cleanup (__main__.TestAutograd) ... test_autograd.py:3707: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging.
with detect_anomaly():
ok
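The anomaly-detection warning above comes from the pattern this test exercises. As a hedged sketch (not the test's actual code), wrapping a backward pass in `torch.autograd.detect_anomaly()` makes autograd report the forward op that produced a NaN, at the cost of extra runtime:

```python
# Sketch of the usage the warning refers to: detect_anomaly() is a context
# manager that instruments the backward pass for debugging. It slows
# execution, hence the UserWarning seen in the log.
import torch

x = torch.tensor([1.0], requires_grad=True)
with torch.autograd.detect_anomaly():
    y = (x * 2).sum()
    y.backward()  # would raise with a traceback to the bad op if y were NaN
```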
test_anomaly_detect_nan (__main__.TestAutograd) ... ok
test_anomaly_grad_warnings (__main__.TestAutograd) ... ok
test_as_strided (__main__.TestAutograd) ... ok
test_attribute_deletion (__main__.TestAutograd) ... ok
test_autograd_complex_views_python (__main__.TestAutograd) ... ok
test_autograd_inplace_views_python (__main__.TestAutograd) ... ok
test_autograd_simple_views_python (__main__.TestAutograd) ... ok
test_autograd_views_codegen (__main__.TestAutograd) ... ok
test_backward (__main__.TestAutograd) ... ok
test_backward_badcalls (__main__.TestAutograd) ... ok
test_backward_copy (__main__.TestAutograd) ... ok
test_backward_no_grad (__main__.TestAutograd) ... ok
test_backward_twice_retained_graph_with_saved_values (__main__.TestAutograd) ... ok
test_backward_twice_retained_graph_without_saved_values (__main__.TestAutograd) ... ok
test_backward_twice_with_saved_values (__main__.TestAutograd) ... ok
test_backward_twice_without_saved_values (__main__.TestAutograd) ... ok
test_backward_with_inputs (__main__.TestAutograd) ... ok
test_backward_with_nonleaf_inputs (__main__.TestAutograd) ... ok
test_block_diag (__main__.TestAutograd) ... ok
test_broadcast_tensors (__main__.TestAutograd) ... ok
test_cat (__main__.TestAutograd) ... ok
test_cat_empty (__main__.TestAutograd) ... ok
test_cat_empty_legacy (__main__.TestAutograd) ... ok
test_cat_negdim_1 (__main__.TestAutograd) ... ok
test_cat_negdim_2 (__main__.TestAutograd) ... ok
test_chain_matmul (__main__.TestAutograd) ... ok
test_checkpointing (__main__.TestAutograd) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_custom_autograd_no_early_free (__main__.TestAutograd) ... ok
test_custom_autograd_repeated_grad_grad (__main__.TestAutograd) ... ok
test_custom_function_error (__main__.TestAutograd) ... ok
test_custom_function_exception (__main__.TestAutograd) ... ok
test_custom_function_local_inplace (__main__.TestAutograd) ... ok
test_custom_function_return_view_in_nograd (__main__.TestAutograd) ... ok
test_deep_reentrant (__main__.TestAutograd) ... ok
test_dep_nograd (__main__.TestAutograd) ... ok
test_dependent_backward (__main__.TestAutograd) ... ok
test_detach (__main__.TestAutograd) ... ok
test_detach_base (__main__.TestAutograd)
detaching base does not detach view ... ok
test_diagonal_derivative_requires_grad (__main__.TestAutograd) ... ok
test_diagonal_expanded_v (__main__.TestAutograd) ... test_autograd.py:1929: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
v_expanded = torch.tensor(value).expand(10)
ok
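The copy-construct warning above recommends a specific replacement. A minimal sketch of both forms (variable names here are illustrative, not the test's):

```python
import torch

value = torch.tensor(3.0)

# Deprecated form from the warning: torch.tensor(value) copy-constructs
# from an existing tensor.
# Recommended form: clone then detach (add .requires_grad_(True) if the
# copy should track gradients).
v = value.clone().detach()
v_expanded = v.expand(10)  # broadcast the 0-d copy to 10 elements
```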
test_dir (__main__.TestAutograd) ... ok
test_dont_materialize_grads (__main__.TestAutograd) ... ok
test_duplicate_backward_root (__main__.TestAutograd) ... ok
test_eig (__main__.TestAutograd) ... ok
test_eig_complex_eigenvalues (__main__.TestAutograd) ... ok
test_eig_no_eigenvectors (__main__.TestAutograd) ... ok
test_fill (__main__.TestAutograd) ... ok
test_free_deep_graph (__main__.TestAutograd) ... ok
test_free_deep_graph_complicated (__main__.TestAutograd) ... ok
test_free_deep_graph_pyfunction (__main__.TestAutograd) ... ok
test_function (__main__.TestAutograd) ... ok
test_function_returns_input (__main__.TestAutograd) ... ok
test_function_returns_undefined_tensor (__main__.TestAutograd) ... ok
test_gc_in_destructor (__main__.TestAutograd) ... ok
test_grad (__main__.TestAutograd) ... ok
test_grad_badcalls (__main__.TestAutograd) ... ok
test_grad_fn_badcalls (__main__.TestAutograd) ... ok
test_grad_mode_restored_reentrant (__main__.TestAutograd) ... ok
test_grad_nonleaf (__main__.TestAutograd) ... test_autograd.py:461: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
self.assertIsNone(x.grad)
ok
test_grad_nonleaf_many_outputs (__main__.TestAutograd) ... ok
test_grad_nonleaf_register_hook (__main__.TestAutograd) ... test_autograd.py:513: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
self.assertIsNone(x_list[0].grad)
test_autograd.py:520: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
self.assertIsNone(x_list[i].grad)
ok
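The non-leaf `.grad` warnings above describe the `retain_grad()` opt-in. A short sketch of the behavior being tested, under the assumption of a trivial graph:

```python
import torch

x = torch.ones(2, requires_grad=True)  # leaf tensor
y = x * 2                              # non-leaf: .grad not populated by default
y.retain_grad()                        # opt in, as the warning suggests
y.sum().backward()

# After backward: y.grad is all ones (d(sum)/dy), x.grad is all twos.
```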
test_grad_unreachable (__main__.TestAutograd) ... ok
test_gradcheck_fail_when_no_differentiable_outputs_and_num_grad_not_zero (__main__.TestAutograd) ... ok
test_gradcheck_nondeterministic (__main__.TestAutograd) ... ok
test_gradcheck_single_input (__main__.TestAutograd) ... ok
test_gradcheck_sparse_input (__main__.TestAutograd) ... ok
test_hessian_vector (__main__.TestAutograd) ... ok
test_hook_none (__main__.TestAutograd) ... ok
test_hook_with_no_name (__main__.TestAutograd) ... ok
test_hooks (__main__.TestAutograd) ... ok
test_hooks_cpp (__main__.TestAutograd) ... ok
test_igamma (__main__.TestAutograd) ... ok
test_igammac (__main__.TestAutograd) ... ok
test_index_backward_does_not_save_tensor (__main__.TestAutograd) ... ok
test_indexing (__main__.TestAutograd) ... ok
test_indexing_duplicates (__main__.TestAutograd) ... ok
test_inplace (__main__.TestAutograd) ... ok
test_inplace_not_requires_grad (__main__.TestAutograd) ... ok
test_inplace_view_backward (__main__.TestAutograd) ... ok
test_inplace_view_leaf_errors (__main__.TestAutograd) ... ok
test_inplace_view_saved_output (__main__.TestAutograd) ... ok
test_inplace_view_weak_grad_fn (__main__.TestAutograd) ... ok
test_integer_outputs (__main__.TestAutograd) ... ok
test_invalid_gradients (__main__.TestAutograd) ... ok
test_isolated_node (__main__.TestAutograd) ... ok
test_leaf_assignment (__main__.TestAutograd) ... ok
test_legacy_function_deprecation_exception (__main__.TestAutograd) ... ok
test_lerp_tensor_weights (__main__.TestAutograd) ... ok
test_lobpcg (__main__.TestAutograd) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_mark_non_differentiable (__main__.TestAutograd) ... ok
test_mark_non_differentiable_mixed (__main__.TestAutograd) ... ok
test_mark_non_differentiable_none (__main__.TestAutograd) ... ok
test_materialize_grads (__main__.TestAutograd) ... ok
test_mul_out (__main__.TestAutograd) ... ok
test_mul_out_result_requires_grad (__main__.TestAutograd) ... ok
test_multi_backward (__main__.TestAutograd) ... ok
test_multi_backward_no_grad (__main__.TestAutograd) ... ok
test_named_tensor_for_complex_views (__main__.TestAutograd) ... /usr/local/lib/python3.6/dist-packages/torch/tensor.py:758: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:934.)
return super(Tensor, self).refine_names(names)
ok
test_nan_to_num (__main__.TestAutograd) ... ok
test_nansum_dtype (__main__.TestAutograd) ... ok
test_nansum_with_nans (__main__.TestAutograd) ... ok
test_naughty_anomaly_access (__main__.TestAutograd) ... expected failure
test_naughty_autograd_function_stashing_ctx (__main__.TestAutograd) ... ok
test_naughty_legacy_function_backward_before_forward (__main__.TestAutograd) ... ok
test_naughty_legacy_function_early_access (__main__.TestAutograd) ... ok
test_naughty_legacy_variable_grad_fn (__main__.TestAutograd) ... ok
test_nested_anomaly_detect_nan (__main__.TestAutograd) ... ok
test_nested_anomaly_printstack_cleanup (__main__.TestAutograd) ... ok
test_next_functions (__main__.TestAutograd) ... ok
test_no_grad (__main__.TestAutograd) ... ok
test_no_grad_assignment (__main__.TestAutograd) ... ok
test_no_grad_copy (__main__.TestAutograd) ... ok
test_no_grad_copy_sparse (__main__.TestAutograd) ... /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2025: UserWarning: Argument order of nn.functional.embedding_bag was changed. Usage `embedding_bag(weight, input, ...)` is deprecated, and should now be `embedding_bag(input, weight, ...)`.
"Argument order of nn.functional.embedding_bag was changed. "
ok
test_no_grad_input (__main__.TestAutograd) ... ok
test_no_grad_modifies_version (__main__.TestAutograd) ... ok
test_no_grad_python_function (__main__.TestAutograd)
Python Functions should respect grad mode. ... ok
test_no_requires_grad_inplace (__main__.TestAutograd) ... ok
test_no_unnecessary_save (__main__.TestAutograd) ... ok
test_norm_inf_subgradient (__main__.TestAutograd) ... test_autograd.py:2902: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
x = torch.tensor(input, requires_grad=True)
ok
test_norm_subgradient (__main__.TestAutograd) ... ok
test_numpy_requires_grad (__main__.TestAutograd) ... ok
test_once_differentiable (__main__.TestAutograd) ... ok
test_pickle (__main__.TestAutograd) ... ok
test_pow_scalar_base (__main__.TestAutograd) ... ok
test_pow_zero_tensor_gradient (__main__.TestAutograd) ... ok
test_power_function (__main__.TestAutograd) ... ok
test_profiler (__main__.TestAutograd) ... ok
test_profiler_aggregation_fake (__main__.TestAutograd) ... ok
test_profiler_aggregation_lstm (__main__.TestAutograd) ... Fail to import hypothesis in common_utils, tests are not derandomized
================================================================================================================================================================
TEST
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
aten::tanh 1.75% 427.087us 1.75% 427.087us 427.087us 1 [[3, 20], [3, 20]]
aten::slice 0.86% 209.426us 0.86% 209.836us 209.836us 1 [[3, 80], [], [], [], []]
aten::_unsafe_view 0.47% 115.528us 0.48% 116.279us 116.279us 1 [[15, 80], []]
aten::lstm 0.45% 110.346us 4.96% 1.208ms 1.208ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.45% 109.741us 4.90% 1.193ms 1.193ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.43% 103.759us 6.49% 1.580ms 1.580ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.43% 103.710us 4.77% 1.160ms 1.160ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.696us 4.65% 1.132ms 1.132ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.505us 4.69% 1.143ms 1.143ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.466us 4.72% 1.149ms 1.149ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Self CPU time total: 24.351ms
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
aten::mul 12.78% 3.111ms 14.34% 3.493ms 5.821us 600 [[3, 20], [3, 20]]
aten::lstm 8.44% 2.056ms 97.36% 23.708ms 1.185ms 20 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::sigmoid_ 8.42% 2.050ms 11.60% 2.824ms 4.706us 600 [[3, 20]]
aten::addmm 7.80% 1.899ms 11.35% 2.764ms 13.822us 200 [[80], [3, 20], [20, 80], [], []]
aten::slice 4.92% 1.197ms 6.35% 1.547ms 1.934us 800 [[3, 80], [], [], [], []]
aten::unsafe_split 3.97% 967.593us 13.09% 3.188ms 15.940us 200 [[3, 80], [], []]
aten::tanh 3.87% 943.549us 4.35% 1.058ms 5.292us 200 [[3, 20]]
aten::empty 3.31% 806.970us 3.31% 806.970us 0.684us 1180 [[], [], [], [], [], []]
aten::tanh 3.30% 803.567us 3.30% 803.567us 4.018us 200 [[3, 20], [3, 20]]
aten::sigmoid 3.18% 773.542us 3.18% 773.542us 1.289us 600 [[3, 20], [3, 20]]
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Self CPU time total: 24.351ms
================================================================================================================================================================
TEST
================================================================================================================================================================
This report only display top-level ops statistics
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
aten::lstm 0.45% 110.346us 4.96% 1.208ms 1.208ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.45% 109.741us 4.90% 1.193ms 1.193ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.43% 103.759us 6.49% 1.580ms 1.580ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.43% 103.710us 4.77% 1.160ms 1.160ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.696us 4.65% 1.132ms 1.132ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.505us 4.69% 1.143ms 1.143ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.466us 4.72% 1.149ms 1.149ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.336us 4.72% 1.150ms 1.150ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.181us 4.70% 1.143ms 1.143ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::lstm 0.42% 102.103us 4.69% 1.142ms 1.142ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []]
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Self CPU time total: 24.351ms
================================================================================================================================================================
This report only display top-level ops statistics
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
aten::lstm 8.44% 2.056ms 97.36% 23.708ms 1.185ms 20 [[5, 3, 10], [], [], [], [], [], [], [], []]
aten::randn 0.52% 126.644us 2.64% 643.161us 10.719us 60 [[], [], [], [], []]
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------------
Self CPU time total: 24.351ms
ok
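Tables like the ones printed above are produced by the autograd profiler. A minimal CPU-only sketch of the API this test exercises (the sort key is one of several accepted by `table()`):

```python
# Profile a small region and render an aggregate table, as the
# test_profiler_aggregation_* tests above do on a larger LSTM workload.
import torch

x = torch.randn(3, 3)
with torch.autograd.profiler.profile() as prof:
    y = x @ x  # profiled region

report = prof.key_averages().table(sort_by="self_cpu_time_total")
```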
test_profiler_aggregation_table (__main__.TestAutograd) ... ok
test_profiler_function_event_avg (__main__.TestAutograd) ... ok
test_profiler_no_cuda (__main__.TestAutograd) ... ok
test_profiler_propagation (__main__.TestAutograd) ... ok
test_profiler_seq_nr (__main__.TestAutograd) ... ok
test_profiler_shapes (__main__.TestAutograd) ... ok
test_profiler_tracing (__main__.TestAutograd) ... ok
test_profiler_unboxed_only (__main__.TestAutograd) ... ok
test_put (__main__.TestAutograd) ... ok
test_put_accumulate (__main__.TestAutograd) ... ok
test_record_function (__main__.TestAutograd) ... ok
test_record_function_callbacks (__main__.TestAutograd) ... ok
test_record_function_multithreaded (__main__.TestAutograd) ... ok
test_reduce_dtype (__main__.TestAutograd) ... ok
test_reentrant_child_error (__main__.TestAutograd) ... ok
test_reentrant_priority (__main__.TestAutograd) ... ok
test_reentrant_with_callbacks_both_depths (__main__.TestAutograd) ... ok
test_reentrant_with_callbacks_depth_0 (__main__.TestAutograd) ... ok
test_reentrant_with_callbacks_depth_1 (__main__.TestAutograd) ... ok
test_reentrant_with_leaf_variable_hook (__main__.TestAutograd) ... ok
test_reentrant_with_non_leaf_variable_hook (__main__.TestAutograd) ... ok
test_requires_grad (__main__.TestAutograd) ... ok
test_requires_grad_ (__main__.TestAutograd) ... ok
test_requires_grad_inplace (__main__.TestAutograd) ... ok
test_resize (__main__.TestAutograd) ... ok
test_retain_grad (__main__.TestAutograd) ... ok
test_retain_grad_cycle (__main__.TestAutograd) ... ok
test_return_duplicate (__main__.TestAutograd) ... ok
test_return_duplicate_inplace (__main__.TestAutograd) ... ok
test_return_leaf (__main__.TestAutograd) ... ok
test_return_leaf_inplace (__main__.TestAutograd) ... ok
test_save_none_for_backward (__main__.TestAutograd) ... ok
test_save_output_nr (__main__.TestAutograd) ... ok
test_saved_variables_deprecated (__main__.TestAutograd) ... ok
test_select_expanded_v (__main__.TestAutograd) ... ok
test_select_sum (__main__.TestAutograd) ... ok
test_set_data_preserve_pyobj (__main__.TestAutograd) ... ok
test_set_data_tensorimpl_type (__main__.TestAutograd) ... ok
test_set_grad_coroutines (__main__.TestAutograd) ... ok
test_set_grad_coroutines_benign_exceptions (__main__.TestAutograd) ... ok
test_set_grad_coroutines_critical_exceptions (__main__.TestAutograd) ... ok
test_set_grad_coroutines_exit (__main__.TestAutograd) ... ok
test_set_grad_enabled (__main__.TestAutograd) ... ok
test_set_grad_generator_functions (__main__.TestAutograd) ... ok
test_set_grad_generator_functions_recursive (__main__.TestAutograd) ... ok
test_setitem (__main__.TestAutograd) ... ok
test_setitem_mask (__main__.TestAutograd) ... ok
test_shape (__main__.TestAutograd) ... ok
test_sharded_grad (__main__.TestAutograd) ... ok
test_simple_reentrant (__main__.TestAutograd) ... ok
test_slice_expanded_v (__main__.TestAutograd) ... ok
test_slogdet_sign (__main__.TestAutograd) ... ok
test_sparse_backward (__main__.TestAutograd) ... ok
test_sparse_gather_both_scalar (__main__.TestAutograd) ... ok
test_sparse_gather_dim0 (__main__.TestAutograd) ... ok
test_sparse_gather_dim1 (__main__.TestAutograd) ... ok
test_sparse_gather_dim_neg (__main__.TestAutograd) ... ok
test_sparse_gather_ind_scalar (__main__.TestAutograd) ... ok
test_sparse_gather_x_scalar (__main__.TestAutograd) ... ok
test_sparse_mm_backward (__main__.TestAutograd) ... ok
test_sum_to_with_empty_dim_grad (__main__.TestAutograd) ... ok
test_svd_no_singularvectors (__main__.TestAutograd) ... ok
test_symeig (__main__.TestAutograd) ... ok
test_symeig_no_eigenvectors (__main__.TestAutograd) ... ok
test_tensor_grad_warnings (__main__.TestAutograd) ... ok
test_thread_shutdown (__main__.TestAutograd) ... ok
test_too_many_grads (__main__.TestAutograd) ... ok
test_trapz (__main__.TestAutograd) ... ok
test_type_conversions (__main__.TestAutograd) ... ok
test_unbind (__main__.TestAutograd) ... ok
test_unused_output (__main__.TestAutograd) ... ok
test_var_mean_differentiable (__main__.TestAutograd) ... ok
test_variable_traverse (__main__.TestAutograd) ... ok
test_version_counter (__main__.TestAutograd) ... ok
test_volatile_deprecated (__main__.TestAutograd) ... ok
test_view_func_for_complex_views (__main__.TestAutogradComplex) ... ok
test_view_with_multi_output (__main__.TestAutogradComplex) ... ok
test_GRU_grad_and_gradgrad_cpu (__main__.TestAutogradDeviceTypeCPU) ... [W Module.cpp:482] Warning: Disabling benchmark mode for MIOpen is NOT supported. Overriding value to True (function operator())
ok
test_LSTM_grad_and_gradgrad_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_beg_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_comb_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_dup_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_end_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_mid_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_sub_2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_sub_3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_sub_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___adv_index_var_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___slice_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___getitem___slice_index_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___radd___constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___radd___constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___radd___scalar_constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___radd___scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rdiv___complex_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rdiv___complex_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rdiv___constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rdiv___scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rmul___constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rmul___constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rmul___scalar_constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rmul___scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rpow___constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rpow___scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rsub___constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test___rsub___scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_complex_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_add_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_scalar_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_scalar_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addbmm_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_broadcast_all_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scalar_scale_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_broadcast_all_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcdiv_scale_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_broadcast_all_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scalar_scale_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_broadcast_all_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addcmul_scale_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_scalar_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmm_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_scalar_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_scalar_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_addmv_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_advanced_indexing_backwards_large_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_advanced_indexing_backwards_memory_format_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_multiple_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amax_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_multiple_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_amin_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atan2_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atan2_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atan2_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atan2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atan2_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_atleast_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_backward_device_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'fewer than 2 devices detected'
test_baddbmm_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_scalar_broadcast_lhs_coef_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_scalar_broadcast_lhs_coef_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_baddbmm_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_bmm_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_bmm_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cdist_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cdist_grad_p_lt_1_no_nan_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cdist_same_inputs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_chunk_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_max_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_max_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_max_scalar_kwarg_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_min_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_min_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clamp_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clone_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clone_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clone_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_clone_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_conj_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_conj_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_contiguous_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_contiguous_not_contiguous_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copy__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_scalar_pos_zero_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_copysign_subgradient_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cross_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cross_device_reentrant_autograd_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_cross_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ctc_loss_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ctc_loss_cudnn_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_cummax_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummax_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummax_dim0_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummax_dim0_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummax_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummax_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim0_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim0_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cummin_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_scalar_zeros_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim0_cast_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim0_cast_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumprod_zeros_dim2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim0_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim0_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim1_cast_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim1_cast_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_cumsum_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_deg2rad_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_1x1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_1x1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_distinct_singular_values_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_symmetric_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_symmetric_pd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_batched_symmetric_psd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_dim2_null_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_distinct_singular_values_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_rank1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_rank2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_symmetric_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_symmetric_pd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_det_symmetric_psd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diag_embed_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_2_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_neg_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_neg_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_pos_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_tall_pos_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_neg_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_neg_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_pos_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_2d_wide_pos_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_2_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_3_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_diagonal_3d_3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_4_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_4_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_4_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_4_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_4_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dist_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_complex_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_div_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dot_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_dot_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_elu_inplace_with_neg_alpha_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__pyscalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__pyscalar_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__scalar_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_eq__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_1_element_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_1_element_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_as_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_new_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_new_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_new_dim_front_old_front_1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_new_dim_front_old_front_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_scalar_to_dims_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_scalar_to_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_scalar_to_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_scalar_to_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_size_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_expand_size_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__number_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__number_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__number_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__number_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__variable_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fill__variable_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_float_power_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmax_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmin_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_scalar_tensor_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_scalar_tensor_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_scalar_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_tensor_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_tensor_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_tensor_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_fmod_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_frac_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_frac_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_free_unneeded_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_ge__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ge__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ge__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ge__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ge__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ge__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ger_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ger_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_grad_assignment_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_gt__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_hypot_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_imag_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_alert_nondeterministic_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_alert_nondeterministic_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_scalar_all_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_scalar_all_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_scalar_input_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_add_scalar_input_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_dim_alert_nondeterministic_cpu (__main__.TestAutogradDeviceTypeCPU) ... [W Context.cpp:70] Warning: torch.use_deterministic_algorithms is in beta, and its design and functionality may change in the future. (function operator())
ok
test_index_copy_dim_alert_nondeterministic_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_scalar_all_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_scalar_all_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_scalar_input_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_copy_scalar_input_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_both_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_both_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_index_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_index_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_input_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_scalar_input_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_variable_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_index_fill_variable_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inner_1d_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inner_scalar_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_multiple_output_view_of_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_backprop_base_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_backprop_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... test_autograd.py:7408: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
self.assertIsNone(a.grad)
ok
test_inplace_view_backprop_view_of_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_gradcheck_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_makes_base_require_grad_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_modify_base_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_multi_output_safe_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_multi_output_unsafe_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_multiple_outputs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_non_contig_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_of_multiple_output_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_of_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_python_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inplace_view_then_no_grad_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inputbuffer_add_multidevice_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'fewer than 2 devices detected'
test_inverse_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inverse_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inverse_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_inverse_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kron_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_1d_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_alert_nondeterministic_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_alert_nondeterministic_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_keepdim_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_keepdim_dim_1d_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_kthvalue_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_le__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_leaky_relu_inplace_with_neg_slope_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_leaky_relu_inplace_with_zero_slope_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lerp_scalar_no_broadcast_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_log_softmax_kwarg_dtype_would_break_jit_loader_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logaddexp2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logaddexp_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim0_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim0_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logcumsumexp_large_value_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_1x1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_batched_1x1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_batched_distinct_singular_values_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_batched_symmetric_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_batched_symmetric_pd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_distinct_singular_values_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_symmetric_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logdet_symmetric_pd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logsumexp_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_logsumexp_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lstmcell_backward_only_one_output_grad_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_lt__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lt__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lt__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lt__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lt__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lt__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_backward_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_lu_square_batch_no_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_square_batch_with_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_square_many_batches_no_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_square_many_batches_with_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_square_single_no_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_lu_square_single_with_info_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_scalar_variable_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_fill_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_masked_scatter_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_3d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_3d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_4d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_1d_4d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_3d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_2d_3d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_3d_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_3d_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_3d_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_3d_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_4d_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_4d_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_4d_4d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_4d_4d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matmul_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_exp_batch_of_matrices_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_exp_batch_of_matrices_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_exp_single_matrix_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_exp_single_matrix_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=-1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=-2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=-3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_matrix_power_n=3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_elementwise_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_elementwise_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_elementwise_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_elementwise_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_elementwise_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_elementwise_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_elementwise_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_max_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_maximum_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_dtype_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... test_autograd.py:5066: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at /pytorch/aten/src/ATen/native/Copy.cpp:219.)
output_variable = getattr(self_variable, name)(*args_variable, **kwargs_variable)
ok
test_mean_dtype_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_keepdim_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_keepdim_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_keepdim_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_keepdim_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mean_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_dim_alert_nondeterministic_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_dim_alert_nondeterministic_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_median_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_elementwise_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_elementwise_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_elementwise_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_elementwise_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_max_median_backprops_to_all_values_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_elementwise_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_elementwise_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_elementwise_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_min_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_minimum_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mm_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mode_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_msort_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_all_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_broadcast_lhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_constant_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mul_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mv_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mv_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mv_grad_stride_0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mvlgamma_p=1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mvlgamma_p=2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mvlgamma_p=3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_mvlgamma_p=5_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanmedian_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_keepdim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nanquantile_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_multi_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_multi_dim_keepdim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_nansum_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_empty_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_empty_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_empty_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_narrow_empty_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__pyscalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__pyscalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__pyscalar_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__pyscalar_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__scalar_broadcast_rhs_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ne__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_-inf_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_0_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_0_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_0_5_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_5_default_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_5_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_5_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_2_dim_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_3_dim_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_default_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_fro_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_fro_default_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_inf_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_inf_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_1_5_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_1_5_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_2_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_2_dim_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_3_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_3_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_3_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_keepdim_3_dim_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_0_5_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_1_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_1_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_1_5_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_2_2_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_2_2_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_neg_2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_nuc_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_norm_nuc_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_parameter_resize_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pdist_large_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_neg_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_neg_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_permute_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pin_memory_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_pow_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_complex_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_complex_imaginary_exponent_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_complex_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_pow_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_keepdim_zeros_dims2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_dim_zero_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_dim_zero_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_keepdim_dim_zero_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_keepdim_dim_zero_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_scalar_zero_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_single_zero_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zerodims0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zerodims1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zerodims2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_prod_zeros_dims2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_profiler_emit_nvtx_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_pyscalar_conversions_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_many_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_many_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_single_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_square_single_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_many_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_many_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_single_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_tall_single_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_many_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_many_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_single_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_qr_wide_single_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_keepdim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_quantile_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rad2deg_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_ravel_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_real_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reentrant_parent_error_on_cpu_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_remainder_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_scalar_tensor_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_scalar_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_tensor_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_tensor_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_remainder_tensor_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_renorm_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_renorm_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_renorm_norm_1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_renorm_norm_inf_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_requires_grad_factory_cpu_float32 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_requires_grad_factory_cpu_float64 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_scalar_to_dims_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_as_scalar_to_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_scalar_to_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_scalar_to_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_scalar_to_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_scalar_to_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_size_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_reshape_size_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize__fewer_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize__scalar_to_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize_as__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize_as__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_resize_as__scalar_to_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rnn_backward_to_input_but_not_parameters_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_roll_d02_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d02_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d12_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d12_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d20_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_d20_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_flattened_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_flattened_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_loop_shift_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_loop_shift_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_neg_shift_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_neg_shift_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_three_dims_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_roll_three_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_default_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_default_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_d01_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_d01_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_d12_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_d12_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_neg_d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rot90_k1_neg_d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rsqrt_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rsqrt_complex_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rsqrt_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_rsqrt_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_alert_nondeterministic_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_alert_nondeterministic_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_scalar_all_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_add_scalar_all_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_dim1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_dim1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_scalar_all_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_scalar_all_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_scalartensor_all_dim0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_scatter_scalartensor_all_dim0_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_select_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_select_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_select_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_select_wrap_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_select_wrap_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_float32 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_float64 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_int16 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_int32 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_int64 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_set_requires_grad_only_for_floats_cpu_int8 (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sgn_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sgn_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sgn_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sgn_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sign_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sign_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_simple_reentrant_cross_device_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_solve_batched_broadcast_A_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_broadcast_A_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_broadcast_b_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_broadcast_b_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_dims_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_batched_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_solve_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_dim_desc_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_dim_desc_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sort_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sparse_ctor_getter_backward_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sparse_mask_autograd_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_size_list_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_size_0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_split_with_sizes_size_0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_1_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_1_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_1_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_1_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_input_sizes_are_ones_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_input_sizes_are_ones_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_not_1_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_not_1_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_not_1_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_not_1_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_scalar_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_squeeze_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_std_mean_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_std_mean_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_std_mean_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_std_mean_keepdim_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_std_mean_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_strided_leaf_grad_layout_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_complex_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sub_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_keepdim_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_keepdim_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_multi_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_multi_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_multi_dim_keepdim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_multi_dim_keepdim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_keepdim_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_keepdim_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_sum_scalar_keepdim_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_3d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_3d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg0_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg0_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_dim_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapaxes_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_3d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_3d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg0_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg0_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_dim_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_swapdims_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_t_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_t_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_take_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_take_scalar_both_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_take_scalar_data_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_take_scalar_index_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_indices_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tensor_split_sections_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_to_sparse_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_sort_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_sort_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_sort_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_desc_sort_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_dim_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_topk_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_2d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_2d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_3d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_3d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg0_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg0_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg1_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_dim_neg1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_transpose_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_batched_idx_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_batched_idx_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_idx_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_idx_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_more_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_tril_more_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_batched_idx_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_batched_idx_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_idx_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_idx_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_more_batched_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_triu_more_batched_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_scalar_broadcast_lhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_scalar_broadcast_rhs_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_scalar_constant_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_true_divide_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_trunc_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_trunc_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step3_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step3_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step_gt_size2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step_gt_size2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step_gt_size_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_1d_step_gt_size_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_ge_size2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_ge_size2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_gt_size2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_gt_size2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_gt_size_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_2d_step_gt_size_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_size4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_size4_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_step1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_step1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_step2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim0_step2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_size4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_size4_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_step1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_step1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_step2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim1_step2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_size4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_size4_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_step1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_step1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_step2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim2_step2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_size4_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_size4_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_step1_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_step1_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_step2_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_4d_dim3_step2_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_lastdim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_lastdim_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unfold_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_first_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_first_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_first_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_first_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_last_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_last_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_last_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_last_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_middle_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_middle_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_middle_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_middle_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_scalar_neg0_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unsqueeze_scalar_neg0_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_unused_output_device_cpu (__main__.TestAutogradDeviceTypeCPU) ... skipped 'fewer than 2 devices detected'
test_var_mean_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_var_mean_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_var_mean_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_var_mean_keepdim_dim_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_var_mean_keepdim_dim_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_vdot_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_vdot_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_real_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_scalar_to_dims_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_as_scalar_to_dims_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_scalar_to_1d_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_scalar_to_1d_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_scalar_to_scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_scalar_to_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_size_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_view_size_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_broadcast_all_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_functional_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_scalar_broadcast_mask_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_scalar_broadcast_non_mask_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_where_scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_xlogy_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_zero__complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_zero__cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_zero__scalar_complex_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_zero__scalar_cpu (__main__.TestAutogradDeviceTypeCPU) ... ok
test_GRU_grad_and_gradgrad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_LSTM_grad_and_gradgrad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_beg_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_comb_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_dup_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_end_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_mid_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_sub_2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_sub_3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_sub_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___adv_index_var_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___slice_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___getitem___slice_index_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___radd___constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___radd___constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___radd___scalar_constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___radd___scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rdiv___complex_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rdiv___complex_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rdiv___constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rdiv___scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rmul___constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rmul___constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rmul___scalar_constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rmul___scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rpow___constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rpow___scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rsub___constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test___rsub___scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_complex_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_add_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_scalar_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_scalar_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addbmm_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_broadcast_all_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scalar_scale_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_broadcast_all_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcdiv_scale_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_broadcast_all_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scalar_scale_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_broadcast_all_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addcmul_scale_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_scalar_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmm_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_scalar_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_scalar_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_addmv_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_advanced_indexing_backwards_large_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_advanced_indexing_backwards_memory_format_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_multiple_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amax_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_multiple_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_amin_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atan2_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atan2_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atan2_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atan2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atan2_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_atleast_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_backward_device_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'fewer than 2 devices detected'
test_baddbmm_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_scalar_broadcast_lhs_coef_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_scalar_broadcast_lhs_coef_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_baddbmm_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_bmm_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_bmm_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cdist_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cdist_grad_p_lt_1_no_nan_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cdist_same_inputs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_chunk_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_max_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_max_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_max_scalar_kwarg_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_min_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_min_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clamp_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clone_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clone_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clone_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_clone_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_conj_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_conj_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_contiguous_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_contiguous_not_contiguous_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copy__cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'Only runs on cpu'
test_copysign_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_scalar_pos_zero_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_copysign_subgradient_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cross_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cross_device_reentrant_autograd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cross_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ctc_loss_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'Test is flaky on Linux and Windows, typical error message:\n https://github.com/pytorch/pytorch/issues/34870'
test_ctc_loss_cudnn_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_cummax_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummax_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummax_dim0_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummax_dim0_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummax_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummax_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim0_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim0_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cummin_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_scalar_zeros_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim0_cast_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim0_cast_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumprod_zeros_dim2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim0_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim0_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim1_cast_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim1_cast_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_cumsum_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_deg2rad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_1x1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_batched_1x1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_det_batched_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_det_batched_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_det_batched_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_det_batched_symmetric_psd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_det_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_dim2_null_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_rank1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_rank2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_det_symmetric_psd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diag_embed_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_2_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_neg_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_neg_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_pos_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_tall_pos_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_neg_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_neg_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_pos_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_2d_wide_pos_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_2_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_3_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_diagonal_3d_3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_4_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_4_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_4_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_4_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_4_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dist_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_complex_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_div_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dot_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_dot_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_elu_inplace_with_neg_alpha_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__pyscalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__pyscalar_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__scalar_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_eq__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_1_element_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_1_element_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_as_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_new_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_new_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_new_dim_front_old_front_1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_new_dim_front_old_front_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_scalar_to_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_scalar_to_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_scalar_to_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_scalar_to_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_size_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_expand_size_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__number_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__number_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__number_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__number_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__variable_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fill__variable_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_float_power_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmax_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmin_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_scalar_tensor_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_scalar_tensor_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_scalar_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_tensor_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_tensor_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_tensor_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_fmod_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_frac_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_frac_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_free_unneeded_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ge__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ger_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ger_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_grad_assignment_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_gt__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_hypot_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_imag_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_alert_nondeterministic_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_alert_nondeterministic_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_scalar_all_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_scalar_all_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_scalar_input_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_add_scalar_input_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_dim_alert_nondeterministic_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_dim_alert_nondeterministic_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_scalar_all_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_scalar_all_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_scalar_input_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_copy_scalar_input_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_both_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_both_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_index_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_index_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_input_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_scalar_input_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_variable_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_index_fill_variable_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inner_1d_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inner_scalar_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_multiple_output_view_of_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_backprop_base_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_backprop_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... test_autograd.py:7408: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
self.assertIsNone(a.grad)
ok
test_inplace_view_backprop_view_of_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_gradcheck_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_makes_base_require_grad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_modify_base_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_multi_output_safe_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_multi_output_unsafe_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_multiple_outputs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_non_contig_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_of_multiple_output_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_of_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_python_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inplace_view_then_no_grad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inputbuffer_add_multidevice_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'fewer than 2 devices detected'
test_inverse_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_inverse_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_inverse_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_inverse_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kron_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_1d_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_alert_nondeterministic_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_alert_nondeterministic_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_keepdim_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_keepdim_dim_1d_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_kthvalue_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_le__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_leaky_relu_inplace_with_neg_slope_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_leaky_relu_inplace_with_zero_slope_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lerp_scalar_no_broadcast_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_log_softmax_kwarg_dtype_would_break_jit_loader_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logaddexp2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logaddexp_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim0_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim0_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logcumsumexp_large_value_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_1x1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_batched_1x1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_logdet_batched_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_logdet_batched_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_logdet_batched_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_logdet_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logdet_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logsumexp_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_logsumexp_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lstmcell_backward_only_one_output_grad_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lt__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lu_backward_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_lu_square_batch_no_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_lu_square_batch_with_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_lu_square_many_batches_no_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_lu_square_many_batches_with_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_lu_square_single_no_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_lu_square_single_with_info_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_scalar_variable_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_fill_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_masked_scatter_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_3d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_3d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_4d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_1d_4d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_3d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_2d_3d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_3d_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_3d_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_3d_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_3d_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_4d_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_4d_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_4d_4d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_4d_4d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matmul_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_exp_batch_of_matrices_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_exp_batch_of_matrices_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_exp_single_matrix_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_exp_single_matrix_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=-1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=-2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_matrix_power_n=-3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_matrix_power_n=3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_elementwise_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_elementwise_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_elementwise_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_elementwise_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_elementwise_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_elementwise_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_elementwise_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_max_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_maximum_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dtype_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_dtype_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_keepdim_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_keepdim_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_keepdim_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_keepdim_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mean_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_dim_alert_nondeterministic_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_dim_alert_nondeterministic_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_median_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_elementwise_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_elementwise_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_elementwise_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_elementwise_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_max_median_backprops_to_all_values_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_elementwise_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_elementwise_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_elementwise_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_min_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_minimum_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mm_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mode_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_msort_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_all_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_broadcast_lhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_constant_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mul_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mv_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mv_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mv_grad_stride_0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mvlgamma_p=1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mvlgamma_p=2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mvlgamma_p=3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_mvlgamma_p=5_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanmedian_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_keepdim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nanquantile_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_multi_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_multi_dim_keepdim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_nansum_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_empty_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_empty_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_empty_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_narrow_empty_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__pyscalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__pyscalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__pyscalar_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__pyscalar_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__scalar_broadcast_rhs_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ne__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_-inf_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_0_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_0_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_0_5_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_5_default_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_5_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_5_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_2_dim_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_3_dim_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_default_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_fro_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_fro_default_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_inf_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_inf_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_1_5_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_1_5_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_2_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_2_dim_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_3_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_3_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_3_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_keepdim_3_dim_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_0_5_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_1_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_1_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_1_5_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_2_2_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_2_2_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_neg_2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_nuc_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_norm_nuc_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_parameter_resize_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pdist_large_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_neg_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_neg_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_permute_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pin_memory_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_complex_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_complex_imaginary_exponent_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_complex_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_pow_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_keepdim_zeros_dims2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_dim_zero_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_dim_zero_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_keepdim_dim_zero_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_keepdim_dim_zero_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_scalar_zero_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_single_zero_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zerodims0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zerodims1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zerodims2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_prod_zeros_dims2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_profiler_emit_nvtx_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_pyscalar_conversions_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_square_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_square_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_square_many_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/53184 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_qr_square_many_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_square_single_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_square_single_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_many_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_many_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_single_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_tall_single_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_many_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_many_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_single_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_qr_wide_single_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_keepdim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_quantile_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rad2deg_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_ravel_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_real_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reentrant_parent_error_on_cpu_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_scalar_tensor_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_scalar_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_tensor_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_tensor_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_remainder_tensor_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_renorm_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_renorm_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_renorm_norm_1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_renorm_norm_inf_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_requires_grad_factory_cuda_float32 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_requires_grad_factory_cuda_float64 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_scalar_to_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_as_scalar_to_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_scalar_to_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_scalar_to_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_scalar_to_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_scalar_to_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_size_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_reshape_size_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize__fewer_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize__scalar_to_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize_as__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize_as__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_resize_as__scalar_to_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rnn_backward_to_input_but_not_parameters_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d02_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d02_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d12_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d12_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d20_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_d20_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_flattened_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_flattened_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_loop_shift_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_loop_shift_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_neg_shift_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_neg_shift_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_three_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_roll_three_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_default_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_default_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_d01_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_d01_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_d12_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_d12_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_neg_d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rot90_k1_neg_d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rsqrt_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rsqrt_complex_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rsqrt_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_rsqrt_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_alert_nondeterministic_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_alert_nondeterministic_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_scalar_all_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_add_scalar_all_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_dim1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_dim1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_scalar_all_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_scalar_all_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_scalartensor_all_dim0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_scatter_scalartensor_all_dim0_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_select_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_select_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_select_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_select_wrap_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_select_wrap_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_float16 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_float32 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_float64 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_int16 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_int32 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_int64 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_set_requires_grad_only_for_floats_cuda_int8 (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sgn_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sgn_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sgn_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sgn_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sign_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sign_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_simple_reentrant_cross_device_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_solve_batched_broadcast_A_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_broadcast_A_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_broadcast_b_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_broadcast_b_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_batched_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ERROR
test_solve_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_solve_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_dim_desc_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_dim_desc_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sort_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sparse_ctor_getter_backward_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sparse_mask_autograd_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_size_list_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_size_0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_split_with_sizes_size_0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_1_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_1_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_1_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_1_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_input_sizes_are_ones_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_input_sizes_are_ones_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_not_1_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_not_1_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_not_1_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_not_1_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_scalar_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_squeeze_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_std_mean_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_std_mean_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_std_mean_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_std_mean_keepdim_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_std_mean_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_strided_leaf_grad_layout_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_complex_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sub_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_keepdim_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_keepdim_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_multi_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_multi_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_multi_dim_keepdim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_multi_dim_keepdim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_keepdim_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_keepdim_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_sum_scalar_keepdim_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_3d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_3d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg0_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg0_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_dim_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapaxes_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_3d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_3d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg0_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg0_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_dim_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_swapdims_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_t_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_t_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_take_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_take_scalar_both_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_take_scalar_data_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_take_scalar_index_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_indices_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tensor_split_sections_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_to_sparse_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_sort_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_sort_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_sort_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_desc_sort_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_dim_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_topk_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_2d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_2d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_3d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_3d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg0_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg0_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg1_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_dim_neg1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_transpose_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_batched_idx_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_batched_idx_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_idx_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_idx_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_more_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_tril_more_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_batched_idx_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_batched_idx_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_idx_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_idx_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_more_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_triu_more_batched_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_scalar_broadcast_lhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_scalar_broadcast_rhs_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_scalar_constant_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_true_divide_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_trunc_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_trunc_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step3_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step3_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step_gt_size2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step_gt_size2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step_gt_size_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_1d_step_gt_size_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_ge_size2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_ge_size2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_gt_size2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_gt_size2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_gt_size_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_2d_step_gt_size_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_size4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_size4_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_step1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_step1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_step2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim0_step2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_size4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_size4_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_step1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_step1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_step2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim1_step2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_size4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_size4_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_step1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_step1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_step2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim2_step2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_size4_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_size4_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_step1_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_step1_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_step2_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_4d_dim3_step2_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_lastdim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_lastdim_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unfold_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_first_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_first_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_first_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_first_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_last_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_last_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_last_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_last_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_middle_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_middle_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_middle_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_middle_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_scalar_neg0_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unsqueeze_scalar_neg0_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_unused_output_device_cuda (__main__.TestAutogradDeviceTypeCUDA) ... skipped 'fewer than 2 devices detected'
test_var_mean_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_var_mean_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_var_mean_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_var_mean_keepdim_dim_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_var_mean_keepdim_dim_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_vdot_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_vdot_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_real_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_scalar_to_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_as_scalar_to_dims_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_scalar_to_1d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_scalar_to_1d_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_scalar_to_scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_scalar_to_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_size_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_view_size_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_broadcast_all_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_functional_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_scalar_broadcast_mask_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_scalar_broadcast_non_mask_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_where_scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_xlogy_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_zero__complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_zero__cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_zero__scalar_complex_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_zero__scalar_cuda (__main__.TestAutogradDeviceTypeCUDA) ... ok
test_forward_level_cleanup (__main__.TestAutogradForwardMode) ... ok
test_construct_standard_basis_for (__main__.TestAutogradFunctional) ... ok
test_construct_standard_basis_for_cuda (__main__.TestAutogradFunctional) ... ok
test_hessian_create_graph (__main__.TestAutogradFunctional) ... ok
test_hessian_create_graph_vectorize (__main__.TestAutogradFunctional) ... ok
test_hessian_err_check (__main__.TestAutogradFunctional) ... ok
test_hessian_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_hessian_err_check_strict_vectorize (__main__.TestAutogradFunctional) ... ok
test_hessian_err_check_vectorize (__main__.TestAutogradFunctional) ... ok
test_hessian_match_vhp_hvp (__main__.TestAutogradFunctional) ... ok
test_hessian_output (__main__.TestAutogradFunctional) ... ok
test_hessian_output_vectorize (__main__.TestAutogradFunctional) ... ok
test_hessian_scalar (__main__.TestAutogradFunctional) ... ok
test_hessian_scalar_vectorize (__main__.TestAutogradFunctional) ... ok
test_hessian_vectorize_correctness_multi_input (__main__.TestAutogradFunctional) ... ok
test_hessian_vectorize_correctness_simple (__main__.TestAutogradFunctional) ... ok
test_hessian_vectorize_correctness_unrelated_outputs (__main__.TestAutogradFunctional) ... ok
test_hessian_vectorize_raises_no_warnings (__main__.TestAutogradFunctional) ... ok
test_hvp_create_graph (__main__.TestAutogradFunctional) ... ok
test_hvp_err_check (__main__.TestAutogradFunctional) ... ok
test_hvp_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_hvp_output (__main__.TestAutogradFunctional) ... ok
test_hvp_scalar (__main__.TestAutogradFunctional) ... ok
test_jacobian_create_graph (__main__.TestAutogradFunctional) ... ok
test_jacobian_create_graph_vectorize (__main__.TestAutogradFunctional) ... ok
test_jacobian_err_check (__main__.TestAutogradFunctional) ... ok
test_jacobian_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_jacobian_err_check_strict_vectorize (__main__.TestAutogradFunctional) ... ok
test_jacobian_err_check_vectorize (__main__.TestAutogradFunctional) ... ok
test_jacobian_match_vjp_jvp (__main__.TestAutogradFunctional) ... ok
test_jacobian_output (__main__.TestAutogradFunctional) ... ok
test_jacobian_output_vectorize (__main__.TestAutogradFunctional) ... ok
test_jacobian_scalar (__main__.TestAutogradFunctional) ... ok
test_jacobian_scalar_vectorize (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_different_devices (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_different_dtype (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_multi_input (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_multi_input_multi_output (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_simple (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_unrelated_outputs (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_correctness_zero_dim (__main__.TestAutogradFunctional) ... ok
test_jacobian_vectorize_raises_no_warnings (__main__.TestAutogradFunctional) ... ok
test_jvp_create_graph (__main__.TestAutogradFunctional) ... ok
test_jvp_err_check (__main__.TestAutogradFunctional) ... ok
test_jvp_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_jvp_output (__main__.TestAutogradFunctional) ... ok
test_jvp_scalar (__main__.TestAutogradFunctional) ... ok
test_vhp_create_graph (__main__.TestAutogradFunctional) ... ok
test_vhp_err_check (__main__.TestAutogradFunctional) ... ok
test_vhp_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_vhp_output (__main__.TestAutogradFunctional) ... ok
test_vhp_scalar (__main__.TestAutogradFunctional) ... ok
test_vjp_create_graph (__main__.TestAutogradFunctional) ... ok
test_vjp_err_check (__main__.TestAutogradFunctional) ... ok
test_vjp_err_check_strict (__main__.TestAutogradFunctional) ... ok
test_vjp_output (__main__.TestAutogradFunctional) ... ok
test_vjp_scalar (__main__.TestAutogradFunctional) ... ok
test_cat_r_to_c (__main__.TestMultithreadAutograd) ... ok
test_fork_join_in_middle (__main__.TestMultithreadAutograd) ... ok
test_preserve_backtrace (__main__.TestMultithreadAutograd) ... ok
test_python_thread_in_middle (__main__.TestMultithreadAutograd) ... ok
test_simple_backward (__main__.TestMultithreadAutograd) ... ok
test_simple_backward_same_input (__main__.TestMultithreadAutograd) ... ok
======================================================================
ERROR: test_det_batched_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5098, in check
run_gradcheck, f_args_variable, f_args_tensor)
File "test_autograd.py", line 4970, in run_functional_checks
output_variable, f_args_variable)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 13.6702],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_det_batched_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5098, in check
run_gradcheck, f_args_variable, f_args_tensor)
File "test_autograd.py", line 4970, in run_functional_checks
output_variable, f_args_variable)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0053, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, -0.0053],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_det_batched_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5098, in check
run_gradcheck, f_args_variable, f_args_tensor)
File "test_autograd.py", line 4970, in run_functional_checks
output_variable, f_args_variable)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-0.2434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.3661, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.2025, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-0.2434, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.3661, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.2540, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.4036, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.2540, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.1554, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -4.5085, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.6645],
[ 0.0000, 0.0000, -0.1119],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.0437],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0839],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0615],
[ 0.0000, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_det_batched_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5098, in check
run_gradcheck, f_args_variable, f_args_tensor)
File "test_autograd.py", line 4970, in run_functional_checks
output_variable, f_args_variable)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[-19.1029, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 14.4764, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ -8.8927, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 14.4764, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-11.2401, 0.0000, 0.0000],
[ 0.0000, 233.4561, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 88.9630, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 191.0800, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 83.8901, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 198.9093, 0.0000],
[ 0.0000, 0.0000, 22.3160],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 73.1214],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -38.7254],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.1171],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -15.6098],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_det_batched_symmetric_psd_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5098, in check
run_gradcheck, f_args_variable, f_args_tensor)
File "test_autograd.py", line 4970, in run_functional_checks
output_variable, f_args_variable)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[-19.1028, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 14.4762, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ -8.8926, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 14.4762, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-11.2400, 0.0000, 0.0000],
[ 0.0000, 233.4535, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 88.9618, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 191.0777, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 83.8889, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 198.9069, 0.0000],
[ 0.0000, 0.0000, 22.3159],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 73.1209],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -38.7251],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 21.1170],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -15.6097],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_inverse_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 461, in gradcheck
"Gradients failed to compare equal for grad output = 1j. ")
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Gradients failed to compare equal for grad output = 1j. Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
analytical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
======================================================================
ERROR: test_inverse_batched_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_logdet_batched_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., nan, ..., nan, nan, nan],
[0., 0., nan, ..., nan, nan, nan],
[0., 0., nan, ..., nan, nan, nan],
...,
[0., 0., nan, ..., nan, nan, nan],
[0., 0., nan, ..., nan, nan, nan],
[0., 0., nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_logdet_batched_distinct_singular_values_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.0000, 0.0000, 0.0000],
[ 2.7204, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-2.1071, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-5.6833, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[-5.2527, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 3.0089, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -2.9930, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 3.2663, 0.0000],
[ 0.0000, 1.7986, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 4.2149, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, -2.0348, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 11.9362],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 2.0813],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 3.1800],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -2.3453],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -4.2401],
[ 0.0000, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_logdet_batched_symmetric_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[-1.2152, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[-1.8285, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[-1.0114, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 1.2152, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[-1.8285, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, -0.6911, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 1.0981, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.6911, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.4229, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 12.2664, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan],
[ 0.0000, 0.0000, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_logdet_batched_symmetric_pd_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.4600, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.1754, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.3767, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.1654, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.3920, 0.0000],
[ nan, 0.0000, 0.3300],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 1.0670],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, -0.5834],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.3122],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, -0.2314],
[ nan, 0.0000, 0.0000],
[ nan, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')
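All of the failures in this log come from `torch.autograd.gradcheck`, which builds a numerical Jacobian by central finite differences and compares it entry-by-entry against the analytical Jacobian from autograd; a single `nan` in either tensor (as in the dumps above) fails the comparison. A minimal, torch-free sketch of that comparison follows — the helper names are illustrative and not the actual gradcheck internals:

```python
import math

def numerical_jacobian(f, x, eps=1e-6):
    # Central finite differences: J[i][j] ~= d f_i / d x_j
    fx = f(x)
    J = [[0.0] * len(x) for _ in range(len(fx))]
    for j in range(len(x)):
        xp = list(x); xp[j] += eps
        xm = list(x); xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(len(fx)):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

def jacobians_close(num, ana, atol=1e-5):
    # Sketch of the check gradcheck performs (not the real implementation):
    # every entry must be finite and within tolerance, so one nan anywhere
    # is enough to produce a "Jacobian mismatch" failure.
    for row_n, row_a in zip(num, ana):
        for n_ij, a_ij in zip(row_n, row_a):
            if math.isnan(n_ij) or math.isnan(a_ij):
                return False
            if abs(n_ij - a_ij) > atol:
                return False
    return True

# f(x) = (x0 * x1, x0 + x1) has analytical Jacobian [[x1, x0], [1, 1]]
f = lambda x: [x[0] * x[1], x[0] + x[1]]
x = [2.0, 3.0]
num = numerical_jacobian(f, x)
ana = [[3.0, 2.0], [1.0, 1.0]]
print(jacobians_close(num, ana))                        # True
print(jacobians_close(num, [[float('nan')] * 2] * 2))   # False: nan => mismatch
```

This is why the errors above report a mismatch even when both tensors are all-`nan`: `nan` never compares close to anything, including itself.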
======================================================================
ERROR: test_lu_square_batch_no_info_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
...,
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_lu_square_batch_with_info_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
...,
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_lu_square_many_batches_no_info_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
...,
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_lu_square_many_batches_with_info_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
...,
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.],
[0., 0., 0., ..., nan, nan, 0.]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_matrix_power_n=-2_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_solve_batched_broadcast_A_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 461, in gradcheck
"Gradients failed to compare equal for grad output = 1j. ")
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Gradients failed to compare equal for grad output = 1j. Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
analytical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
======================================================================
ERROR: test_solve_batched_broadcast_A_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_solve_batched_broadcast_b_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 461, in gradcheck
"Gradients failed to compare equal for grad output = 1j. ")
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Gradients failed to compare equal for grad output = 1j. Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
analytical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
======================================================================
ERROR: test_solve_batched_broadcast_b_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_solve_batched_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 461, in gradcheck
"Gradients failed to compare equal for grad output = 1j. ")
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Gradients failed to compare equal for grad output = 1j. Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
analytical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
======================================================================
ERROR: test_solve_batched_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
======================================================================
ERROR: test_solve_batched_dims_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 461, in gradcheck
"Gradients failed to compare equal for grad output = 1j. ")
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Gradients failed to compare equal for grad output = 1j. Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
analytical:tensor([[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
...,
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj],
[nan+nanj, nan+nanj, nan+nanj, ..., nan+nanj, nan+nanj, nan+nanj]],
device='cuda:0')
======================================================================
ERROR: test_solve_batched_dims_cuda (__main__.TestAutogradDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 874, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 295, in instantiated_test
raise rte
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
result = test_fn(self, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_device_type.py", line 581, in dep_fn
return fn(slf, device, *args, **kwargs)
File "test_autograd.py", line 5176, in do_test
check(name)
File "test_autograd.py", line 5084, in check
check_batched_grad=check_batched_grad)
File "test_autograd.py", line 4948, in run_grad_and_gradgrad_checks
check_batched_grad=check_batched_grad))
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_utils.py", line 1911, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 468, in gradcheck
checkIfNumericalAnalyticAreClose(a, n, j)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 450, in checkIfNumericalAnalyticAreClose
'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py", line 367, in fail_test
raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
analytical:tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
----------------------------------------------------------------------
Ran 2715 tests in 1521.851s
FAILED (errors=24, skipped=25, expected failures=1)
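For reference, the failing tests above reach `torch.autograd.gradcheck(fn, inputs, **kwargs)` (see the tracebacks), which compares a numerically estimated Jacobian against the analytical one and raises `RuntimeError` on mismatch — here producing all-NaN tensors on ROCm. A minimal self-contained sketch of that API (my own example, not taken from this log; `fn` and the input shape are illustrative):

```python
import torch

# gradcheck wants double precision and requires_grad=True inputs;
# it returns True when the numerical and analytical Jacobians agree,
# and raises RuntimeError (as in the failures above) when they do not.
def fn(x):
    return x.sin().sum()

inputs = (torch.randn(4, dtype=torch.double, requires_grad=True),)
assert torch.autograd.gradcheck(fn, inputs)
```

The tests here additionally pass `check_batched_grad=...`, so a mismatch in the batched (vmap-style) gradient path can also trigger the same failure message.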
Total time based on python measurements: 24.651ms
CPU time measurement python side overhead: 1.23%
-------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
-------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::linear 3.59% 5.751us 100.00% 160.022us 160.022us 1
aten::t 8.04% 12.864us 10.60% 16.962us 16.962us 1
aten::transpose 1.22% 1.954us 2.56% 4.098us 4.098us 1
aten::as_strided 1.34% 2.144us 1.34% 2.144us 2.144us 1
aten::addmm 76.83% 122.942us 85.81% 137.309us 137.309us 1
aten::empty 0.54% 0.872us 0.54% 0.872us 0.872us 1
aten::expand 1.36% 2.184us 1.82% 2.915us 2.915us 1
aten::as_strided 0.46% 0.731us 0.46% 0.731us 0.731us 1
aten::copy_ 6.61% 10.580us 6.61% 10.580us 10.580us 1
-------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 160.022us
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::normal_ 26.18% 18.475us 26.18% 18.475us 9.237us 2
aten::sum 12.67% 8.945us 15.73% 11.101us 11.101us 1
aten::add 12.55% 8.857us 12.55% 8.857us 8.857us 1
aten::copy_ 9.16% 6.462us 9.16% 6.462us 3.231us 2
aten::randn 6.42% 4.530us 35.61% 25.128us 12.564us 2
torch::autograd::AccumulateGrad 5.81% 4.097us 20.00% 14.116us 7.058us 2
aten::expand 4.05% 2.856us 5.18% 3.657us 3.657us 1
aten::empty 3.85% 2.715us 3.85% 2.715us 0.905us 3
aten::empty_strided 3.39% 2.394us 3.39% 2.394us 0.798us 3
aten::new_empty_strided 2.94% 2.075us 5.04% 3.557us 1.779us 2
SumBackward0 2.80% 1.973us 7.98% 5.630us 5.630us 1
aten::ones_like 2.37% 1.674us 6.42% 4.529us 4.529us 1
aten::empty_like 2.36% 1.663us 3.65% 2.575us 2.575us 1
aten::as_strided 2.29% 1.613us 2.29% 1.613us 0.807us 2
AddBackward0 1.72% 1.212us 1.72% 1.212us 1.212us 1
aten::fill_ 1.46% 1.032us 1.46% 1.032us 0.516us 2
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 70.573us
-------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
-------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------
aten::linear 3.40% 4.309us 56.02% 71.075us 71.075us 1 [[128, 20], [30, 20], [30]]
aten::t 5.07% 6.433us 8.54% 10.831us 10.831us 1 [[30, 20]]
aten::transpose 2.57% 3.266us 3.47% 4.398us 4.398us 1 [[30, 20], [], []]
aten::as_strided 0.89% 1.132us 0.89% 1.132us 1.132us 1 [[30, 20], [], [], []]
aten::addmm 37.79% 47.940us 44.09% 55.935us 55.935us 1 [[30], [128, 20], [20, 30], [], []]
aten::empty 0.49% 0.621us 0.49% 0.621us 0.621us 1 [[], [], [], [], [], []]
aten::expand 1.40% 1.774us 1.86% 2.355us 2.355us 1 [[30], [], []]
aten::as_strided 0.46% 0.581us 0.46% 0.581us 0.581us 1 [[30], [], [], []]
aten::copy_ 3.96% 5.019us 3.96% 5.019us 5.019us 1 [[128, 30], [128, 30], []]
aten::linear 2.49% 3.156us 43.98% 55.796us 55.796us 1 [[128, 30], [40, 30], [40]]
aten::t 4.45% 5.651us 7.46% 9.468us 9.468us 1 [[40, 30]]
aten::transpose 1.99% 2.525us 3.01% 3.817us 3.817us 1 [[40, 30], [], []]
aten::as_strided 1.02% 1.292us 1.02% 1.292us 1.292us 1 [[40, 30], [], [], []]
aten::addmm 26.19% 33.232us 34.03% 43.172us 43.172us 1 [[40], [128, 30], [30, 40], [], []]
aten::empty 0.70% 0.882us 0.70% 0.882us 0.882us 1 [[], [], [], [], [], []]
aten::expand 1.47% 1.864us 2.01% 2.545us 2.545us 1 [[40], [], []]
aten::as_strided 0.54% 0.681us 0.54% 0.681us 0.681us 1 [[40], [], [], []]
aten::copy_ 5.13% 6.513us 5.13% 6.513us 6.513us 1 [[128, 40], [128, 40], []]
-------------------- ------------ ------------ ------------ ------------ ------------ ------------ ---------------------------------------
Self CPU time total: 126.871us
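Tables like the three above come from the autograd profiler: the first two from `prof.key_averages().table()`, and the last (with the "Input Shapes" column, e.g. `[[128, 20], [30, 20], [30]]`) from profiling with `record_shapes=True` and grouping by input shape. A hedged sketch of how such output is produced (the tensor sizes mirror the `aten::linear` rows above; exact timings will differ):

```python
import torch
import torch.nn.functional as F

x = torch.randn(128, 20)
w = torch.randn(30, 20)
b = torch.randn(30)

# record_shapes=True is what populates the "Input Shapes" column
with torch.autograd.profiler.profile(record_shapes=True) as prof:
    y = F.linear(x, w, b)

# group_by_input_shape=True yields one row per (op, input-shapes) pair,
# matching the format of the last table in this log
print(prof.key_averages(group_by_input_shape=True)
          .table(sort_by="self_cpu_time_total"))
```

The printed table has the same columns as above (Name, Self CPU %, Self CPU, CPU total %, CPU total, CPU time avg, # of Calls, Input Shapes).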
test_autograd failed!
Running benchmark_utils/test_benchmark_utils ... [2021-04-23 13:17:52.953244]
Executing ['/usr/bin/python3', 'benchmark_utils/test_benchmark_utils.py', '-v'] ... [2021-04-23 13:17:52.953279]
test_adaptive_timer (__main__.TestBenchmarkUtils) ... ok
test_collect_callgrind (__main__.TestBenchmarkUtils) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_collect_cpp_callgrind (__main__.TestBenchmarkUtils) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_compare (__main__.TestBenchmarkUtils) ... ok
test_cpp_timer (__main__.TestBenchmarkUtils) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_fuzzer (__main__.TestBenchmarkUtils) ... ok
test_manipulate_callgrind_stats (__main__.TestBenchmarkUtils) ... ok
test_timer (__main__.TestBenchmarkUtils) ... ok
----------------------------------------------------------------------
Ran 8 tests in 1.265s
OK (skipped=3)
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_binary_ufuncs ... [2021-04-23 13:17:55.195354]
Executing ['/usr/bin/python3', 'test_binary_ufuncs.py', '-v'] ... [2021-04-23 13:17:55.195400]
test___add___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___add___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___and___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___divmod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___eq___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___floordiv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ge___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___gt___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___iadd___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___iand___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___idivmod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ifloordiv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ilshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imatmul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___imul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ior___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ipow___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___irshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___isub___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___itruediv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ixor___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___le___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___lshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___lt___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___matmul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___mod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___mul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ne___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___or___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___pow___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___radd___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rand___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rdivmod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rfloordiv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rlshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmatmul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmod___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rmul___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___ror___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rpow___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rrshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rshift___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rsub___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rtruediv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___rxor___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___sub___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___truediv___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test___xor___not_implemented_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_add_broadcast_empty_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_add_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_add_with_tail_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_addcmul_scalars_as_floats_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_atan2_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_atan2_edgecases_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_binary_op_mem_overlap_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_binary_op_scalar_device_unspecified_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'fewer than 2 devices detected'
test_binary_ops_with_scalars_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_bitwise_and_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_bitwise_ops_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_bitwise_or_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_bitwise_xor_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_bool_tensor_comparison_ops_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_cast_binary_op_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_cdiv_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_cmul_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_check_for_scalar_overflow_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_check_for_zerodim_tensor_overflow_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_must_take_bool_output_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bfloat16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bfloat16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bool_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_bool_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_complex_scalar_pow_tensor_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_copysign_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_cpow_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_cpu_tensor_pow_cuda_scalar_tensor_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_cremainder_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_cross_device_binary_ops_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'fewer than 2 devices detected'
test_cross_device_inplace_error_msg_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_csub_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_cuda_tensor_pow_scalar_tensor_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_div_and_floordiv_script_vs_python_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_div_and_floordiv_vs_python_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_div_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_modes_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_nonfinite_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_nonfinite_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_nonfinite_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_nonfinite_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_div_rounding_numpy_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_divmul_scalar_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_float_power_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:2454: RuntimeWarning: invalid value encountered in float_power
expected_scalar_base = torch.from_numpy(np.float_power(i, to_np(exp)))
ok
test_float_power_cpu_bfloat16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex128_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_complex64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_float_power_exceptions_cpu (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:2480: UserWarning: An output with one or more elements was resized since it had shape [1], which does not match the required output shape [5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /pytorch/aten/src/ATen/native/Resize.cpp:19.)
torch.float_power(base, exp, out=out)
ok
test_float_scalar_pow_float_tensor_cpu (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:651: RuntimeWarning: invalid value encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
test_binary_ufuncs.py:651: RuntimeWarning: divide by zero encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
ok
test_floor_divide_out_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_out_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_scalar_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_tensor_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_zero_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_zero_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_zero_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_zero_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_floor_divide_zero_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_float_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_float_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_float_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_integral_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_integral_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_integral_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_integral_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_by_zero_integral_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_fmod_remainder_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_gcd_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_gcd_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_gcd_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_gcd_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_gcd_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_complex_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_complex_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_complex_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_complex_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_heaviside_cross_device_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_hypot_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_hypot_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_hypot_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_idiv_and_ifloordiv_vs_python_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_inplace_comparison_ops_require_inputs_have_same_dtype_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_inplace_division_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_inplace_dunders_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_int_pow_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_int_tensor_pow_neg_ints_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_lcm_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_lcm_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_lcm_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_ldexp_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_lerp_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_logaddexp2_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logaddexp2_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logaddexp_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logaddexp_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_bfloat16 (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:1977: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at /pytorch/aten/src/ATen/native/Copy.cpp:219.)
self.assertEqual(expected_res.bool(), getattr(a, op)(b))
ok
test_logical_and_cpu_complex128_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex128_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_complex64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_and_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex128_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_complex64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_or_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex128_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_complex64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_logical_xor_with_nontrivial_alignment_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_long_tensor_pow_floats_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex128_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_complex_cpu_complex64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_cross_device_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Only runs on cuda'
test_maximum_minimum_float_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_nan_and_inf_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_nan_and_inf_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_nan_and_inf_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_float_nan_and_inf_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_int_and_bool_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bfloat16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_maximum_minimum_type_promotion_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_min_max_binary_op_nan_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_min_max_binary_op_nan_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_mul_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_mul_intertype_scalar_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_mul_intertype_scalar_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_muldiv_scalar_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_nextafter_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_nextafter_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_out_resize_warning_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_pow_cpu (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:637: UserWarning: An output with one or more elements was resized since it had shape [1], which does not match the required output shape [100, 100].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /pytorch/aten/src/ATen/native/Resize.cpp:19.)
torch.pow(m1, 1, out=out)
ok
test_pow_scalar_overloads_mem_overlap_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_rdiv_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_remainder_fmod_large_dividend_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_remainder_fmod_large_dividend_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_remainder_overflow_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_rpow_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_signed_shift_cpu_int16 (__main__.TestBinaryUfuncsCPU)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cpu_int32 (__main__.TestBinaryUfuncsCPU)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cpu_int64 (__main__.TestBinaryUfuncsCPU)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cpu_int8 (__main__.TestBinaryUfuncsCPU)
Ensure that signed integer bit shifting works as expected. ... ok
test_sub_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_bool (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_complex128 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_complex64 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_float16 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_float64 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_int16 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_int32 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_int64 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_int8 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_cpu_uint8 (__main__.TestBinaryUfuncsCPU) ... ok
test_sub_typing_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_tensor_pow_tensor_cpu (__main__.TestBinaryUfuncsCPU) ... test_binary_ufuncs.py:651: RuntimeWarning: invalid value encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
test_binary_ufuncs.py:651: RuntimeWarning: divide by zero encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
ok
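The two NumPy RuntimeWarnings logged by test_tensor_pow_tensor above come from np.power itself, which the test uses as a reference: a negative base with a fractional exponent yields NaN ("invalid value"), and a zero base with a negative exponent yields infinity ("divide by zero"). A minimal sketch:

```python
import warnings
import numpy as np

# Capture the warnings np.power emits for the two edge cases.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with np.errstate(invalid="warn", divide="warn"):
        np.power(-1.0, 0.5)   # NaN result -> "invalid value encountered"
        np.power(0.0, -1.0)   # inf result -> "divide by zero encountered"
messages = [str(w.message) for w in caught]
print(messages)
```

These warnings are expected side effects of the reference computation, not test failures, which is why the test still reports ok.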
test_trapz_cpu (__main__.TestBinaryUfuncsCPU) ... ok
test_true_divide_out_cpu_bfloat16 (__main__.TestBinaryUfuncsCPU) ... ok
test_true_divide_out_cpu_float32 (__main__.TestBinaryUfuncsCPU) ... ok
test_xlogy_bfloat16_cpu (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_bool_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float16_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float32_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_float64_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int16_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int32_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int64_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_int8_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_bool (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_float16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_float32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_float64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_int16 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_int32 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_int64 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_int8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_cpu_uint8_uint8 (__main__.TestBinaryUfuncsCPU) ... skipped 'Scipy required for the test.'
test_xlogy_scalar_type_promotion_cpu (__main__.TestBinaryUfuncsCPU) ... ok
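The large block of skipped test_xlogy_* cases above compares torch.xlogy against SciPy's scipy.special.xlogy, so they are skipped when SciPy is not installed. The semantics under test can be sketched with the standard library alone: xlogy(x, y) = x * log(y), defined to be 0 when x == 0 (even for y == 0), which avoids the NaN from 0 * log(0).

```python
import math

def xlogy(x: float, y: float) -> float:
    """Sketch of xlogy semantics: x * log(y), with the x == 0 special case."""
    if x == 0.0:
        return 0.0          # defined as 0 even when log(y) would be -inf
    return x * math.log(y)

print(xlogy(0.0, 0.0))      # special case: 0.0, not nan
print(xlogy(2.0, math.e))   # ordinary case: 2 * log(e) = 2.0
```

The function name here mirrors the SciPy/PyTorch one for clarity; this is only an illustrative reimplementation, not the library code the tests exercise.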
test___add___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___add___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___and___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___divmod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___eq___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___floordiv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ge___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___gt___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iadd___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___iand___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___idivmod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ifloordiv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ilshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imatmul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___imul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ior___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ipow___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___irshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___isub___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___itruediv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ixor___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___le___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___lt___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___matmul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___mul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ne___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___or___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___pow___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___radd___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rand___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rdivmod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rfloordiv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rlshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmatmul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmod___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rmul___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___ror___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rpow___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rrshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rshift___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rsub___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rtruediv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___rxor___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___sub___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___truediv___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test___xor___not_implemented_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_add_broadcast_empty_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_add_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_add_with_tail_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_addcmul_scalars_as_floats_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_atan2_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_atan2_edgecases_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_binary_op_mem_overlap_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_binary_op_scalar_device_unspecified_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'fewer than 2 devices detected'
test_binary_ops_with_scalars_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_bitwise_and_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_bitwise_ops_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_bitwise_or_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_bitwise_xor_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_bool_tensor_comparison_ops_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_cast_binary_op_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_cdiv_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_cmul_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_comparison_ops_check_for_scalar_overflow_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_comparison_ops_check_for_zerodim_tensor_overflow_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_comparison_ops_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_comparison_ops_must_take_bool_output_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_comparison_ops_type_promotion_and_broadcasting_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bfloat16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bfloat16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bool_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_bool_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_comparison_ops_type_promotion_and_broadcasting_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_complex_scalar_pow_tensor_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_copysign_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_cpow_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_cpu_tensor_pow_cuda_scalar_tensor_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_cremainder_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_cross_device_binary_ops_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'fewer than 2 devices detected'
test_cross_device_inplace_error_msg_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_csub_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_cuda_tensor_pow_scalar_tensor_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_and_floordiv_script_vs_python_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_and_floordiv_vs_python_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_modes_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_nonfinite_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_nonfinite_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_nonfinite_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_nonfinite_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_div_rounding_numpy_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_divmul_scalar_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... test_binary_ufuncs.py:2454: RuntimeWarning: invalid value encountered in float_power
expected_scalar_base = torch.from_numpy(np.float_power(i, to_np(exp)))
ok
test_float_power_cuda_bfloat16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex128_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_complex64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_float_power_exceptions_cuda (__main__.TestBinaryUfuncsCUDA) ... test_binary_ufuncs.py:2480: UserWarning: An output with one or more elements was resized since it had shape [1], which does not match the required output shape [5].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /pytorch/aten/src/ATen/native/Resize.cpp:19.)
torch.float_power(base, exp, out=out)
ok
test_float_scalar_pow_float_tensor_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_out_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_out_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_scalar_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_tensor_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_floor_divide_zero_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_floor_divide_zero_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_floor_divide_zero_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_floor_divide_zero_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_floor_divide_zero_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_fmod_remainder_by_zero_float_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_by_zero_float_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_by_zero_float_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_by_zero_integral_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_fmod_remainder_by_zero_integral_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_fmod_remainder_by_zero_integral_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_fmod_remainder_by_zero_integral_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_fmod_remainder_by_zero_integral_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped "test doesn't currently work on the ROCm stack"
test_fmod_remainder_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_fmod_remainder_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_gcd_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_gcd_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_gcd_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_gcd_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_gcd_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_complex_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_complex_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_complex_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_complex_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cross_device_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_heaviside_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_hypot_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_hypot_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_idiv_and_ifloordiv_vs_python_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_inplace_comparison_ops_require_inputs_have_same_dtype_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_inplace_division_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_inplace_dunders_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_int_pow_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_int_tensor_pow_neg_ints_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_lcm_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_lcm_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_lcm_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_ldexp_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_lerp_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_logaddexp2_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logaddexp2_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logaddexp_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logaddexp_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex128_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_complex64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_and_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex128_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_complex64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_or_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex128_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_complex64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_logical_xor_with_nontrivial_alignment_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_long_tensor_pow_floats_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex128_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_complex_cuda_complex64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_cross_device_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_nan_and_inf_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_nan_and_inf_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_nan_and_inf_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_float_nan_and_inf_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_int_and_bool_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bfloat16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_maximum_minimum_type_promotion_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_min_max_binary_op_nan_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_min_max_binary_op_nan_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_min_max_binary_op_nan_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_mul_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_mul_intertype_scalar_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_mul_intertype_scalar_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_mul_intertype_scalar_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_bool (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_complex128 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_complex64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_muldiv_scalar_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... ok
test_nextafter_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_nextafter_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_out_resize_warning_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_pow_cuda (__main__.TestBinaryUfuncsCUDA) ... test_binary_ufuncs.py:637: UserWarning: An output with one or more elements was resized since it had shape [1], which does not match the required output shape [100, 100]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /pytorch/aten/src/ATen/native/Resize.cpp:19.)
torch.pow(m1, 1, out=out)
ok
test_pow_scalar_overloads_mem_overlap_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_rdiv_cuda_complex128 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_complex64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_rdiv_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_remainder_fmod_large_dividend_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_remainder_fmod_large_dividend_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... ok
test_remainder_overflow_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_rpow_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_signed_shift_cuda_int16 (__main__.TestBinaryUfuncsCUDA)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cuda_int32 (__main__.TestBinaryUfuncsCUDA)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cuda_int64 (__main__.TestBinaryUfuncsCUDA)
Ensure that signed integer bit shifting works as expected. ... ok
test_signed_shift_cuda_int8 (__main__.TestBinaryUfuncsCUDA)
Ensure that signed integer bit shifting works as expected. ... ok
test_sub_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_complex128 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_complex64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_cuda_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Only runs on cpu'
test_sub_typing_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_tensor_pow_tensor_cuda (__main__.TestBinaryUfuncsCUDA) ... test_binary_ufuncs.py:651: RuntimeWarning: invalid value encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
test_binary_ufuncs.py:651: RuntimeWarning: divide by zero encountered in power
np_res = np.power(to_np(base), to_np(np_exponent))
ok
test_trapz_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
test_true_divide_out_cuda_bfloat16 (__main__.TestBinaryUfuncsCUDA) ... ok
test_true_divide_out_cuda_float32 (__main__.TestBinaryUfuncsCUDA) ... ok
test_xlogy_bfloat16_cuda (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_bool_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float16_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float32_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_float64_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int16_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int32_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int64_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_int8_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_bool (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_float16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_float32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_float64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_int16 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_int32 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_int64 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_int8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_cuda_uint8_uint8 (__main__.TestBinaryUfuncsCUDA) ... skipped 'Scipy required for the test.'
test_xlogy_scalar_type_promotion_cuda (__main__.TestBinaryUfuncsCUDA) ... ok
----------------------------------------------------------------------
Ran 3275 tests in 65.675s
OK (skipped=218)
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_bundled_inputs ... [2021-04-23 13:19:01.774678]
Executing ['/usr/bin/python3', 'test_bundled_inputs.py', '-v'] ... [2021-04-23 13:19:01.774725]
test_large_tensor_with_inflation (__main__.TestBundledInputs) ... ok
test_multiple_methods_with_inputs (__main__.TestBundledInputs) ... ok
test_multiple_methods_with_inputs_failures (__main__.TestBundledInputs) ... ok
test_non_tensors (__main__.TestBundledInputs) ... ok
test_rejected_tensors (__main__.TestBundledInputs) ... ok
test_single_tensors (__main__.TestBundledInputs) ... ok
----------------------------------------------------------------------
Ran 6 tests in 0.301s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_complex ... [2021-04-23 13:19:02.744074]
Executing ['/usr/bin/python3', 'test_complex.py', '-v'] ... [2021-04-23 13:19:02.744121]
test_dtype_inference_cpu_float32 (__main__.TestComplexTensorCPU) ... ok
test_dtype_inference_cpu_float64 (__main__.TestComplexTensorCPU) ... ok
test_to_list_cpu_complex128 (__main__.TestComplexTensorCPU) ... ok
test_to_list_cpu_complex64 (__main__.TestComplexTensorCPU) ... ok
test_dtype_inference_cuda_float32 (__main__.TestComplexTensorCUDA) ... ok
test_dtype_inference_cuda_float64 (__main__.TestComplexTensorCUDA) ... ok
test_to_list_cuda_complex128 (__main__.TestComplexTensorCUDA) ... ok
test_to_list_cuda_complex64 (__main__.TestComplexTensorCUDA) ... ok
----------------------------------------------------------------------
Ran 8 tests in 0.221s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_cpp_api_parity ... [2021-04-23 13:19:03.759937]
Executing ['/usr/bin/python3', 'test_cpp_api_parity.py', '-v'] ... [2021-04-23 13:19:03.759984]
Fail to import hypothesis in common_utils, tests are not derandomized
test_torch_nn_AdaptiveAvgPool1d (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool1d_one_output (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool1d_one_output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_single (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_single_1x1output (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_single_1x1output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_single_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_tuple (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_tuple_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_tuple_none (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_single (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_single_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_tuple (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_tuple_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_tuple_none (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool1d (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_single (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_single_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_tuple (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_tuple_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_tuple_none (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool2d_tuple_none_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_single (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_single_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_single_nonatomic (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_single_nonatomic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple_nonatomic (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple_nonatomic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple_none (__main__.TestCppApiParity) ... ok
test_torch_nn_AdaptiveMaxPool3d_tuple_none_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d_stride_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool1d_stride_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor_stride_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_divisor_stride_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_stride_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool2d_stride_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride1_pad0_gpu_input (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride1_pad0_gpu_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_general_output (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_general_output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride1_pad0_gpu_input (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride1_pad0_gpu_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_fixedkw_output (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_fixedkw_output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_general_output (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_general_output_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_input_nooverlap (__main__.TestCppApiParity) ... ok
test_torch_nn_AvgPool3d_stride_pad_gpu_input_nooverlap_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss_scalar_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss_scalar_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_BCELoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss_scalar_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss_scalar_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_BCEWithLogitsLoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_3d_input (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_3d_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_3d_input_not_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_3d_input_not_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_affine_simple_average (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_affine_simple_average_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_not_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_not_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_not_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm1d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_2d_simple_average (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_2d_simple_average_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_momentum (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_momentum_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_not_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_not_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_not_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm2d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_3d_simple_average (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_3d_simple_average_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_momentum (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_momentum_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_not_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_not_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_not_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_BatchNorm3d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CELU (__main__.TestCppApiParity) ... ok
test_torch_nn_CELU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CELU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_CELU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_2d_int_target_lengths_tensors (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_2d_int_target_lengths_tensors_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_2d_lengths_tensors (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_2d_lengths_tensors_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_lengths_tensors (__main__.TestCppApiParity) ... ok
test_torch_nn_CTCLoss_lengths_tensors_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad1d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad1d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad1d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad2d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad2d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad2d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad3d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad3d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad3d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConstantPad3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_groups (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_groups_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad1 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad1_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad1size1 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad1size1_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad2size1 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_pad2size1_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_reflect_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_reflect_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv1d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_padded (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_padded_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_strided (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_strided_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_with_multiplier (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_depthwise_with_multiplier_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_groups (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_groups_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_groups_thnn (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_groups_thnn_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_padding (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_padding_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_reflect_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_reflect_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_strided (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_strided_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv2d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_1x1x1_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_1x1x1_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_dilated_strided (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_dilated_strided_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_groups (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_groups_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_stride_padding (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_stride_padding_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_zero_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_zero_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok
test_torch_nn_Conv3d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_groups (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_groups_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose1d_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_groups (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_groups_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose2d_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose3d (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose3d_dilated (__main__.TestCppApiParity) ... ok
test_torch_nn_ConvTranspose3d_dilated_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CosineEmbeddingLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_CosineEmbeddingLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CosineEmbeddingLoss_margin (__main__.TestCppApiParity) ... ok
test_torch_nn_CosineEmbeddingLoss_margin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossEntropyLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossEntropyLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossEntropyLoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossEntropyLoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossMapLRN2d (__main__.TestCppApiParity) ... ok
test_torch_nn_CrossMapLRN2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ELU (__main__.TestCppApiParity) ... ok
test_torch_nn_ELU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ELU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_ELU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Embedding (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_max (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_max_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_mean (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_mean_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_sparse (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_sparse_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_sum (__main__.TestCppApiParity) ... ok
test_torch_nn_EmbeddingBag_sum_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Embedding_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Embedding_sparse (__main__.TestCppApiParity) ... ok
test_torch_nn_Embedding_sparse_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Flatten (__main__.TestCppApiParity) ... ok
test_torch_nn_Flatten_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Fold (__main__.TestCppApiParity) ... ok
test_torch_nn_Fold_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Fold_int_input (__main__.TestCppApiParity) ... ok
test_torch_nn_Fold_int_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_ratio (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_ratio_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_size (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool2d_size_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_asymsize (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_asymsize_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_ratio (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_ratio_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_size (__main__.TestCppApiParity) ... ok
test_torch_nn_FractionalMaxPool3d_size_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GELU (__main__.TestCppApiParity) ... ok
test_torch_nn_GELU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GELU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_GELU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GLU (__main__.TestCppApiParity) ... ok
test_torch_nn_GLU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GLU_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_GLU_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine_GN (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine_GN_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine_large_batch (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_affine_large_batch_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_no_affine_IN (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_no_affine_IN_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_no_affine_LN (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_1d_no_affine_LN_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_affine_large_feature (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_affine_large_feature_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_no_affine_IN (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_no_affine_IN_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_no_affine_LN (__main__.TestCppApiParity) ... ok
test_torch_nn_GroupNorm_2d_no_affine_LN_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardshrink (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardshrink_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardshrink_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardshrink_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardtanh (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardtanh_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardtanh_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Hardtanh_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss_margin (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss_margin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss_scalar_margin (__main__.TestCppApiParity) ... ok
test_torch_nn_HingeEmbeddingLoss_scalar_margin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm1d (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm1d_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm1d_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm2d (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm2d_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm2d_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm3d (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm3d_tracking_stats (__main__.TestCppApiParity) ... ok
test_torch_nn_InstanceNorm3d_tracking_stats_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss (__main__.TestCppApiParity) ... /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2611: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release.
"reduction: 'mean' divides the total loss by both the batch size and the support size."
/home/luke/Projects/Neural/pytorch/test/cpp_api_parity/module_impl_check.py:149: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. (Triggered internally at /pytorch/torch/csrc/api/include/torch/nn/functional/loss.h:57.)
cpp_test_fn(arg_dict_file_path, module_file_path, forward_output_file_path, backward_grad_dict_file_path)
ok
test_torch_nn_KLDivLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_log_target (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_log_target_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_scalar_log_target (__main__.TestCppApiParity) ... ok
test_torch_nn_KLDivLoss_scalar_log_target_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_L1Loss (__main__.TestCppApiParity) ... ok
test_torch_nn_L1Loss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_L1Loss_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_L1Loss_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool1d (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool1d_norm (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool1d_norm_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool2d (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool2d_norm (__main__.TestCppApiParity) ... ok
test_torch_nn_LPPool2d_norm_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_elementwise_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_elementwise_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_empty_elementwise_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_empty_elementwise_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_no_elementwise_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_1d_no_elementwise_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_3d_elementwise_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_3d_elementwise_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_3d_no_elementwise_affine (__main__.TestCppApiParity) ... ok
test_torch_nn_LayerNorm_3d_no_elementwise_affine_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_negval (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_negval_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_negval_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_negval_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_zero_negval (__main__.TestCppApiParity) ... ok
test_torch_nn_LeakyReLU_with_zero_negval_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Linear (__main__.TestCppApiParity) ... ok
test_torch_nn_Linear_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Linear_no_bias (__main__.TestCppApiParity) ... ok
test_torch_nn_Linear_no_bias_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_2d_uneven_pad (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_2d_uneven_pad_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_3d_custom_params (__main__.TestCppApiParity) ... ok
test_torch_nn_LocalResponseNorm_3d_custom_params_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSigmoid (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSigmoid_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSigmoid_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSigmoid_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax_multiparam (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax_multiparam_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax_multiparam_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_LogSoftmax_multiparam_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss_prec (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss_prec_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_MSELoss_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MarginRankingLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_MarginRankingLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MarginRankingLoss_margin (__main__.TestCppApiParity) ... ok
test_torch_nn_MarginRankingLoss_margin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool1d (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool1d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool1d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool2d_3d_input (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool2d_3d_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool2d_4d_input (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool2d_4d_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_stride (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_stride_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_stride_padding (__main__.TestCppApiParity) ... ok
test_torch_nn_MaxPool3d_stride_padding_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelMarginLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelMarginLoss_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelMarginLoss_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelMarginLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelSoftMarginLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelSoftMarginLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelSoftMarginLoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiLabelSoftMarginLoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_margin (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_margin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_p (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_p_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_MultiMarginLoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_2d_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_dim_is_3 (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_dim_is_3_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_higher_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_higher_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights_ignore_index_neg (__main__.TestCppApiParity) ... ok
test_torch_nn_NLLLoss_weights_ignore_index_neg_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_1d_multiparam (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_1d_multiparam_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_2d_multiparam (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_2d_multiparam_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_3d_multiparam (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_3d_multiparam_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_PReLU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PixelShuffle (__main__.TestCppApiParity) ... ok
test_torch_nn_PixelShuffle_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PixelUnshuffle (__main__.TestCppApiParity) ... ok
test_torch_nn_PixelUnshuffle_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_full_loss (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_full_loss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_full_loss_no_log_input (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_full_loss_no_log_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_no_full_loss (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_no_full_loss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_no_full_loss_no_log_input (__main__.TestCppApiParity) ... ok
test_torch_nn_PoissonNLLLoss_no_full_loss_no_log_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_RReLU (__main__.TestCppApiParity) ... ok
test_torch_nn_RReLU_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_RReLU_with_up_down (__main__.TestCppApiParity) ... ok
test_torch_nn_RReLU_with_up_down_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_RReLU_with_up_down_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_RReLU_with_up_down_scalar_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_ReLU (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU6 (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU6_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU6_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU6_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_ReLU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReflectionPad2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ReplicationPad3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SELU (__main__.TestCppApiParity) ... ok
test_torch_nn_SELU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SELU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_SELU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SampleModule_has_parity (__main__.TestCppApiParity) ... ok
test_torch_nn_SampleModule_has_parity_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SampleModule_no_parity (__main__.TestCppApiParity) ... expected failure
test_torch_nn_SampleModule_no_parity_cuda (__main__.TestCppApiParity) ... expected failure
test_torch_nn_SiLU (__main__.TestCppApiParity) ... ok
test_torch_nn_SiLU_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SiLU_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_SiLU_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Sigmoid (__main__.TestCppApiParity) ... ok
test_torch_nn_Sigmoid_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Sigmoid_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Sigmoid_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SmoothL1Loss (__main__.TestCppApiParity) ... ok
test_torch_nn_SmoothL1Loss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SmoothL1Loss_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_SmoothL1Loss_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_SoftMarginLoss (__main__.TestCppApiParity) ... ok
test_torch_nn_SoftMarginLoss_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax2d (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmax_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin_multidim (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin_multidim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Softmin_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta_threshold (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta_threshold_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta_threshold_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_beta_threshold_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softplus_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink_lambda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink_lambda_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink_lambda_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Softshrink_lambda_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softsign (__main__.TestCppApiParity) ... ok
test_torch_nn_Softsign_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Softsign_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Softsign_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanh (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanh_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanh_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanh_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanhshrink (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanhshrink_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanhshrink_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Tanhshrink_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_large_value (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_large_value_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_threshold_value (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_threshold_value_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_threshold_value_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_Threshold_threshold_value_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerDecoderLayer_gelu_activation (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerDecoderLayer_gelu_activation_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerDecoderLayer_relu_activation (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerDecoderLayer_relu_activation_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerEncoderLayer_gelu_activation (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerEncoderLayer_gelu_activation_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerEncoderLayer_relu_activation (__main__.TestCppApiParity) ... ok
test_torch_nn_TransformerEncoderLayer_relu_activation_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Transformer_multilayer_coder (__main__.TestCppApiParity) ... ok
test_torch_nn_Transformer_multilayer_coder_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Unfold (__main__.TestCppApiParity) ... ok
test_torch_nn_Unfold_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_Unfold_int_input (__main__.TestCppApiParity) ... ok
test_torch_nn_Unfold_int_input_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d_negative_dims (__main__.TestCppApiParity) ... ok
test_torch_nn_ZeroPad2d_negative_dims_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_weights_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_weights_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCELoss_weights_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCEWithLogitsLoss_legacy_enum (__main__.TestCppApiParity) ... /usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
ok
test_torch_nn_functional_BCEWithLogitsLoss_legacy_enum_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCEWithLogitsLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_HingeEmbeddingLoss_margin_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_HingeEmbeddingLoss_margin_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_HingeEmbeddingLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_HingeEmbeddingLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_log_target (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_log_target_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_scalar_log_target (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_no_reduce_scalar_log_target_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_with_log_target_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_with_log_target_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_with_target_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_KLDivLoss_with_target_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce_complex (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce_complex_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_L1Loss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MSELoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MSELoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MSELoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MSELoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_0d_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_0d_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_1d_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_1d_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_index_neg (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_index_neg_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelSoftMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelSoftMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelSoftMarginLoss_weights_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiLabelSoftMarginLoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_1d_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_1d_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_margin_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_margin_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_p_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_p_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_weights_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_MultiMarginLoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss2d_no_reduce_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLossNd_no_reduce_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_neg (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_neg_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding122112_3dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding122112_3dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding1221_2dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding1221_2dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding12_1dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding12_1dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding2322_2dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding2322_2dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding31_1dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding31_1dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding322112_3dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding322112_3dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding332122_3dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding332122_3dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding3331_2dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding3331_2dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding33_1dcircular (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_Padding33_1dcircular_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_PoissonNLLLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_PoissonNLLLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_beta (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_beta_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_no_reduce_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_zero_beta (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SmoothL1Loss_zero_beta_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SoftMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_SoftMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_2d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_shared_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_shared_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_tuple_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_tuple_2d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_tuple_2d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bicubic_tuple_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_2d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_shared_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_shared_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_tuple_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_tuple_2d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_tuple_2d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_bilinear_tuple_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_1d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_scale_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_scale_1d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_scale_1d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_scale_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_tuple_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_linear_tuple_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_1d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_1d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d_launch_configs (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d_launch_configs_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_2d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_3d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_3d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_scale_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_1d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_1d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_2d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_2d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_nearest_tuple_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d_alert_nondeterministic (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d_alert_nondeterministic_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d_zero_dim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_3d_zero_dim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_scale_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_scale_3d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_scale_3d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_scale_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_tuple_3d (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_tuple_3d_align_corners (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_tuple_3d_align_corners_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_interpolate_trilinear_tuple_3d_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_dim0 (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_dim0_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_dim3 (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_dim3_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_lastdim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_lastdim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_scalar_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_spatial (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_spatial_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_spatial_special (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_log_softmax_spatial_special_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_multimarginloss_1d_input_0d_target_no_reduce (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_multimarginloss_1d_input_0d_target_no_reduce_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_sample_functional_has_parity (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_sample_functional_has_parity_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_sample_functional_no_parity (__main__.TestCppApiParity) ... expected failure
test_torch_nn_functional_sample_functional_no_parity_cuda (__main__.TestCppApiParity) ... expected failure
test_torch_nn_functional_softmax_functional_dim0 (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_functional_dim0_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_functional_softmax_functional_dim3 (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_functional_dim3_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_functional_softmax_functional_scalar (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_functional_scalar_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_functional_softmax_lastdim (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_lastdim_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_lastdim_dtype (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_lastdim_dtype_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_functional_softmax_spatial (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_spatial_cuda (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_spatial_dtype (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_spatial_dtype_cuda (__main__.TestCppApiParity) ... skipped 'Excluded from CUDA tests'
test_torch_nn_functional_softmax_spatial_special (__main__.TestCppApiParity) ... ok
test_torch_nn_functional_softmax_spatial_special_cuda (__main__.TestCppApiParity) ... ok
----------------------------------------------------------------------
Ran 884 tests in 12.588s
OK (skipped=8, expected failures=4)
Running test_cpp_extensions_aot_no_ninja ... [2021-04-23 13:19:47.097836]
Fail to import hypothesis in common_utils, tests are not derandomized
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp -> None ignored
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
Total number of unsupported CUDA function calls: 0
Total number of replaced kernel launches: 5
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp -> None ignored
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
Total number of unsupported CUDA function calls: 0
Total number of replaced kernel launches: 3
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torch_test_cpp_extension
copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-3.6/torch_test_cpp_extension
running build_ext
building 'torch_test_cpp_extension.cpp' extension
creating build/temp.linux-x86_64-3.6
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c extension.cpp -o build/temp.linux-x86_64-3.6/extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cpp.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.msnpu' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c msnpu_extension.cpp -o build/temp.linux-x86_64-3.6/msnpu_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=msnpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from msnpu_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/msnpu_extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/msnpu.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.rng' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c rng_extension.cpp -o build/temp.linux-x86_64-3.6/rng_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from rng_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:8:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256_base.h:888:0: warning: ignoring #pragma unroll [-Wunknown-pragmas]
# pragma unroll
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:50:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::quint8&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::quint8& val) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:46:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::qint8&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::qint8& val) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:42:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::qint32&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::qint32& val) {
^~~~~~~~
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256_base.h:24:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:8,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1122:9: warning: ‘T abs_impl(T) [with T = unsigned char]’ defined but not used [-Wunused-function]
uint8_t abs_impl(uint8_t v) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1110:11: warning: ‘scalar_t calc_igammac(scalar_t, scalar_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
c10::Half calc_igammac<c10::Half>(c10::Half a, c10::Half x) {
^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1105:15: warning: ‘scalar_t calc_igammac(scalar_t, scalar_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
c10::BFloat16 calc_igammac<c10::BFloat16>(c10::BFloat16 a, c10::BFloat16 x) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1100:11: warning: ‘scalar_t calc_igamma(scalar_t, scalar_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
c10::Half calc_igamma<c10::Half>(c10::Half a, c10::Half x) {
^~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1095:15: warning: ‘scalar_t calc_igamma(scalar_t, scalar_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
c10::BFloat16 calc_igamma<c10::BFloat16>(c10::BFloat16 a, c10::BFloat16 x) {
^~~~~~~~~~~~~~~~~~~~~~~~~~
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/rng_extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/rng.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.cuda' extension
creating build/temp.linux-x86_64-3.6/home
creating build/temp.linux-x86_64-3.6/home/luke
creating build/temp.linux-x86_64-3.6/home/luke/Projects
creating build/temp.linux-x86_64-3.6/home/luke/Projects/Neural
creating build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch
creating build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test
creating build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions
/opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip -o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
/opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip -o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -L/opt/rocm/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -lamdhip64 -lc10_hip -ltorch_hip -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cuda.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.torch_library' extension
/opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -Iself_compiler_include_dirs_test -I/usr/include/python3.6m -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -o build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=torch_library -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -L/opt/rocm/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -lamdhip64 -lc10_hip -ltorch_hip -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/torch_library.cpython-36m-x86_64-linux-gnu.so
running install_lib
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cpp.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/torch_library.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/msnpu.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/rng.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cuda.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
running install_egg_info
running egg_info
writing torch_test_cpp_extension.egg-info/PKG-INFO
writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt
writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt
reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt'
writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt'
removing './install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension-0.0.0-py3.6.egg-info' (and everything under it)
Copying torch_test_cpp_extension.egg-info to ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension-0.0.0-py3.6.egg-info
running install_scripts
Successfully preprocessed all matching files.
Successfully preprocessed all matching files.
running install
running build
running build_ext
building 'no_python_abi_suffix_test' extension
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-3.6/no_python_abi_suffix_test.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/no_python_abi_suffix_test.so
running install_lib
copying build/lib.linux-x86_64-3.6/no_python_abi_suffix_test.so -> ./install/usr/local/lib/python3.6/dist-packages
running install_egg_info
running egg_info
writing no_python_abi_suffix_test.egg-info/PKG-INFO
writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt
writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt
reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt'
writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt'
removing './install/usr/local/lib/python3.6/dist-packages/no_python_abi_suffix_test-0.0.0-py3.6.egg-info' (and everything under it)
Copying no_python_abi_suffix_test.egg-info to ./install/usr/local/lib/python3.6/dist-packages/no_python_abi_suffix_test-0.0.0-py3.6.egg-info
running install_scripts
Executing ['/usr/bin/python3', 'test_cpp_extensions_aot.py', '-v'] ... [2021-04-23 13:21:34.183636]
test_backward (__main__.TestCppExtensionAOT) ... ok
test_cuda_extension (__main__.TestCppExtensionAOT) ... ok
test_extension_function (__main__.TestCppExtensionAOT) ... ok
test_extension_module (__main__.TestCppExtensionAOT) ... ok
test_no_python_abi_suffix_sets_the_correct_library_name (__main__.TestCppExtensionAOT) ... ok
test_optional (__main__.TestCppExtensionAOT) ... ok
test_add (__main__.TestMSNPUTensor) ... ok
test_conv_backend_override (__main__.TestMSNPUTensor) ... ok
test_unregistered (__main__.TestMSNPUTensor) ... ok
test_zeros (__main__.TestMSNPUTensor) ... ok
test_rng (__main__.TestRNGExtension) ... ok
test_torch_library (__main__.TestTorchLibrary) ... ok
----------------------------------------------------------------------
Ran 12 tests in 0.119s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_cpp_extensions_aot_ninja ... [2021-04-23 13:21:35.371647]
Fail to import hypothesis in common_utils, tests are not derandomized
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp -> None ignored
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
Total number of unsupported CUDA function calls: 0
Total number of replaced kernel launches: 5
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel2.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension_kernel.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.hpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/doubler.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp -> None ignored
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cudnn_extension_hip.cpp skipped
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_c10d_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test/tmp.h ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu ok
Total number of unsupported CUDA function calls: 0
Total number of replaced kernel launches: 3
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torch_test_cpp_extension
copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-3.6/torch_test_cpp_extension
running build_ext
building 'torch_test_cpp_extension.cpp' extension
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Successfully preprocessed all matching files.
Successfully preprocessed all matching files.
[1/1] c++ -MMD -MF /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/extension.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cpp.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.msnpu' extension
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] c++ -MMD -MF /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/msnpu_extension.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/msnpu_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=msnpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/msnpu_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/msnpu_extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/msnpu.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.rng' extension
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] c++ -MMD -MF /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/rng_extension.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/rng_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:8:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256_base.h:888:0: warning: ignoring #pragma unroll [-Wunknown-pragmas]
# pragma unroll
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:50:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::quint8&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::quint8& val) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:46:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::qint8&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::qint8& val) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:42:16: warning: ‘std::ostream& at::vec256::{anonymous}::operator<<(std::ostream&, const c10::qint32&)’ defined but not used [-Wunused-function]
std::ostream& operator<<(std::ostream& stream, const c10::qint32& val) {
^~~~~~~~
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256_base.h:24:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cpu/vec256/vec256.h:8,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/Loops.h:35,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:7,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/rng_extension.cpp:6:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1122:9: warning: ‘T abs_impl(T) [with T = unsigned char]’ defined but not used [-Wunused-function]
uint8_t abs_impl(uint8_t v) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1110:11: warning: ‘scalar_t calc_igammac(scalar_t, scalar_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
c10::Half calc_igammac<c10::Half>(c10::Half a, c10::Half x) {
^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1105:15: warning: ‘scalar_t calc_igammac(scalar_t, scalar_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
c10::BFloat16 calc_igammac<c10::BFloat16>(c10::BFloat16 a, c10::BFloat16 x) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1100:11: warning: ‘scalar_t calc_igamma(scalar_t, scalar_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
c10::Half calc_igamma<c10::Half>(c10::Half a, c10::Half x) {
^~~~~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/native/Math.h:1095:15: warning: ‘scalar_t calc_igamma(scalar_t, scalar_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
c10::BFloat16 calc_igamma<c10::BFloat16>(c10::BFloat16 a, c10::BFloat16 x) {
^~~~~~~~~~~~~~~~~~~~~~~~~~
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/rng_extension.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/rng.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.cuda' extension
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test
creating /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.o -fPIC -D__HIP_PLATFORM_HCC__=1 -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
[2/3] /opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.hip -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
[3/3] /opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.hip -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cuda -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel2.o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension_kernel.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -L/opt/rocm/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -lamdhip64 -lc10_hip -ltorch_hip -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cuda.cpython-36m-x86_64-linux-gnu.so
building 'torch_test_cpp_extension.torch_library' extension
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] /opt/rocm/bin/hipcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/lib/python3.6/dist-packages/torch/include/THH -I/opt/rocm/include -I/opt/rocm/miopen/include -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions/self_compiler_include_dirs_test -I/usr/include/python3.6m -c -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.cu -o /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_library -D_GLIBCXX_USE_CXX11_ABI=0 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -std=c++14
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/build/temp.linux-x86_64-3.6/home/luke/Projects/Neural/pytorch/test/cpp_extensions/torch_library.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -L/opt/rocm/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -lamdhip64 -lc10_hip -ltorch_hip -o build/lib.linux-x86_64-3.6/torch_test_cpp_extension/torch_library.cpython-36m-x86_64-linux-gnu.so
running install_lib
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cpp.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/torch_library.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/msnpu.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/rng.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
copying build/lib.linux-x86_64-3.6/torch_test_cpp_extension/cuda.cpython-36m-x86_64-linux-gnu.so -> ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension
running install_egg_info
running egg_info
writing torch_test_cpp_extension.egg-info/PKG-INFO
writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt
writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt
reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt'
writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt'
removing './install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension-0.0.0-py3.6.egg-info' (and everything under it)
Copying torch_test_cpp_extension.egg-info to ./install/usr/local/lib/python3.6/dist-packages/torch_test_cpp_extension-0.0.0-py3.6.egg-info
running install_scripts
running install
running build
running build_ext
building 'no_python_abi_suffix_test' extension
Emitting ninja build file /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/luke/Projects/Neural/pytorch/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-3.6/no_python_abi_suffix_test.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/no_python_abi_suffix_test.so
running install_lib
copying build/lib.linux-x86_64-3.6/no_python_abi_suffix_test.so -> ./install/usr/local/lib/python3.6/dist-packages
running install_egg_info
running egg_info
writing no_python_abi_suffix_test.egg-info/PKG-INFO
writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt
writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt
reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt'
writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt'
removing './install/usr/local/lib/python3.6/dist-packages/no_python_abi_suffix_test-0.0.0-py3.6.egg-info' (and everything under it)
Copying no_python_abi_suffix_test.egg-info to ./install/usr/local/lib/python3.6/dist-packages/no_python_abi_suffix_test-0.0.0-py3.6.egg-info
running install_scripts
Executing ['/usr/bin/python3', 'test_cpp_extensions_aot.py', '-v'] ... [2021-04-23 13:22:57.903864]
test_backward (__main__.TestCppExtensionAOT) ... ok
test_cuda_extension (__main__.TestCppExtensionAOT) ... ok
test_extension_function (__main__.TestCppExtensionAOT) ... ok
test_extension_module (__main__.TestCppExtensionAOT) ... ok
test_no_python_abi_suffix_sets_the_correct_library_name (__main__.TestCppExtensionAOT) ... ok
test_optional (__main__.TestCppExtensionAOT) ... ok
test_add (__main__.TestMSNPUTensor) ... ok
test_conv_backend_override (__main__.TestMSNPUTensor) ... ok
test_unregistered (__main__.TestMSNPUTensor) ... ok
test_zeros (__main__.TestMSNPUTensor) ... ok
test_rng (__main__.TestRNGExtension) ... ok
test_torch_library (__main__.TestTorchLibrary) ... ok
----------------------------------------------------------------------
Ran 12 tests in 0.118s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_cpp_extensions_jit ... [2021-04-23 13:22:59.058914]
Executing ['/usr/bin/python3', 'test_cpp_extensions_jit.py', '-v'] ... [2021-04-23 13:22:59.058962]
test_autograd_from_cpp (__main__.TestCppExtensionJIT) ... Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_compilation_error_formatting (__main__.TestCppExtensionJIT) ... ok
test_cpp_frontend_module_has_same_output_as_python (__main__.TestCppExtensionJIT) ... Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/cpp_frontend_extension...
Emitting ninja build file /home/luke/.cache/torch_extensions/cpp_frontend_extension/build.ninja...
Building extension module cpp_frontend_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF cpp_frontend_extension.o.d -DTORCH_EXTENSION_NAME=cpp_frontend_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cpp_frontend_extension.cpp -o cpp_frontend_extension.o
[2/2] c++ cpp_frontend_extension.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o cpp_frontend_extension.so
ok
test_cpp_frontend_module_has_up_to_date_attributes (__main__.TestCppExtensionJIT) ... ok
test_cpp_frontend_module_python_inter_op (__main__.TestCppExtensionJIT) ... ok
test_cpp_frontend_module_python_inter_op_with_cuda (__main__.TestCppExtensionJIT) ... ok
test_custom_compound_op_autograd (__main__.TestCppExtensionJIT) ... Loading extension module cpp_frontend_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module cpp_frontend_extension, skipping build step...
Loading extension module cpp_frontend_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module cpp_frontend_extension, skipping build step...
Loading extension module cpp_frontend_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module cpp_frontend_extension, skipping build step...
Loading extension module cpp_frontend_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/is_python_module...
Emitting ninja build file /home/luke/.cache/torch_extensions/is_python_module/build.ninja...
Building extension module is_python_module...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=is_python_module -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/is_python_module/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o is_python_module.so
/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py:380: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex.
f'Input #{idx} requires gradient and '
/usr/local/lib/python3.6/dist-packages/torch/autograd/gradcheck.py:380: UserWarning: Input #1 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex.
f'Input #{idx} requires gradient and '
ok
test_half_support (__main__.TestCppExtensionJIT) ... skipped 'Temporarily disabled'
test_inline_jit_compile_custom_op_cuda (__main__.TestCppExtensionJIT) ... skipped 'Temporarily disabled'
test_inline_jit_compile_extension_cuda (__main__.TestCppExtensionJIT) ... skipped 'Temporarily disabled'
test_inline_jit_compile_extension_multiple_sources_and_no_functions (__main__.TestCppExtensionJIT) ... Loading extension module is_python_module...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/inline_jit_extension...
Emitting ninja build file /home/luke/.cache/torch_extensions/inline_jit_extension/build.ninja...
Building extension module inline_jit_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/inline_jit_extension/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension.so
ok
test_inline_jit_compile_extension_throws_when_functions_is_bad (__main__.TestCppExtensionJIT) ... ok
test_inline_jit_compile_extension_with_functions_as_dict (__main__.TestCppExtensionJIT) ... Loading extension module inline_jit_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_dict...
Emitting ninja build file /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_dict/build.ninja...
Building extension module inline_jit_extension_with_functions_dict...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension_with_functions_dict -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_dict/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension_with_functions_dict.so
ok
test_inline_jit_compile_extension_with_functions_as_list (__main__.TestCppExtensionJIT) ... Loading extension module inline_jit_extension_with_functions_dict...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_list...
Emitting ninja build file /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_list/build.ninja...
Building extension module inline_jit_extension_with_functions_list...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension_with_functions_list -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/inline_jit_extension_with_functions_list/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension_with_functions_list.so
ok
test_jit_compile_extension (__main__.TestCppExtensionJIT) ... Loading extension module inline_jit_extension_with_functions_list...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/jit_extension...
Emitting ninja build file /home/luke/.cache/torch_extensions/jit_extension/build.ninja...
Building extension module jit_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF jit_extension2.o.d -DTORCH_EXTENSION_NAME=jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -g -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension2.cpp -o jit_extension2.o
[2/3] c++ -MMD -MF jit_extension.o.d -DTORCH_EXTENSION_NAME=jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -g -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/jit_extension.cpp -o jit_extension.o
[3/3] c++ jit_extension.o jit_extension2.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o jit_extension.so
ok
test_jit_cuda_archflags (__main__.TestCppExtensionJIT) ... skipped 'CUDA not found'
test_jit_cuda_extension (__main__.TestCppExtensionJIT) ... Loading extension module jit_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/torch_test_cuda_extension...
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp ok
/home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cu -> /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip skipped
Total number of unsupported CUDA function calls: 0
Total number of replaced kernel launches: 1
Detected CUDA files, patching ldflags
Emitting ninja build file /home/luke/.cache/torch_extensions/torch_test_cuda_extension/build.ninja...
Building extension module torch_test_cuda_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Successfully preprocessed all matching files.
[1/3] c++ -MMD -MF cuda_extension.o.d -DTORCH_EXTENSION_NAME=torch_test_cuda_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /opt/rocm/miopen/include -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/cuda_extension.cpp -o cuda_extension.o
[2/3] /opt/rocm/bin/hipcc -DWITH_HIP -DTORCH_EXTENSION_NAME=torch_test_cuda_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /opt/rocm/miopen/include -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -fPIC -D__HIP_PLATFORM_HCC__=1 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O2 --amdgpu-target=gfx803 --amdgpu-target=gfx900 --amdgpu-target=gfx906 --amdgpu-target=gfx908 -fno-gpu-rdc -c /home/luke/Projects/Neural/pytorch/test/cpp_extensions/hip_extension.hip -o hip_extension.cuda.o
[3/3] c++ cuda_extension.o hip_extension.cuda.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/opt/rocm/lib -lamdhip64 -o torch_test_cuda_extension.so
ok
test_jit_cudnn_extension (__main__.TestCppExtensionJIT) ... skipped 'CuDNN not found'
test_lenient_flag_handling_in_jit_extensions (__main__.TestCppExtensionJIT) ... Loading extension module torch_test_cuda_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/lenient_flag_handling_extension...
Emitting ninja build file /home/luke/.cache/torch_extensions/lenient_flag_handling_extension/build.ninja...
Building extension module lenient_flag_handling_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=lenient_flag_handling_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/luke/Projects/Neural/pytorch/test/cpp_extensions -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -g -O0 -Wall -c /home/luke/.cache/torch_extensions/lenient_flag_handling_extension/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o lenient_flag_handling_extension.so
ok
test_reload_jit_extension (__main__.TestCppExtensionJIT) ... Loading extension module lenient_flag_handling_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/reloaded_jit_extension...
Emitting ninja build file /home/luke/.cache/torch_extensions/reloaded_jit_extension/build.ninja...
Building extension module reloaded_jit_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/reloaded_jit_extension/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension.so
Loading extension module reloaded_jit_extension...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
The input conditions for extension module reloaded_jit_extension have changed. Bumping to version 1 and re-building as reloaded_jit_extension_v1...
Emitting ninja build file /home/luke/.cache/torch_extensions/reloaded_jit_extension/build.ninja...
Building extension module reloaded_jit_extension_v1...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension_v1 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/reloaded_jit_extension/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension_v1.so
Loading extension module reloaded_jit_extension_v1...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module reloaded_jit_extension_v1, skipping build step...
Loading extension module reloaded_jit_extension_v1...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
The input conditions for extension module reloaded_jit_extension have changed. Bumping to version 2 and re-building as reloaded_jit_extension_v2...
Emitting ninja build file /home/luke/.cache/torch_extensions/reloaded_jit_extension/build.ninja...
Building extension module reloaded_jit_extension_v2...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension_v2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/reloaded_jit_extension/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension_v2.so
ok
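The re-build behavior logged above (same inputs skip the build; changed inputs bump to `_v1`, then `_v2`) hashes the extension's build inputs and versions the module name. A minimal pure-Python sketch of that idea — a hypothetical simplification, not the actual `torch.utils.cpp_extension` implementation:

```python
import hashlib

class ExtensionVersioner:
    """Bump an extension's version when its build inputs change.

    Hypothetical sketch of the behavior in the log above: identical
    inputs -> same name (build skipped), changed inputs -> version
    bump and a re-build as <name>_v<N>.
    """

    def __init__(self):
        self.entries = {}  # name -> (version, input_hash)

    def bump_version_if_changed(self, name, source, flags):
        digest = hashlib.sha256()
        digest.update(source.encode())
        digest.update(" ".join(sorted(flags)).encode())
        h = digest.hexdigest()

        version, old_hash = self.entries.get(name, (-1, None))
        if h != old_hash:
            version += 1  # inputs changed: re-build under a new name
        self.entries[name] = (version, h)
        return name if version == 0 else f"{name}_v{version}"

versioner = ExtensionVersioner()
print(versioner.bump_version_if_changed("ext", "int f();", ["-O2"]))  # ext
print(versioner.bump_version_if_changed("ext", "int f();", ["-O2"]))  # ext (unchanged, no re-build)
print(versioner.bump_version_if_changed("ext", "int g();", ["-O2"]))  # ext_v1
```

The real mechanism additionally persists the version across interpreter restarts; this sketch keeps it in memory only.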
test_returns_shared_library_path_when_is_python_module_is_true (__main__.TestCppExtensionJIT) ... Loading extension module reloaded_jit_extension_v2...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
The input conditions for extension module is_python_module have changed. Bumping to version 1 and re-building as is_python_module_v1...
Emitting ninja build file /home/luke/.cache/torch_extensions/is_python_module/build.ninja...
Building extension module is_python_module_v1...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=is_python_module_v1 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/is_python_module/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o is_python_module_v1.so
ok
test_set_default_type_also_changes_aten_default_type (__main__.TestCppExtensionJIT) ... Loading extension module is_python_module_v1...
Using /home/luke/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /home/luke/.cache/torch_extensions/test_set_default_type...
Emitting ninja build file /home/luke/.cache/torch_extensions/test_set_default_type/build.ninja...
Building extension module test_set_default_type...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=test_set_default_type -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/luke/.cache/torch_extensions/test_set_default_type/main.cpp -o main.o
[2/2] c++ main.o -shared -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o test_set_default_type.so
ok
test_warning (__main__.TestCppExtensionJIT) ... Loading extension module test_set_default_type...
[W main.cpp:12] Warning: Error with CPUDoubleType (function foo)
[W main.cpp:12] Warning: Error with CPUDoubleType (function foo)
[W main.cpp:12] Warning: Error with CPUDoubleType (function foo)
[W main.cpp:12] Warning: Error with CPUDoubleType (function foo)
UserWarning: Error with torch.DoubleTensor (Triggered internally at /home/luke/.cache/torch_extensions/warn_mod/main.cpp:12.)
ok
----------------------------------------------------------------------
Ran 23 tests in 122.797s
OK (skipped=5)
Running distributed/test_c10d ... [2021-04-23 13:25:02.754916]
Executing ['/usr/bin/python3', 'distributed/test_c10d.py', '-v'] ... [2021-04-23 13:25:02.754965]
test_broadcast_coalesced_gloo_cpu (__main__.CommTest) ... Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_broadcast_coalesced_gloo_cuda (__main__.CommTest) ... ok
test_broadcast_coalesced_nccl (__main__.CommTest) ... ok
test_gloo_barrier_device_ids (__main__.CommTest) ... ok
test_nccl_barrier (__main__.CommTest) ... skipped 'Need at least 4 CUDA devices'
test_nccl_barrier_device_ids (__main__.CommTest) ... ok
test_nccl_barrier_device_ids_function_argument (__main__.CommTest) ... ok
test_nccl_barrier_timeout (__main__.CommTest) ... skipped 'Need at least 4 CUDA devices'
test_nccl_barrier_timeout_new_group (__main__.CommTest) ... skipped 'Need at least 4 CUDA devices'
test_nccl_barrier_timeout_new_group_non_member (__main__.CommTest) ... skipped 'Need at least 4 CUDA devices'
test_multi_limit_multi_dtype (__main__.ComputeBucketAssignmentTest) ... ok
test_multi_limit_single_dtype (__main__.ComputeBucketAssignmentTest) ... ok
test_single_limit_multi_dtype (__main__.ComputeBucketAssignmentTest) ... ok
test_single_limit_single_dtype (__main__.ComputeBucketAssignmentTest) ... ok
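The `ComputeBucketAssignmentTest` cases above exercise DDP's gradient bucketing: gradients are grouped by dtype and packed into buckets up to a byte limit. A stdlib sketch of that grouping, using `(numel, itemsize, dtype)` descriptors in place of tensors — hypothetical, not the actual `compute_bucket_assignment_by_size` implementation:

```python
def assign_buckets(tensors, limit_bytes):
    """Group (numel, itemsize, dtype) tensor descriptors into buckets.

    Tensors of different dtypes never share a bucket, and a bucket is
    closed once adding the next tensor would exceed limit_bytes.
    Hypothetical sketch of the DDP bucketing tested above.
    """
    buckets = []   # completed buckets: lists of tensor indices
    current = {}   # dtype -> (open bucket indices, bytes used)
    for idx, (numel, itemsize, dtype) in enumerate(tensors):
        size = numel * itemsize
        indices, used = current.get(dtype, ([], 0))
        if indices and used + size > limit_bytes:
            buckets.append(indices)      # close the full bucket
            indices, used = [], 0
        indices.append(idx)
        current[dtype] = (indices, used + size)
    buckets.extend(indices for indices, _ in current.values())
    return buckets

# Four float32 gradients of 100 elements (400 bytes each) against a
# 256-byte limit: each oversized tensor still gets a bucket of its own.
print(assign_buckets([(100, 4, "float32")] * 4, 256))  # [[0], [1], [2], [3]]
```

Smaller limits mean earlier allreduces but more launches; the multi-dtype tests check that, say, float and double gradients land in separate buckets even under one shared limit.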
test_accumulate_gradients_module (__main__.DistributedDataParallelTest) ... ok
test_accumulate_gradients_module_with_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_accumulate_gradients_no_sync (__main__.DistributedDataParallelTest) ... ok
test_accumulate_gradients_no_sync_allreduce_hook (__main__.DistributedDataParallelTest) ... ok
test_accumulate_gradients_no_sync_allreduce_with_then_hook (__main__.DistributedDataParallelTest) ... ok
test_accumulate_gradients_no_sync_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_arbitrary_forward_return_value (__main__.DistributedDataParallelTest) ... ok
test_arbitrary_forward_return_value_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_builtin_ddp_comm_hooks_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_builtin_ddp_comm_hooks_nccl_grad_is_view (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_allreduce_hook_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_allreduce_hook_nccl_grad_is_view (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_allreduce_with_then_hook_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_future_passing_cpu (__main__.DistributedDataParallelTest) ... ok
test_ddp_comm_hook_future_passing_gpu_gloo (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_future_passing_gpu_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_ddp_comm_hook_register_just_once (__main__.DistributedDataParallelTest) ... ok
test_ddp_comm_hook_sparse_gradients (__main__.DistributedDataParallelTest) ... ok
test_ddp_invalid_comm_hook_init (__main__.DistributedDataParallelTest) ... ok
test_ddp_invalid_comm_hook_return_type (__main__.DistributedDataParallelTest) ... ok
test_ddp_multi_device_module_config (__main__.DistributedDataParallelTest) ... skipped 'Need at least 4 CUDA devices'
test_ddp_with_lazy_parameters (__main__.DistributedDataParallelTest) ... ok
test_default_ddp_comm_hooks_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_default_ddp_comm_hooks_nccl_is_view (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_failure_recovery (__main__.DistributedDataParallelTest) ... ok
test_find_unused_parameters_kwarg (__main__.DistributedDataParallelTest) ... ok
test_find_unused_parameters_kwarg_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_find_unused_parameters_when_unused_parameters_empty (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_fp16 (__main__.DistributedDataParallelTest) ... ok
test_fp16_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_global_local_unused_params_grad (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_global_local_unused_params_grad_with_grad_is_view (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_gloo_backend_1gpu_module_device_ids_integer_list (__main__.DistributedDataParallelTest) ... ok
test_gloo_backend_1gpu_module_device_ids_torch_device_list (__main__.DistributedDataParallelTest) ... ok
test_gloo_backend_2gpu_module (__main__.DistributedDataParallelTest) ... skipped 'Need at least 4 CUDA devices'
test_gloo_backend_4gpu_module (__main__.DistributedDataParallelTest) ... skipped 'Need at least 8 CUDA devices'
test_gloo_backend_cpu_module (__main__.DistributedDataParallelTest) ... ok
test_gloo_backend_cpu_module_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_grad_layout_1devicemodule_1replicaperprocess (__main__.DistributedDataParallelTest) ... ok
test_grad_layout_1devicemodule_2replicaperprocess (__main__.DistributedDataParallelTest) ... skipped 'Re-enable when DDP with multiple GPUs per process is confirmed to work'
test_grad_layout_2devicemodule (__main__.DistributedDataParallelTest) ... skipped 'Need at least 4 CUDA devices'
test_ignored_output (__main__.DistributedDataParallelTest) ... ok
test_ignored_output_with_unused_parameters (__main__.DistributedDataParallelTest) ... ok
test_invalid_powerSGD_state (__main__.DistributedDataParallelTest) ... ok
test_multiple_outputs_multiple_backward (__main__.DistributedDataParallelTest) ... ok
test_multiple_outputs_multiple_backward_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_nccl_backend_1gpu_module_device_ids_integer_list (__main__.DistributedDataParallelTest) ... ok
test_nccl_backend_1gpu_module_device_ids_torch_device_list (__main__.DistributedDataParallelTest) ... ok
test_nccl_backend_2gpu_module (__main__.DistributedDataParallelTest) ... skipped 'Need at least 4 CUDA devices'
test_nccl_backend_4gpu_module (__main__.DistributedDataParallelTest) ... skipped 'Need at least 8 CUDA devices'
test_no_grad (__main__.DistributedDataParallelTest) ... ok
test_param_layout_mismatch_error (__main__.DistributedDataParallelTest) ... ok
test_pass_default_pg (__main__.DistributedDataParallelTest) ... ok
test_powerSGD_ddp_comm_hook_nccl (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_powerSGD_ddp_comm_hook_nccl_grad_is_view (__main__.DistributedDataParallelTest) ... skipped 'Need at least 2 CUDA devices'
test_save_load_checkpoint (__main__.DistributedDataParallelTest) ... ok
test_sparse_gradients (__main__.DistributedDataParallelTest) ... ok
test_sparse_gradients_grad_is_view (__main__.DistributedDataParallelTest) ... ok
test_set_get (__main__.FileStoreTest) ... ok
test_invalid_nccl_blocking_wait_env (__main__.NcclErrorHandlingTest) ... skipped 'Need at least 3 CUDA devices'
test_nccl_blocking_wait_with_barrier (__main__.NcclErrorHandlingTest) ... skipped 'Need at least 3 CUDA devices'
test_nccl_errors_blocking_abort (__main__.NcclErrorHandlingTest) ... ok
test_nccl_errors_blocking_clean_exit (__main__.NcclErrorHandlingTest) ... skipped 'Need at least 3 CUDA devices'
test_nccl_errors_blocking_nonzero_exit (__main__.NcclErrorHandlingTest) ... ok
test_nccl_errors_blocking_sigkill (__main__.NcclErrorHandlingTest) ... ok
test_nccl_errors_blocking_sigterm (__main__.NcclErrorHandlingTest) ... ok
test_nccl_errors_nonblocking (__main__.NcclErrorHandlingTest) ... skipped 'Need at least 3 CUDA devices'
test_nccl_timeout (__main__.NcclErrorHandlingTest) ... skipped 'Need at least 3 CUDA devices'
test_set_get (__main__.PrefixFileStoreTest) ... ok
test_set_get (__main__.PrefixTCPStoreTest) ... ok
test_allgather_basics (__main__.ProcessGroupGlooTest) ... ok
test_allgather_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_allgather_checks (__main__.ProcessGroupGlooTest) ... ok
test_allgather_coalesced_checks (__main__.ProcessGroupGlooTest) ... ok
test_allgather_stress (__main__.ProcessGroupGlooTest) ... ok
test_allgather_stress_cuda (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_basics (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_checks (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_coalesced_basics (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_coalesced_checks (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_coalesced_checks_cuda (__main__.ProcessGroupGlooTest) ... ERROR:root:Caught exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 285, in wrapper
fn()
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 99, in wrapper
return func(*args, **kwargs)
File "distributed/test_c10d.py", line 959, in test_allreduce_coalesced_checks_cuda
pg.allreduce_coalesced([t1.cuda(), t1.cuda()], opts)
RuntimeError: HIP out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.98 GiB total capacity; 0 bytes already allocated; 15.98 GiB free; 0 bytes reserved in total by PyTorch)
exiting process with exit code: 10
ERROR:root:Caught exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 285, in wrapper
fn()
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 99, in wrapper
return func(*args, **kwargs)
File "distributed/test_c10d.py", line 959, in test_allreduce_coalesced_checks_cuda
pg.allreduce_coalesced([t1.cuda(), t1.cuda()], opts)
RuntimeError: HIP out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.98 GiB total capacity; 0 bytes already allocated; 15.98 GiB free; 0 bytes reserved in total by PyTorch)
exiting process with exit code: 10
ERROR:root:Caught exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 285, in wrapper
fn()
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 99, in wrapper
return func(*args, **kwargs)
File "distributed/test_c10d.py", line 959, in test_allreduce_coalesced_checks_cuda
pg.allreduce_coalesced([t1.cuda(), t1.cuda()], opts)
RuntimeError: HIP out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.98 GiB total capacity; 0 bytes already allocated; 15.98 GiB free; 0 bytes reserved in total by PyTorch)
exiting process with exit code: 10
ERROR:root:Caught exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 285, in wrapper
fn()
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 99, in wrapper
return func(*args, **kwargs)
File "distributed/test_c10d.py", line 959, in test_allreduce_coalesced_checks_cuda
pg.allreduce_coalesced([t1.cuda(), t1.cuda()], opts)
RuntimeError: HIP out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.98 GiB total capacity; 0 bytes already allocated; 15.98 GiB free; 0 bytes reserved in total by PyTorch)
exiting process with exit code: 10
ERROR
test_allreduce_coalesced_stress (__main__.ProcessGroupGlooTest) ... Process 0 terminated with exit code 10, terminating remaining processes.
ok
test_allreduce_stress (__main__.ProcessGroupGlooTest) ... ok
test_allreduce_stress_cuda (__main__.ProcessGroupGlooTest) ... ok
test_barrier_implies_wait (__main__.ProcessGroupGlooTest) ... ok
test_broadcast_basics (__main__.ProcessGroupGlooTest) ... ok
test_broadcast_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_broadcast_checks (__main__.ProcessGroupGlooTest) ... ok
test_broadcast_stress (__main__.ProcessGroupGlooTest) ... ok
test_broadcast_stress_cuda (__main__.ProcessGroupGlooTest) ... ok
test_empty_tensors (__main__.ProcessGroupGlooTest) ... ok
test_gather_basics (__main__.ProcessGroupGlooTest) ... ok
test_gather_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_gather_checks (__main__.ProcessGroupGlooTest) ... ok
test_gather_stress (__main__.ProcessGroupGlooTest) ... ok
test_gather_stress_cuda (__main__.ProcessGroupGlooTest) ... ok
test_multi_device_constructor (__main__.ProcessGroupGlooTest) ... ok
test_reduce_basics (__main__.ProcessGroupGlooTest) ... ok
test_reduce_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_reduce_checks (__main__.ProcessGroupGlooTest) ... ok
test_reduce_stress (__main__.ProcessGroupGlooTest) ... ok
test_reduce_stress_cuda (__main__.ProcessGroupGlooTest) ... ok
test_round_robin (__main__.ProcessGroupGlooTest) ... ok
test_round_robin_create_destroy (__main__.ProcessGroupGlooTest) ... ok
test_scatter_basics (__main__.ProcessGroupGlooTest) ... ok
test_scatter_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_scatter_checks (__main__.ProcessGroupGlooTest) ... ok
test_scatter_stress (__main__.ProcessGroupGlooTest) ... ok
test_scatter_stress_cuda (__main__.ProcessGroupGlooTest) ... skipped 'Test is flaky, see https://github.com/pytorch/pytorch/issues/15963'
test_send_recv_all_to_all (__main__.ProcessGroupGlooTest) ... ok
test_sparse_allreduce_basics (__main__.ProcessGroupGlooTest) ... ok
test_sparse_allreduce_basics_cuda (__main__.ProcessGroupGlooTest) ... ok
test_sparse_allreduce_checks (__main__.ProcessGroupGlooTest) ... ok
test_init_no_gpus (__main__.ProcessGroupNCCLNoGPUTest) ... skipped 'GPUs are available, skipping test'
test_allgather_ops (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_allreduce_ops (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_barrier (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_broadcast_ops (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_empty_tensors (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_reduce_ops (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_reduce_scatter_ops (__main__.ProcessGroupNCCLTest) ... skipped 'NCCL test requires 2+ GPUs'
test_set_get (__main__.PythonStoreTest) ... ok
test_ddp_comm_hook_multiple_replica_check (__main__.ReducerTest) ... ok
test_forward_backward_multi_replica (__main__.ReducerTest) ... ok
test_forward_backward_optimizer (__main__.ReducerTest) ... [W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
ok
test_forward_backward_single_replica (__main__.ReducerTest) ... ok
test_forward_backward_unused_parameters (__main__.ReducerTest) ... ok
test_multi_dtype_multi_bucket (__main__.ReducerTest) ... ok
test_multi_dtype_single_bucket (__main__.ReducerTest) ... ok
test_single_dtype_single_bucket (__main__.ReducerTest) ... ok
test_common_errors (__main__.RendezvousEnvTest) ... ok
test_nominal (__main__.RendezvousEnvTest) ... ok
test_common_errors (__main__.RendezvousFileTest) ... ok
test_nominal (__main__.RendezvousFileTest) ... ok
test_common_errors (__main__.RendezvousTCPTest) ... ok
test_nominal (__main__.RendezvousTCPTest) ... ok
test_tcp_store_timeout_set (__main__.RendezvousTCPTest) ... ok
test_unknown_handler (__main__.RendezvousTest) ... ok
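The rendezvous tests above resolve init methods such as `tcp://`, `file://`, and `env://`, and an unrecognized scheme must raise (that is what `test_unknown_handler` checks). The scheme dispatch can be sketched with the stdlib URL parser — a hypothetical handler table, not the actual `torch.distributed.rendezvous` code:

```python
from urllib.parse import urlparse, parse_qs

HANDLERS = {"tcp", "file", "env"}  # schemes this sketch recognizes

def rendezvous(url):
    """Parse a rendezvous URL and dispatch on its scheme."""
    parsed = urlparse(url)
    if parsed.scheme not in HANDLERS:
        raise RuntimeError(f"No rendezvous handler for {parsed.scheme}://")
    # Flatten query params like rank=0&world_size=2 to single values.
    query = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return parsed.scheme, parsed.hostname, parsed.port, query

print(rendezvous("tcp://127.0.0.1:29500?rank=0&world_size=2"))
# -> ('tcp', '127.0.0.1', 29500, {'rank': '0', 'world_size': '2'})
```

The timeout test above corresponds to an extra `timeout` argument threaded through to the TCPStore constructor; this sketch stops at parsing.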
test_address_already_in_use (__main__.TCPStoreTest) ... ok
test_numkeys_delkeys (__main__.TCPStoreTest) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_set_get (__main__.TCPStoreTest) ... ok
test_default_store_timeout_gloo (__main__.TimeoutTest) ... ok
test_default_store_timeout_nccl (__main__.TimeoutTest) ... ok
======================================================================
ERROR: test_allreduce_coalesced_checks_cuda (__main__.ProcessGroupGlooTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 282, in wrapper
self._join_processes(fn)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 399, in _join_processes
self._check_return_codes(elapsed_time)
File "/usr/local/lib/python3.6/dist-packages/torch/testing/_internal/common_distributed.py", line 435, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Processes 0 exited with error code 10
----------------------------------------------------------------------
Ran 158 tests in 33.137s
FAILED (errors=1, skipped=40)
distributed/test_c10d failed!
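The failure above comes from the multi-process harness: each distributed test spawns worker processes, a worker that catches an exception exits with code 10 (as in the HIP OOM traceback), and the parent's `_join_processes`/`_check_return_codes` turns the nonzero exit codes into `Processes 0 exited with error code 10`. A minimal stdlib sketch of that join-and-check pattern — a hypothetical simplification, not the `common_distributed` code itself:

```python
import multiprocessing as mp
import sys

TEST_ERROR_EXIT_CODE = 10  # assumed convention matching the log above

def worker(rank, should_fail):
    # A real test worker would run the test body here; on a caught
    # exception it exits with the error code instead of raising.
    if should_fail and rank == 0:
        sys.exit(TEST_ERROR_EXIT_CODE)

def join_and_check(should_fail):
    procs = [mp.Process(target=worker, args=(rank, should_fail))
             for rank in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    failed = [i for i, p in enumerate(procs) if p.exitcode != 0]
    if failed:
        raise RuntimeError(
            f"Processes {','.join(map(str, failed))} exited with error code "
            f"{procs[failed[0]].exitcode}"
        )

if __name__ == "__main__":
    join_and_check(should_fail=False)  # all workers exit 0: no error
    try:
        join_and_check(should_fail=True)
    except RuntimeError as e:
        print(e)  # Processes 0 exited with error code 10
```

Only rank 0 fails here, mirroring the log, where the remaining workers are terminated once the first bad exit code is seen.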
Running distributed/test_jit_c10d ... [2021-04-23 13:25:36.616533]
Executing ['/usr/bin/python3', 'distributed/test_jit_c10d.py', '-v'] ... [2021-04-23 13:25:36.616570]
test_frontend_singleton (__main__.C10dFrontendJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_process_group_as_module_member (__main__.C10dProcessGroupSerialization) ... skipped 'NCCL test requires 2+ GPUs'
test_init_process_group_nccl_as_base_process_group_torchbind (__main__.ProcessGroupNCCLJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_init_process_group_nccl_torchbind (__main__.ProcessGroupNCCLJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_process_group_nccl_as_base_process_group_torchbind_alltoall (__main__.ProcessGroupNCCLJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_process_group_nccl_serialization (__main__.ProcessGroupNCCLJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_process_group_nccl_torchbind_alltoall (__main__.ProcessGroupNCCLJitTest) ... skipped 'NCCL test requires 2+ GPUs'
test_create_file_store (__main__.StoreTest) ... ok
test_create_prefix_store (__main__.StoreTest) ... ok
----------------------------------------------------------------------
Ran 9 tests in 0.109s
OK (skipped=7)
Fail to import hypothesis in common_utils, tests are not derandomized
Running distributed/test_c10d_spawn ... [2021-04-23 13:25:37.563012]
Executing ['/usr/bin/python3', 'distributed/test_c10d_spawn.py', '-v'] ... [2021-04-23 13:25:37.563063]
test_cpu (__main__.DistributedDataParallelSingleProcessTest) ... ok
test_cuda (__main__.DistributedDataParallelSingleProcessTest) ... ok
test_rnn (__main__.DistributedDataParallelSingleProcessTest) ... skipped "test doesn't currently work on the ROCm stack"
test_shared_allgather_chunk_gloo (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_allgather_gloo (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_allgather_nccl (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_allreduce_gloo (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_allreduce_nccl (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_broadcast_gloo (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_broadcast_nccl (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_shared_reduce_nccl (__main__.ProcessGroupShareTensorTest) ... skipped 'At least 2 CUDA GPUS needed'
test_all_gather (__main__.TestDistributedNNFunctions) ... ok
test_all_to_all (__main__.TestDistributedNNFunctions) ... ok
test_allreduce (__main__.TestDistributedNNFunctions) ... ok
test_broadcast (__main__.TestDistributedNNFunctions) ... ok
test_gather (__main__.TestDistributedNNFunctions) ... ok
test_reduce (__main__.TestDistributedNNFunctions) ... ok
test_scatter (__main__.TestDistributedNNFunctions) ... ok
----------------------------------------------------------------------
Ran 18 tests in 10.304s
OK (skipped=9)
Running test_cuda ... [2021-04-23 13:25:49.241773]
Executing ['/usr/bin/python3', 'test_cuda.py', '-v'] ... [2021-04-23 13:25:49.241819]
test_arithmetic_large_tensor (__main__.TestCuda) ... skipped 'was disabled due to not enough memory, but actually it always fail'
test_autocast_banned (__main__.TestCuda) ... ok
test_autocast_cache_leak (__main__.TestCuda) ... ok
test_autocast_cat_jit (__main__.TestCuda) ... ok
test_autocast_checkpointing (__main__.TestCuda) ... ok
test_autocast_custom_cast_inputs (__main__.TestCuda) ... ok
test_autocast_custom_enabled (__main__.TestCuda) ... ok
test_autocast_ignored_types (__main__.TestCuda) ... ok
test_autocast_methods_expect_builtin_promote (__main__.TestCuda) ... ok
test_autocast_methods_fp16 (__main__.TestCuda) ... [W Module.cpp:482] Warning: Disabling benchmark mode for MIOpen is NOT supported. Overriding value to True (function operator())
ok
test_autocast_methods_fp32 (__main__.TestCuda) ... ok
test_autocast_nn_fp16 (__main__.TestCuda) ... ok
test_autocast_nn_fp32 (__main__.TestCuda) ... ok
test_autocast_rnn (__main__.TestCuda) ... skipped "test doesn't currently work on the ROCm stack"
test_autocast_torch_expect_builtin_promote (__main__.TestCuda) ... ok
test_autocast_torch_fp16 (__main__.TestCuda) ... ok
test_autocast_torch_fp32 (__main__.TestCuda) ... ok
test_autocast_torch_need_autocast_promote (__main__.TestCuda) ... ok
test_autogpu (__main__.TestCuda) ... skipped 'only one GPU detected'
test_batch_norm_gather_stats (__main__.TestCuda) ... ok
test_bincount_ext (__main__.TestCuda) ... ok
test_caching_allocator_record_stream_oom (__main__.TestCuda)
allocations delayed by a record_stream call should still be freed on ... ok
test_caching_pinned_memory (__main__.TestCuda) ... ok
test_caching_pinned_memory_multi_gpu (__main__.TestCuda) ... skipped 'only one GPU detected'
test_cat_autogpu (__main__.TestCuda) ... skipped 'only one GPU detected'
test_check_error (__main__.TestCuda) ... ok
test_copy_device (__main__.TestCuda) ... skipped 'only one GPU detected'
test_copy_non_blocking (__main__.TestCuda) ... ok
test_copy_streams (__main__.TestCuda) ... skipped 'only one GPU detected'
test_cublas_allow_tf32_get_set (__main__.TestCuda) ... ok
test_cublas_multiple_threads_same_device (__main__.TestCuda) ... ok
test_cuda_device_memory_allocated (__main__.TestCuda) ... skipped 'Test needs multiple GPUs'
test_cuda_get_device_capability (__main__.TestCuda) ... ok
test_cuda_get_device_name (__main__.TestCuda) ... ok
test_cuda_init_race (__main__.TestCuda) ... skipped 'only one GPU detected'
test_cuda_kernel_loop_overflow (__main__.TestCuda) ... ok
test_cuda_kernel_loop_overflow_large (__main__.TestCuda) ... ok
test_cuda_memory_leak_detection (__main__.TestCuda) ... ok
test_cuda_memory_leak_detection_propagates_errors (__main__.TestCuda) ... ok
test_cuda_set_device (__main__.TestCuda) ... skipped 'detected only one GPU'
test_cuda_synchronize (__main__.TestCuda) ... ok
test_cudart_register (__main__.TestCuda) ... skipped "test doesn't currently work on the ROCm stack"
test_cudnn_allow_tf32_get_set (__main__.TestCuda) ... ok
test_cudnn_multiple_threads_same_device (__main__.TestCuda) ... skipped "test doesn't currently work on the ROCm stack"
test_current_stream (__main__.TestCuda) ... skipped 'detected only one GPU'
test_cusparse_multiple_threads_same_device (__main__.TestCuda) ... ok
test_default_stream (__main__.TestCuda) ... skipped 'detected only one GPU'
test_events (__main__.TestCuda) ... ok
test_events_multi_gpu_elapsed_time (__main__.TestCuda) ... skipped 'detected only one GPU'
test_events_multi_gpu_query (__main__.TestCuda) ... skipped 'detected only one GPU'
test_events_wait (__main__.TestCuda) ... skipped 'detected only one GPU'
test_gather_bool (__main__.TestCuda) ... ok
test_get_device_index (__main__.TestCuda) ... ok
test_get_set_rng_state_all (__main__.TestCuda) ... skipped 'only one GPU detected'
test_grad_scaling_accumulation (__main__.TestCuda) ... ok
test_grad_scaling_autocast (__main__.TestCuda) ... ok
test_grad_scaling_clipping (__main__.TestCuda) ... ok
test_grad_scaling_clipping_separate_unscale (__main__.TestCuda) ... ok
test_grad_scaling_device_as_key (__main__.TestCuda) ... skipped 'only one GPU detected'
test_grad_scaling_multigpu (__main__.TestCuda) ... skipped 'only one GPU detected'
test_grad_scaling_multiple (__main__.TestCuda) ... ok
test_grad_scaling_penalty (__main__.TestCuda) ... ok
test_grad_scaling_scale (__main__.TestCuda) ... skipped 'only one GPU detected'
test_grad_scaling_state_dict (__main__.TestCuda) ... ok
test_grad_scaling_unscale (__main__.TestCuda) ... ok
test_grad_scaling_unscale_sparse (__main__.TestCuda) ... ok
test_grad_scaling_update_scale (__main__.TestCuda) ... ok
test_graph_capture_simple (__main__.TestCuda) ... skipped 'CUDA >= 11.0 required for graphs'
test_graph_rng_distributions (__main__.TestCuda) ... skipped 'CUDA >= 11.0 required for graphs'
test_graph_rng_functional (__main__.TestCuda) ... skipped 'CUDA >= 11.0 required for graphs'
test_huge_index (__main__.TestCuda) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_large_trilu_indices (__main__.TestCuda) ... ok
test_load_nonexistent_device (__main__.TestCuda) ... ok
test_manual_seed (__main__.TestCuda) ... ok
test_max_large_axis (__main__.TestCuda) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_mean_fp16 (__main__.TestCuda) ... ok
test_memory_allocation (__main__.TestCuda) ... ok
test_memory_stats (__main__.TestCuda) ... ok
test_memory_stats_multigpu (__main__.TestCuda) ... skipped 'only one GPU detected'
test_min_max_inits (__main__.TestCuda) ... ok
test_multigpu_serialization_remap (__main__.TestCuda) ... skipped 'detected only one GPU'
test_multigpu_serialization_remap_dict (__main__.TestCuda) ... skipped 'detected only one GPU'
test_multigpu_storage_clone (__main__.TestCuda) ... skipped 'detected only one GPU'
test_multinomial_ext (__main__.TestCuda) ... ok
test_multinomial_invalid_probs_cuda (__main__.TestCuda) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_new (__main__.TestCuda) ... skipped 'only one GPU detected'
test_noncontiguous_pinned_memory (__main__.TestCuda) ... ok
test_norm_type_conversion (__main__.TestCuda) ... ok
test_nvtx (__main__.TestCuda) ... skipped "test doesn't currently work on the ROCm stack"
test_out_of_memory (__main__.TestCuda) ... ok
test_prod_large (__main__.TestCuda) ... ok
test_record_stream (__main__.TestCuda) ... ok
test_record_stream_on_shifted_view (__main__.TestCuda) ... ok
test_reduction_gpu_memory_accessing (__main__.TestCuda) ... ok
test_scatter_add_mult_index_base (__main__.TestCuda) ... ok
test_serialization_array_with_empty (__main__.TestCuda) ... ok
test_serialization_array_with_storage (__main__.TestCuda) ... ok
test_set_per_process_memory_fraction (__main__.TestCuda) ... ok
test_specify_improper_device_name (__main__.TestCuda) ... ok
test_stream_context (__main__.TestCuda) ... skipped 'detected only one GPU'
test_stream_event_device (__main__.TestCuda) ... skipped 'detected only one GPU'
test_stream_event_nogil (__main__.TestCuda) ... skipped 'detected only one GPU'
test_stream_event_repr (__main__.TestCuda) ... ok
test_streaming_backward_sync_graph_root (__main__.TestCuda) ... ok
test_streaming_backwards_device_transfer (__main__.TestCuda) ... skipped 'only one GPU detected'
test_streaming_backwards_multiple_streams (__main__.TestCuda) ... ok
test_streaming_backwards_sync (__main__.TestCuda) ... ok
test_streams (__main__.TestCuda) ... skipped "test doesn't currently work on the ROCm stack"
test_streams_multi_gpu (__main__.TestCuda) ... skipped 'detected only one GPU'
test_streams_multi_gpu_eq (__main__.TestCuda) ... skipped 'detected only one GPU'
test_streams_multi_gpu_query (__main__.TestCuda) ... skipped 'detected only one GPU'
test_streams_priority (__main__.TestCuda) ... skipped 'multi-GPU not supported'
test_sum_fp16 (__main__.TestCuda) ... ok
test_tensor_device (__main__.TestCuda) ... skipped 'multi-GPU not supported'
test_tensor_gather (__main__.TestCuda) ... /home/luke/Projects/Neural/pytorch/test/test_torch.py:1051: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at /pytorch/aten/src/ATen/native/Copy.cpp:219.)
torch.gather(src, dim, idx, out=expected.to(torch.int))
ok
test_tensor_scatter (__main__.TestCuda) ... ok
test_tensor_scatterAdd (__main__.TestCuda) ... ok
test_tensor_scatterAdd_complex (__main__.TestCuda) ... ok
test_tensor_scatterFill (__main__.TestCuda) ... ok
test_tensor_scatterFill_complex (__main__.TestCuda) ... ok
test_tensor_scatter_complex (__main__.TestCuda) ... ok
test_tiny_half_norm_ (__main__.TestCuda) ... ok
test_to_cpu_blocking_by_default (__main__.TestCuda) ... ok
test_to_non_blocking (__main__.TestCuda) ... ok
test_to_numpy (__main__.TestCuda) ... ok
test_torch_manual_seed_seeds_cuda_devices (__main__.TestCuda) ... ok
test_trilu_indices (__main__.TestCuda) ... ok
test_type_conversions (__main__.TestCuda) ... ok
test_broadcast_coalesced (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_broadcast_coalesced_dense_only (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_broadcast_coalesced_empty_tensors (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_broadcast_cpu (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_broadcast_gpu (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_gather (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_gather_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_gather_neg_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_matmul_device_mismatch (__main__.TestCudaComm) ... ok
test_memory_format_scatter_gather (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_reduce_add (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_reduce_add_coalesced (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_reduce_add_coalesced_dense_only (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_cpu (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_cpu_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_cpu_neg_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_cpu_sizes (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_gpu (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_gpu_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_gpu_neg_dim (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_gpu_sizes (__main__.TestCudaComm) ... skipped 'only one GPU detected'
test_scatter_namedtuple (__main__.TestCudaComm) ... skipped 'Test needs multiple GPUs'
----------------------------------------------------------------------
Ran 150 tests in 40.078s
OK (skipped=64)
Running test_jit_cuda_fuser ... [2021-04-23 13:26:30.740580]
Executing ['/usr/bin/python3', 'test_jit_cuda_fuser.py', '-v'] ... [2021-04-23 13:26:30.740626]
test_addcmul_ops (__main__.TestCudaFuser) ... ok
test_binary_ops (__main__.TestCudaFuser) ... ok
test_binary_ops_permutation (__main__.TestCudaFuser) ... ok
test_broadcasting_0 (__main__.TestCudaFuser) ... ok
test_broadcasting_1 (__main__.TestCudaFuser) ... ok
test_broadcasting_2 (__main__.TestCudaFuser) ... ok
test_broadcasting_3 (__main__.TestCudaFuser) ... ok
test_broadcasting_multiple_output (__main__.TestCudaFuser) ... skipped "broadcast on branches can't be resolved yet"
test_broadcasting_multiple_output_shape (__main__.TestCudaFuser) ... skipped 'Broadcast with different output not supported yet'
test_broadcasting_partition_logic_0 (__main__.TestCudaFuser) ... ok
test_broadcasting_partition_logic_1 (__main__.TestCudaFuser) ... ok
test_chunk (__main__.TestCudaFuser) ... ok
test_const (__main__.TestCudaFuser) ... ok
test_dynamic_size (__main__.TestCudaFuser) ... ok
test_half (__main__.TestCudaFuser) ... ok
test_profiling_node (__main__.TestCudaFuser) ... ok
test_pw_single_reduction_partition (__main__.TestCudaFuser) ... ok
test_random_topo (__main__.TestCudaFuser) ... ok
test_reduction (__main__.TestCudaFuser) ... skipped "test doesn't currently work on the ROCm stack"
test_reduction_dtype (__main__.TestCudaFuser) ... ok
test_reduction_half (__main__.TestCudaFuser) ... ok
test_reduction_multiple_output (__main__.TestCudaFuser) ... ok
test_reduction_permutation (__main__.TestCudaFuser) ... ok
test_reduction_sizes_op (__main__.TestCudaFuser) ... ok
test_scalar_input (__main__.TestCudaFuser) ... ok
test_single_reduction_broadcast (__main__.TestCudaFuser) ... ok
test_ternary_ops (__main__.TestCudaFuser) ... ok
test_unary_ops (__main__.TestCudaFuser) ... ok
test_autodiff_fallback (jit.test_fuser_common.TestFuserCommon) ... ok
test_context_manager_test (__main__.TestPassManagerCudaFuser) ... ok
test_register_fuser (__main__.TestPassManagerCudaFuser) ... ok
----------------------------------------------------------------------
Ran 31 tests in 205.755s
OK (skipped=3)
Running test_cuda_primary_ctx ... [2021-04-23 13:29:57.640671]
Executing ['/usr/bin/python3', 'test_cuda_primary_ctx.py', '-v', '--subprocess'] ... [2021-04-23 13:29:57.640721]
test_copy (__main__.TestCudaPrimaryCtx) ... skipped 'only one GPU detected'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
test_pin_memory (__main__.TestCudaPrimaryCtx) ... skipped 'only one GPU detected'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
test_str_repr (__main__.TestCudaPrimaryCtx) ... skipped 'only one GPU detected'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
Running test_dataloader ... [2021-04-23 13:30:00.031568]
Executing ['/usr/bin/python3', 'test_dataloader.py', '-v'] ... [2021-04-23 13:30:00.031616]
test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
test_add_dataset (__main__.TestConcatDataset) ... ok
test_concat_raises_index_error (__main__.TestConcatDataset) ... ok
test_concat_two_non_singletons (__main__.TestConcatDataset) ... ok
test_concat_two_non_singletons_with_empty (__main__.TestConcatDataset) ... ok
test_concat_two_singletons (__main__.TestConcatDataset) ... ok
test_iterable_dataset_err (__main__.TestConcatDataset) ... ok
test_conv_after_fork (__main__.TestConvAfterFork) ... ok
test_custom_batch_pin (__main__.TestCustomPinFn) ... skipped "test doesn't currently work on the ROCm stack"
test_custom_batch_pin_worker (__main__.TestCustomPinFn) ... skipped "test doesn't currently work on the ROCm stack"
test_batch_sampler (__main__.TestDataLoader) ... ok
test_buffer_shuffle_dataset (__main__.TestDataLoader) ... ok
test_builtin_collection_conversion (__main__.TestDataLoader) ... ok
test_bulk_loading_nobatch (__main__.TestDataLoader) ... ok
test_chain_iterable_style_dataset (__main__.TestDataLoader) ... ok
test_default_collate_bad_numpy_types (__main__.TestDataLoader) ... ok
test_default_collate_bad_sequence_type (__main__.TestDataLoader) ... ok
test_default_collate_dtype (__main__.TestDataLoader) ... ok
test_default_collate_numpy_memmap (__main__.TestDataLoader) ... /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py:63: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:143.)
return default_collate([torch.as_tensor(b) for b in batch])
ok
test_default_collate_shared_tensor (__main__.TestDataLoader) ... ok
test_distributed_sampler_invalid_rank (__main__.TestDataLoader) ... ok
test_duplicating_data_with_drop_last (__main__.TestDataLoader) ... ok
test_error (__main__.TestDataLoader) ... ok
test_error_in_init (__main__.TestDataLoader) ... ok
test_error_workers (__main__.TestDataLoader) ... ok
test_excessive_thread_creation_warning (__main__.TestDataLoader) ... ok
test_fd_limit_exceeded (__main__.TestDataLoader) ... ok
test_get_worker_info (__main__.TestDataLoader) ... ok
test_growing_dataset (__main__.TestDataLoader) ... ok
test_invalid_assign_after_init (__main__.TestDataLoader) ... ok
test_invalid_ctor_args_combinations (__main__.TestDataLoader) ... ok
test_iterable_style_dataset (__main__.TestDataLoader) ... ok
test_iterabledataset_len (__main__.TestDataLoader) ... ok
test_large_sampler_indices (__main__.TestDataLoader) ... ok
test_len (__main__.TestDataLoader) ... ok
test_multiple_dataloaders (__main__.TestDataLoader) ... ok
test_multiprocessing_contexts (__main__.TestDataLoader) ... ok
test_numpy (__main__.TestDataLoader) ... ok
test_numpy_scalars (__main__.TestDataLoader) ... ok
test_partial_workers (__main__.TestDataLoader)
Check that workers exit even if the iterator is not exhausted. ... ok
test_proper_exit (__main__.TestDataLoader)
There might be ConnectionResetError or leaked semaphore warning (due to dirty process exit), but they are all safe to ignore ... skipped 'psutil not found'
test_random_sampler (__main__.TestDataLoader) ... ok
test_random_sampler_len_with_replacement (__main__.TestDataLoader) ... ok
test_sampler (__main__.TestDataLoader) ... ok
test_sampler_reproducibility (__main__.TestDataLoader) ... ok
test_segfault (__main__.TestDataLoader) ... skipped 'temporarily disable until flaky failures are fixed'
test_seqential_batch_workers (__main__.TestDataLoader) ... ok
test_seqential_batch_workers_prefetch (__main__.TestDataLoader) ... ok
test_sequential_batch (__main__.TestDataLoader) ... ok
test_sequential_nonbatch (__main__.TestDataLoader) ... ok
test_sequential_pin_memory (__main__.TestDataLoader) ... ok
test_sequential_workers (__main__.TestDataLoader) ... ok
test_shuffle (__main__.TestDataLoader) ... ok
test_shuffle_batch (__main__.TestDataLoader) ... ok
test_shuffle_batch_none (__main__.TestDataLoader) ... ok
test_shuffle_batch_workers (__main__.TestDataLoader) ... ok
test_shuffle_batch_workers_prefetch (__main__.TestDataLoader) ... ok
test_shuffle_pin_memory (__main__.TestDataLoader) ... ok
test_shuffle_reproducibility (__main__.TestDataLoader) ... ok
test_shuffle_workers (__main__.TestDataLoader) ... ok
test_timeout (__main__.TestDataLoader) ... ok
test_typing (__main__.TestDataLoader) ... ok
test_worker_init_fn (__main__.TestDataLoader) ... ok
test_worker_seed (__main__.TestDataLoader) ... ok
test_worker_seed_reproducibility (__main__.TestDataLoader) ... ok
test_batch_sampler (__main__.TestDataLoaderPersistentWorkers) ... ok
test_buffer_shuffle_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok
test_builtin_collection_conversion (__main__.TestDataLoaderPersistentWorkers) ... ok
test_bulk_loading_nobatch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_chain_iterable_style_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok
test_dataset_not_reset (__main__.TestDataLoaderPersistentWorkers) ... ok
test_default_collate_bad_numpy_types (__main__.TestDataLoaderPersistentWorkers) ... ok
test_default_collate_bad_sequence_type (__main__.TestDataLoaderPersistentWorkers) ... ok
test_default_collate_dtype (__main__.TestDataLoaderPersistentWorkers) ... ok
test_default_collate_numpy_memmap (__main__.TestDataLoaderPersistentWorkers) ... ok
test_default_collate_shared_tensor (__main__.TestDataLoaderPersistentWorkers) ... ok
test_distributed_sampler_invalid_rank (__main__.TestDataLoaderPersistentWorkers) ... ok
test_duplicating_data_with_drop_last (__main__.TestDataLoaderPersistentWorkers) ... ok
test_error (__main__.TestDataLoaderPersistentWorkers) ... ok
test_error_in_init (__main__.TestDataLoaderPersistentWorkers) ... ok
test_error_workers (__main__.TestDataLoaderPersistentWorkers) ... ok
test_excessive_thread_creation_warning (__main__.TestDataLoaderPersistentWorkers) ... ok
test_fd_limit_exceeded (__main__.TestDataLoaderPersistentWorkers) ... ok
test_get_worker_info (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_growing_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok
test_invalid_assign_after_init (__main__.TestDataLoaderPersistentWorkers) ... ok
test_invalid_ctor_args_combinations (__main__.TestDataLoaderPersistentWorkers) ... ok
test_iterable_style_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok
test_iterabledataset_len (__main__.TestDataLoaderPersistentWorkers) ... ok
test_large_sampler_indices (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_len (__main__.TestDataLoaderPersistentWorkers) ... ok
test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_numpy (__main__.TestDataLoaderPersistentWorkers) ... ok
test_numpy_scalars (__main__.TestDataLoaderPersistentWorkers) ... ok
test_partial_workers (__main__.TestDataLoaderPersistentWorkers)
Check that workers exit even if the iterator is not exhausted. ... ok
test_proper_exit (__main__.TestDataLoaderPersistentWorkers)
There might be ConnectionResetError or leaked semaphore warning (due to dirty process exit), but they are all safe to ignore ... skipped 'psutil not found'
test_random_sampler (__main__.TestDataLoaderPersistentWorkers) ... ok
test_random_sampler_len_with_replacement (__main__.TestDataLoaderPersistentWorkers) ... ok
test_sampler (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_sampler_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok
test_segfault (__main__.TestDataLoaderPersistentWorkers) ... skipped 'temporarily disable until flaky failures are fixed'
test_seqential_batch_workers (__main__.TestDataLoaderPersistentWorkers) ... ok
test_seqential_batch_workers_prefetch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_sequential_batch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_sequential_nonbatch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_sequential_pin_memory (__main__.TestDataLoaderPersistentWorkers) ... ok
test_sequential_workers (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_batch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_batch_none (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_batch_workers (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_batch_workers_prefetch (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_pin_memory (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok
test_shuffle_workers (__main__.TestDataLoaderPersistentWorkers) ... ok
test_timeout (__main__.TestDataLoaderPersistentWorkers) ... /home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
/home/luke/Projects/Neural/pytorch/test/test_dataloader.py:36: UserWarning: psutil not found. Some critical data loader tests relying on it (e.g., TestDataLoader.test_proper_exit) will not run.
warnings.warn(err_msg)
Fail to import hypothesis in common_utils, tests are not derandomized
ok
test_typing (__main__.TestDataLoaderPersistentWorkers) ... ok
test_worker_init_fn (__main__.TestDataLoaderPersistentWorkers) ... ok
test_worker_seed (__main__.TestDataLoaderPersistentWorkers) ... ok
test_worker_seed_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok
test_lengths_must_equal_dataset_size (__main__.TestDatasetRandomSplit) ... ok
test_splits_are_mutually_exclusive (__main__.TestDatasetRandomSplit) ... ok
test_splits_generator (__main__.TestDatasetRandomSplit) ... ok
test_splits_have_correct_size (__main__.TestDatasetRandomSplit) ... ok
test_splits_indexing_type (__main__.TestDatasetRandomSplit)
Indices generated by random_split ... ok
test_splits_reproducibility (__main__.TestDatasetRandomSplit) ... ok
test_pin_memory (__main__.TestDictDataLoader) ... ok
test_sequential_batch (__main__.TestDictDataLoader) ... ok
test_ind_worker_queue (__main__.TestIndividualWorkerQueue) ... ok
test_dataloader_with_namedtuple (__main__.TestNamedTupleDataLoader) ... ok
test_set_affinity_in_worker_init (__main__.TestSetAffinity) ... ok
test_shuffle_pin_memory (__main__.TestStringDataLoader) ... ok
test_getitem (__main__.TestTensorDataset) ... ok
test_getitem_1d (__main__.TestTensorDataset) ... ok
test_len (__main__.TestTensorDataset) ... ok
test_many_tensors (__main__.TestTensorDataset) ... ok
test_single_tensor (__main__.TestTensorDataset) ... ok
----------------------------------------------------------------------
Ran 137 tests in 26.744s
OK (skipped=6)
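Several of the skips above (e.g. `test_proper_exit ... skipped 'psutil not found'`) come from an optional psutil dependency. A minimal sketch of that optional-dependency skip pattern, using only the stdlib — the class and method names here are illustrative, not PyTorch's actual test code:

```python
import unittest

# Guarded optional import: the log's environment lacks psutil,
# so tests that need it are reported as skipped instead of failing.
try:
    import psutil  # optional dependency
    HAS_PSUTIL = True
except ImportError:
    HAS_PSUTIL = False

class TestProperExit(unittest.TestCase):
    @unittest.skipIf(not HAS_PSUTIL, "psutil not found")
    def test_proper_exit(self):
        # A real test would inspect worker processes via psutil here.
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestProperExit)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With psutil installed the body runs; without it the runner counts the test as skipped, which is why the summary above reads `OK (skipped=6)` rather than reporting failures.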
Running test_dataset ... [2021-04-23 13:30:27.404466]
Executing ['/usr/bin/python3', 'test_dataset.py', '-v'] ... [2021-04-23 13:30:27.404506]
test_listdirfiles_iterable_dataset (__main__.TestIterableDatasetBasic) ... ok
test_loadfilesfromdisk_iterable_dataset (__main__.TestIterableDatasetBasic) ... test_dataset.py:46: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpsfhfqset'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
test_dataset.py:44: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpsfhfqset'>
for rec in dataset2:
test_dataset.py:46: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpsy05hji7'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
test_dataset.py:44: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpsy05hji7'>
for rec in dataset2:
test_dataset.py:46: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpyw2ci91a'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
/usr/lib/python3.6/unittest/case.py:605: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpjs109p_s/tmpyw2ci91a'>
testMethod()
ok
----------------------------------------------------------------------
Ran 2 tests in 0.012s
OK
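The ResourceWarnings above come from expressions like `open(rec[0], 'rb').read()`, which leave the file handle to be closed by the garbage collector. A minimal sketch of the leak and the context-manager fix (the payload and temp file are illustrative; on CPython the warning typically fires as soon as the unreferenced handle is finalized):

```python
import gc
import os
import tempfile
import warnings

# Stand-in for one of the temp files the tests read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payload")
    path = f.name

# Leaky pattern: nothing closes the handle explicitly.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    data = open(path, "rb").read()
    gc.collect()  # encourage finalization so the warning is observable
leaked = any(issubclass(w.category, ResourceWarning) for w in caught)

# Fixed pattern: the with-block closes the handle deterministically.
with open(path, "rb") as fh:
    data2 = fh.read()

os.unlink(path)
```

Both reads return the same bytes; only the second one avoids the ResourceWarning noise seen in the log.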
Fail to import hypothesis in common_utils, tests are not derandomized
Running test_datapipe ... [2021-04-23 13:30:28.055356]
Executing ['/usr/bin/python3', 'test_datapipe.py', '-v'] ... [2021-04-23 13:30:28.055401]
test_batch_datapipe (__main__.TestFunctionalIterDataPipe) ... ok
test_bucket_batch_datapipe (__main__.TestFunctionalIterDataPipe) ... ok
test_callable_datapipe (__main__.TestFunctionalIterDataPipe) ... ok
test_collate_datapipe (__main__.TestFunctionalIterDataPipe) ... ok
test_picklable (__main__.TestFunctionalIterDataPipe) ... /usr/local/lib/python3.6/dist-packages/torch/utils/data/datapipes/iter/callable.py:35: UserWarning: Lambda function is not supported for pickle, please use regular python function instead.
warnings.warn("Lambda function is not supported for pickle, "
ok
test_sampler_datapipe (__main__.TestFunctionalIterDataPipe) ... ok
test_listdirfiles_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok
test_loadfilesfromdisk_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... test_datapipe.py:94: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmpfh4prhgt.empty'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
test_datapipe.py:91: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmpfh4prhgt.empty'>
for rec in datapipe2:
test_datapipe.py:94: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmpvcxrdiky.byte'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
test_datapipe.py:91: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmpvcxrdiky.byte'>
for rec in datapipe2:
test_datapipe.py:94: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmp20p5r2f6.txt'>
self.assertTrue(rec[1].read() == open(rec[0], 'rb').read())
/usr/lib/python3.6/unittest/case.py:605: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpi8u8aadd/tmp20p5r2f6.txt'>
testMethod()
ok
test_readfilesfromtar_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... test_datapipe.py:113: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmpvb5quuv0.txt'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
test_datapipe.py:113: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmp8ezq0qr8.byte'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
test_datapipe.py:113: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmpx1mwhe23.empty'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
test_datapipe.py:118: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/test_tar.tar'>
for rec in datapipe3:
test_datapipe.py:124: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmpvb5quuv0.txt'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
test_datapipe.py:124: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmp8ezq0qr8.byte'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
test_datapipe.py:124: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/tmpx1mwhe23.empty'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
/usr/lib/python3.6/unittest/case.py:605: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpu083dqde/test_tar.tar'>
testMethod()
ok
test_readfilesfromzip_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... test_datapipe.py:142: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmpvh11kf6r.txt'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
test_datapipe.py:142: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmp73fxibk4.byte'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
test_datapipe.py:142: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmpojgtlyfr.empty'>
self.assertEqual(rec[1].read(), open(temp_file, 'rb').read())
/usr/lib/python3.6/zipfile.py:1686: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/test_zip.zip'>
self.close()
test_datapipe.py:153: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmpvh11kf6r.txt'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
test_datapipe.py:153: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmp73fxibk4.byte'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
test_datapipe.py:153: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmp7htmc3p3/tmpojgtlyfr.empty'>
self.assertEqual(data_refs[i][1].read(), open(self.temp_files[i], 'rb').read())
ok
----------------------------------------------------------------------
Ran 10 tests in 0.247s
OK
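The `Lambda function is not supported for pickle` warning in the `test_picklable` run above reflects a general pickle limitation: functions are pickled by qualified name, and a lambda's name is the unresolvable `<lambda>`. A minimal stdlib sketch of the difference:

```python
import pickle
from operator import neg  # a named, importable callable: picklable by reference

# A named function round-trips because pickle stores "operator.neg".
restored = pickle.loads(pickle.dumps(neg))

# A lambda cannot be looked up by name, so pickling it fails.
square = lambda x: x * x
try:
    pickle.dumps(square)
    lambda_ok = True
except Exception:
    lambda_ok = False
```

This is why the DataPipe warning suggests passing a regular (module-level) function instead of a lambda when the pipeline must be picklable, e.g. for multiprocess loading.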
Fail to import hypothesis in common_utils, tests are not derandomized
Running distributed/test_data_parallel ... [2021-04-23 13:30:28.963144]
Executing ['/usr/bin/python3', 'distributed/test_data_parallel.py', '-v'] ... [2021-04-23 13:30:28.963193]
test_autocast (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_buffers_requiring_grad (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_complex (__main__.TestDataParallel) ... skipped 'At least 2 CUDA GPUS needed'
test_data_parallel_device_args (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_function_deletion (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_lazy_linear (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_model_device (__main__.TestDataParallel)
Test device[0] check at forward time. ... skipped 'multi-GPU not supported'
test_data_parallel_model_no_refcycles (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_module (__main__.TestDataParallel) ... ok
test_data_parallel_module_kwargs_only (__main__.TestDataParallel) ... ok
test_data_parallel_module_kwargs_only_empty_dict (__main__.TestDataParallel) ... ok
test_data_parallel_module_kwargs_only_empty_list (__main__.TestDataParallel) ... ok
test_data_parallel_module_kwargs_only_empty_tuple (__main__.TestDataParallel) ... ok
test_data_parallel_module_zero_inputs (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_multiple_input (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_nested_input (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_nested_output (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_no_grad (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_rnn (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_small_back (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_data_parallel_sparse (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_gather_cpu (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_gather_different_len_dicts (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_gather_gpu (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_parallel_apply (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_parallel_apply_autocast (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_parallel_apply_passes_exception (__main__.TestDataParallel) ... ok
test_parameter_list_dict_replica (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_replicate (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_replicate_buffers (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_save_replica_module (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_scatter_cpu (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_scatter_gpu (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_strided_grad_layout (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
test_zero_grad (__main__.TestDataParallel) ... skipped 'multi-GPU not supported'
----------------------------------------------------------------------
Ran 36 tests in 3.186s
OK (skipped=30)
Fail to import hypothesis in common_utils, tests are not derandomized
Running distributed/test_distributed_fork ... [2021-04-23 13:30:33.380651]
MPI not available -- MPI backend tests will be skipped
Running distributed tests for the test backend with env init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:30:33.384755]
test_backend_apis (__main__.TestBackendDynamicLoad) ... Fail to import hypothesis in common_utils, tests are not derandomized
/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[W init.cpp:1105] Warning: ProcessGroup::Work::is_success API is being deprecated, please ping https://github.com/pytorch/pytorch/issues/46291 if you see this warning (function operator())
ok
----------------------------------------------------------------------
Ran 1 test in 7.341s
OK
Running distributed tests for the test backend with file init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:30:41.452937]
test_backend_apis (__main__.TestBackendDynamicLoad) ... Fail to import hypothesis in common_utils, tests are not derandomized
/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[W init.cpp:1105] Warning: ProcessGroup::Work::is_success API is being deprecated, please ping https://github.com/pytorch/pytorch/issues/46291 if you see this warning (function operator())
ok
----------------------------------------------------------------------
Ran 1 test in 7.205s
OK
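The repeated `DeprecationWarning: the imp module is deprecated` above comes from `torch/utils/cpp_extension.py` importing a compiled module via `imp`. The `importlib` replacement for the old `imp.load_source(name, path)` looks roughly like this; the throwaway module written here stands in for a generated extension:

```python
import importlib.util
import os
import tempfile

# Write a throwaway module to disk (stand-in for a built extension).
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("ANSWER = 42\n")
    path = f.name

# importlib equivalent of the deprecated imp.load_source("ext", path):
spec = importlib.util.spec_from_file_location("ext", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

os.unlink(path)
```

After `exec_module`, attributes of the loaded file are available on `mod` just as they were with `imp.load_source`.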
Running distributed tests for the nccl backend with env init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:30:49.359251]
test_Backend_enum_class (__main__.TestDistBackendWithFork) ... Fail to import hypothesis in common_utils, tests are not derandomized
skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallelCPU_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallel_SyncBatchNorm (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_2D_Input (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_non_default_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_requires_grad (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_with_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedSampler_padding (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_SyncBatchNorm_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_coalesced_complex (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_full_group (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_group (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_simple (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_with_empty (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_complex (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_cuda (__main__.TestDistBackendWithFork) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max_complex_unsupported (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_coalesced_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_complex_unsupported_ops (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_result_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_async (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_complex (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_allgather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_global (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_group (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_batch_isend_irecv_gloo (__main__.TestDistBackendWithFork) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_gloo_tags (__main__.TestDistBackendWithFork) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_mixed_backend_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_no_rank_zero_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_list_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_self_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_tensor_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_cuda (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/47645 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_broadcast_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_multigpu (__main__.TestDistBackendWithFork) ... skipped 'NCCL broadcast multigpu skipped'
test_broadcast_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_different_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_same_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_device (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_grad_div_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_powerSGD (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_ignore_params_arg (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_join_model_equivalence (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_cpu (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_ddp_logging_data_gpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_namedtuple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_shared_grad_acc_unused_params (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_sync_params_and_buffers (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_join_disable (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs_replicated_error (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_unused_params_rebuild_buckets_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_dump_DDP_relevant_env_vars (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_checks (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_object (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_get_backend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_irecv (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support irecv'
test_isend (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support isend'
test_nccl_backend_bool_allgather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_reduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_gather_object_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_high_priority_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum_cuda_twice (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/50840 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_reduce_sum_twice (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_scatter (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_checks (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_scatter_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_object_list (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_send_recv (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv'
test_send_recv_any_source (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv from any source'
test_send_recv_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv_with_tag (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv'
test_sparse_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Only Gloo backend support sparse all reduce'
test_sparse_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Gloo backend support sparse all reduce'
----------------------------------------------------------------------
Ran 172 tests in 8.811s
OK (skipped=172)
Running distributed tests for the nccl backend with file init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:30:58.840802]
test_Backend_enum_class (__main__.TestDistBackendWithFork) ... Fail to import hypothesis in common_utils, tests are not derandomized
skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallelCPU_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallel_SyncBatchNorm (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_2D_Input (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_non_default_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_requires_grad (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_with_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedSampler_padding (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_SyncBatchNorm_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_coalesced_complex (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_full_group (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_group (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_simple (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_with_empty (__main__.TestDistBackendWithFork) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_complex (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_cuda (__main__.TestDistBackendWithFork) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max_complex_unsupported (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_coalesced_min (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_product (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_sum (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_complex_unsupported_ops (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_result_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_async (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_complex (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_allgather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group (__main__.TestDistBackendWithFork) ... skipped 'NCCL does not support CPU barrier'
test_barrier_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_global (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_group (__main__.TestDistBackendWithFork) ... skipped 'Only gloo backend supports timeouts'
test_batch_isend_irecv_gloo (__main__.TestDistBackendWithFork) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_gloo_tags (__main__.TestDistBackendWithFork) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_mixed_backend_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_no_rank_zero_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_list_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_self_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_tensor_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_cuda (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/47645 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_broadcast_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_multigpu (__main__.TestDistBackendWithFork) ... skipped 'NCCL broadcast multigpu skipped'
test_broadcast_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_different_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_same_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_device (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_grad_div_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_powerSGD (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_ignore_params_arg (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_join_model_equivalence (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_cpu (__main__.TestDistBackendWithFork) ... skipped 'nccl does not support DDP on CPU models'
test_ddp_logging_data_gpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_namedtuple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_shared_grad_acc_unused_params (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_sync_params_and_buffers (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_join_disable (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs_replicated_error (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_unused_params_rebuild_buckets_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_dump_DDP_relevant_env_vars (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_checks (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_gather_object (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_get_backend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_irecv (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support irecv'
test_isend (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support isend'
test_nccl_backend_bool_allgather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_reduce (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_gather_object_err (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_high_priority_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum_cuda_twice (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/50840 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_reduce_sum_twice (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_scatter (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_checks (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support CPU tensors'
test_scatter_full_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_group (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support scatter'
test_scatter_object_list (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'gloo'}"
test_send_recv (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv'
test_send_recv_any_source (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv from any source'
test_send_recv_nccl (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv_with_tag (__main__.TestDistBackendWithFork) ... skipped 'Nccl does not support send/recv'
test_sparse_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Only Gloo backend support sparse all reduce'
test_sparse_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Gloo backend support sparse all reduce'
----------------------------------------------------------------------
Ran 172 tests in 8.876s
OK (skipped=172)
Running distributed tests for the gloo backend with env init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:31:08.389089]
test_Backend_enum_class (__main__.TestDistBackendWithFork) ... Fail to import hypothesis in common_utils, tests are not derandomized
skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_2D_Input (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_non_default_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_requires_grad (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_with_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedSampler_padding (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_SyncBatchNorm_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_simple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_with_empty (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all gather'
test_all_gather_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all gather'
test_all_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports allgather multigpu'
test_all_gather_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports allgather multigpu'
test_all_reduce_coalesced_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_max_complex_unsupported (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_complex_unsupported_ops (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_result_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL supports CUDA all_to_all'
test_all_to_all_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL supports CUDA all_to_all'
test_all_to_all_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_allgather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_global (__main__.TestDistBackendWithFork) ... skipped 'Requires file:// initialization method. Both tcp:// and env:// rely on the TCP store for which reinitialization has proven racy.'
test_barrier_timeout_group (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/50699 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_batch_isend_irecv_gloo (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_gloo_tags (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_mixed_backend_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_no_rank_zero_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_op_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_op_list_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_self_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_tensor_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_cuda (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/47645 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_broadcast_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_different_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_same_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_device (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_grad_div_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_hook_parity_allreduce_process_group (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_hook_parity_powerSGD (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_ignore_params_arg (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_join_model_equivalence (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_cpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_gpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_namedtuple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_shared_grad_acc_unused_params (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_sync_params_and_buffers (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_join_disable (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs_replicated_error (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_unused_params_rebuild_buckets_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_dump_DDP_relevant_env_vars (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_checks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_backend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_irecv (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_isend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_allgather (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_allreduce (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_broadcast (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_reduce (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_gather_object_err (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_high_priority_stream (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports high priority stream'
test_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports reduce multigpu'
test_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA reduce'
test_reduce_sum_cuda_twice (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA reduce'
test_reduce_sum_twice (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_checks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/52283 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_send_recv_any_source (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Send Recv Only'
test_send_recv_with_tag (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/52284 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_sparse_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_sparse_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
----------------------------------------------------------------------
Ran 172 tests in 14.818s
OK (skipped=172)
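Every test above is skipped by a capability gate such as "Need at least 2 CUDA devices" or "Only Nccl supports CUDA all gather". As a minimal sketch of how such messages are typically produced (this is *not* PyTorch's actual helper; `require_n_gpus` and the hard-coded `device_count` are hypothetical stand-ins for the real `skip_if_lt_x_gpu`-style decorators), a decorator checks the environment and marks the test skipped with the reason string the log then prints:

```python
import unittest

def require_n_gpus(n, device_count):
    """Hypothetical helper: skip the test unless at least `n` GPUs are visible.

    Real test suites would query the device count at runtime (e.g. from the
    framework) instead of taking it as an argument; it is a parameter here
    only so the sketch is self-contained and deterministic.
    """
    return unittest.skipIf(device_count < n, f"Need at least {n} CUDA devices")

class Demo(unittest.TestCase):
    @require_n_gpus(2, device_count=0)  # simulate a machine with no GPUs
    def test_needs_two_gpus(self):
        self.fail("should have been skipped")  # never reached when gated

# Run the case; the runner records the skip with its reason string,
# which is what a verbose run prints as "... skipped 'Need at least 2 CUDA devices'".
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
)
```

Because the gate fires at collection/run time rather than inside the test body, the suite still reports `OK (skipped=N)` instead of failures on machines that lack the required backend or GPU count, exactly as in the runs above.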
Running distributed tests for the gloo backend with file init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_fork.py', '-v'] ... [2021-04-23 13:31:23.876689]
test_Backend_enum_class (__main__.TestDistBackendWithFork) ... Fail to import hypothesis in common_utils, tests are not derandomized
skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_2D_Input (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_non_default_stream (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_requires_grad (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_with_grad_is_view (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_DistributedSampler_padding (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_SyncBatchNorm_process_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_simple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_coalesced_with_empty (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all gather'
test_all_gather_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all gather'
test_all_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports allgather multigpu'
test_all_gather_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports allgather multigpu'
test_all_reduce_coalesced_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_max_complex_unsupported (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_complex_unsupported_ops (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_multigpu_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_result_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_async (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_complex (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL supports CUDA all_to_all'
test_all_to_all_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL supports CUDA all_to_all'
test_all_to_all_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_equal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split_full_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_all_to_all_single_unequal_split_group (__main__.TestDistBackendWithFork) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA all_to_all_single'
test_allgather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_backend_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_global (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_group (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/50699 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_batch_isend_irecv_gloo (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_gloo_tags (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_mixed_backend_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_no_rank_zero_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_op_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_op_list_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_self_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_batch_isend_irecv_tensor_err (__main__.TestDistBackendWithFork) ... skipped 'NCCL Batch Send Recv Only'
test_broadcast (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_cuda (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/47645 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_broadcast_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_different_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_same_across_ranks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_device (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_grad_div_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_hook_parity_allreduce_process_group (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_hook_parity_powerSGD (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports DDP communication hook'
test_ddp_ignore_params_arg (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_join_model_equivalence (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_cpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_gpu (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_namedtuple (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_shared_grad_acc_unused_params (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_sync_params_and_buffers (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_join_disable (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs_replicated_error (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_ddp_unused_params_rebuild_buckets_exception (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_destroy_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_dump_DDP_relevant_env_vars (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_checks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_gather_object (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_backend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_get_rank_size_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_irecv (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_isend (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_nccl_backend_bool_allgather (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_allreduce (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_broadcast (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_backend_bool_reduce (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_gather_object_err (__main__.TestDistBackendWithFork) ... skipped "Test requires backend to be one of {'nccl'}"
test_nccl_high_priority_stream (__main__.TestDistBackendWithFork) ... skipped 'Only NCCL backend supports high priority stream'
test_reduce_full_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_full_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_group_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_max (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_min (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_multigpu (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl backend supports reduce multigpu'
test_reduce_product (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA reduce'
test_reduce_sum_cuda_twice (__main__.TestDistBackendWithFork) ... skipped 'Only Nccl supports CUDA reduce'
test_reduce_sum_twice (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_checks (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_full_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_group (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_scatter_object_list (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/52283 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_send_recv_any_source (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_send_recv_nccl (__main__.TestDistBackendWithFork) ... skipped 'NCCL Send Recv Only'
test_send_recv_with_tag (__main__.TestDistBackendWithFork) ... skipped 'Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/52284 To enable set the environment variable PYTORCH_RUN_DISABLED_TESTS=1'
test_sparse_all_reduce_sum (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
test_sparse_all_reduce_sum_cuda (__main__.TestDistBackendWithFork) ... skipped 'Need at least 2 CUDA devices'
----------------------------------------------------------------------
Ran 172 tests in 14.819s
OK (skipped=172)
Running distributed/test_distributed_spawn ... [2021-04-23 13:31:39.364562]
MPI not available -- MPI backend tests will be skipped
Running distributed tests for the test backend with env init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_spawn.py', '-v'] ... [2021-04-23 13:31:39.368837]
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running distributed tests for the test backend with file init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_spawn.py', '-v'] ... [2021-04-23 13:31:40.035901]
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Fail to import hypothesis in common_utils, tests are not derandomized
Running distributed tests for the nccl backend with env init_method
Executing ['/usr/bin/python3', 'distributed/test_distributed_spawn.py', '-v'] ... [2021-04-23 13:31:40.668010]
test_Backend_enum_class (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallelCPU (__main__.TestDistBackendWithSpawn) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallelCPU_grad_is_view (__main__.TestDistBackendWithSpawn) ... skipped 'nccl does not support DDP on CPU models'
test_DistributedDataParallel_SyncBatchNorm (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_2D_Input (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_non_default_stream (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_requires_grad (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedDataParallel_with_grad_is_view (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_DistributedSampler_padding (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_SyncBatchNorm_process_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_gather (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_coalesced_complex (__main__.TestDistBackendWithSpawn) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_group (__main__.TestDistBackendWithSpawn) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_simple (__main__.TestDistBackendWithSpawn) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_coalesced_with_empty (__main__.TestDistBackendWithSpawn) ... skipped 'all_gather_coalesced does not support NCCL'
test_all_gather_complex (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_cuda_complex (__main__.TestDistBackendWithSpawn) ... skipped 'CUDA all gather skipped for NCCL'
test_all_gather_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_group (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_gather_multigpu (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_gather_multigpu_complex (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_coalesced_full_group_max (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_min (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_product (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_full_group_sum (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_max (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_min (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_product (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_group_sum (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_max_complex_unsupported (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_coalesced_min (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_product (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_coalesced_sum (__main__.TestDistBackendWithSpawn) ... skipped "Test requires backend to be one of {'gloo'}"
test_all_reduce_complex_unsupported_ops (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_max (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_min (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_product (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_full_group_sum (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_max (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_min (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_product (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_group_sum (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_max (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_min (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_multigpu (__main__.TestDistBackendWithSpawn) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_multigpu_complex (__main__.TestDistBackendWithSpawn) ... skipped 'CUDA all_reduce multigpu skipped for NCCL'
test_all_reduce_product (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_result_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_async (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_complex (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_all_reduce_sum_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_async (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_reduce_sum_cuda_complex (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_full_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports all_to_all'
test_all_to_all_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_full_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_equal_split_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_equal_split_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_full_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_all_to_all_single_unequal_split_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only MPI supports CPU all_to_all_single'
test_all_to_all_single_unequal_split_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_allgather_object (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_backend_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_backend_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_barrier (__main__.TestDistBackendWithSpawn) ... skipped 'NCCL does not support CPU barrier'
test_barrier_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_barrier_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'NCCL does not support CPU barrier'
test_barrier_full_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_barrier_group (__main__.TestDistBackendWithSpawn) ... skipped 'NCCL does not support CPU barrier'
test_barrier_group_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_barrier_timeout_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_global (__main__.TestDistBackendWithSpawn) ... skipped 'Only gloo backend supports timeouts'
test_barrier_timeout_group (__main__.TestDistBackendWithSpawn) ... skipped 'Only gloo backend supports timeouts'
test_batch_isend_irecv_gloo (__main__.TestDistBackendWithSpawn) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_gloo_tags (__main__.TestDistBackendWithSpawn) ... skipped 'GLOO Batch Send Recv CPU'
test_batch_isend_irecv_mixed_backend_err (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_nccl (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_no_rank_zero_nccl (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_err (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_op_list_err (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_self_nccl (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_batch_isend_irecv_tensor_err (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_broadcast (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_cuda (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_broadcast_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_group (__main__.TestDistBackendWithSpawn) ... skipped 'Nccl does not support CPU tensors'
test_broadcast_multigpu (__main__.TestDistBackendWithSpawn) ... skipped 'NCCL broadcast multigpu skipped'
test_broadcast_object_list (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_different_across_ranks (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_control_flow_same_across_ranks (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_device (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_grad_div_uneven_inputs (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_allreduce_process_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_hook_parity_powerSGD (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_ignore_params_arg (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_join_model_equivalence (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_logging_data_cpu (__main__.TestDistBackendWithSpawn) ... skipped 'nccl does not support DDP on CPU models'
test_ddp_logging_data_gpu (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_namedtuple (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_shared_grad_acc_unused_params (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_sync_params_and_buffers (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_exception (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_input_join_disable (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_uneven_inputs_replicated_error (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_ddp_unused_params_rebuild_buckets_exception (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_destroy_full_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_destroy_group (__main__.TestDistBackendWithSpawn) ... skipped 'Need at least 2 CUDA devices'
test_dump_DDP_relevant_env_vars (__main__.TestDistBackendWithSpawn) ...