==> Building on charizard
==> Checking for remote environment...
==> Syncing package to remote host...
sending incremental file list
./
0001-Fix-StopIteration-handling-which-breaks-in-python-3..patch
          4,953 100%    4.06MB/s    0:00:00 (xfr#1, to-chk=2/4)
PKGBUILD
          1,490 100%    1.42MB/s    0:00:00 (xfr#2, to-chk=1/4)
python-networkx-3.1-1.log
            383 100%  374.02kB/s    0:00:00 (xfr#3, to-chk=0/4)

sent 1,277 bytes  received 142 bytes  2,838.00 bytes/sec
total size is 6,639  speedup is 4.68
==> Running extra-riscv64-build -- -d /home/felix/packages/riscv64-pkg-cache:/var/cache/pacman/pkg -l root2 on remote host...
:: Synchronizing package databases...
 core downloading...
 extra downloading...
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...

Package (1)  Old Version  New Version  Net Change  Download Size
core/groff   1.23.0-1     1.23.0-2       0.00 MiB       2.30 MiB

Total Download Size:   2.30 MiB
Total Installed Size:  9.32 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n]
:: Retrieving packages...
 groff-1.23.0-2-riscv64 downloading...
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
upgrading groff...
:: Running post-transaction hooks...
(1/1) Updating the info directory file...
==> Building in chroot for [extra] (riscv64)...
==> Synchronizing chroot copy [/var/lib/archbuild/extra-riscv64/root] -> [root2]...done
==> Making package: python-networkx 3.1-1 (Wed Jul 12 03:34:18 2023)
==> Retrieving sources...
  -> Downloading networkx-3.1.tar.gz...
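An aside on the patch synced above: its filename suggests it fixes `StopIteration` handling that breaks under modern Python, which is the PEP 479 behavior change (since Python 3.7, a `StopIteration` escaping a generator body is converted to `RuntimeError`). The actual patch contents are not shown in this log; the snippet below is only an illustration of the kind of fix that name implies, with hypothetical function names.

```python
# Illustrative sketch of a PEP 479-style fix (not the actual patch).
# Since Python 3.7, a StopIteration that escapes a generator body is
# re-raised as RuntimeError, so generators must `return` instead.

def first_two_broken(iterable):
    it = iter(iterable)
    yield next(it)   # next() on an exhausted iterator -> RuntimeError in 3.7+
    yield next(it)

def first_two_fixed(iterable):
    it = iter(iterable)
    for _ in range(2):
        try:
            yield next(it)
        except StopIteration:
            return   # the PEP 479-safe way to end a generator early

print(list(first_two_fixed([1])))   # [1]
try:
    list(first_two_broken([1]))
except RuntimeError as e:
    print(type(e).__name__)         # RuntimeError
```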
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2084k    0 2084k    0     0   506k      0 --:--:--  0:00:04 --:--:-- 1048k
==> Validating source files with sha512sums...
    networkx-3.1.tar.gz ... Passed
==> Making package: python-networkx 3.1-1 (Wed Jul 12 03:34:43 2023)
==> Checking runtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...
warning: dependency cycle detected:
warning: harfbuzz will be installed before its freetype2 dependency

Package (56)                     New Version  Net Change  Download Size

extra/blas                       3.11.0-2       0.20 MiB
extra/cblas                      3.11.0-2       0.17 MiB
extra/freetype2                  2.13.1-1       1.51 MiB
extra/fribidi                    1.0.13-2       0.23 MiB
extra/graphite                   1:1.3.14-3     0.17 MiB
extra/harfbuzz                   7.3.0-2        3.53 MiB
extra/lapack                     3.11.0-2       4.26 MiB
extra/lcms2                      2.15-1         0.58 MiB
extra/libimagequant              4.1.1-1        0.56 MiB
extra/libjpeg-turbo              2.1.5.1-1      1.38 MiB
core/libnsl                      2.0.0-3        0.06 MiB
extra/libpng                     1.6.40-2       0.51 MiB
extra/libraqm                    0.10.1-1       0.18 MiB
extra/libtiff                    4.5.1-1        5.72 MiB
extra/libxau                     1.0.11-2       0.02 MiB
extra/libxcb                     1.15-2         3.60 MiB
extra/libxdmcp                   1.1.4-2        0.12 MiB
extra/openjpeg2                  2.5.0-2       13.14 MiB
core/python                      3.11.3-2     107.17 MiB
extra/python-autocommand         2.2.2-4        0.08 MiB
extra/python-chardet             5.1.0-3        3.02 MiB
extra/python-contourpy           1.1.0-1        0.60 MiB       0.19 MiB
extra/python-cycler              0.11.0-3       0.06 MiB       0.01 MiB
extra/python-dateutil            2.8.2-5        1.05 MiB
extra/python-fastjsonschema      2.17.1-1       0.29 MiB
extra/python-fonttools           4.40.0-1      17.80 MiB       2.52 MiB
extra/python-idna                3.4-3          0.71 MiB
extra/python-inflect             6.1.0-1        0.38 MiB
extra/python-jaraco.context      4.3.0-3        0.04 MiB
extra/python-jaraco.functools    3.8.0-1        0.07 MiB
extra/python-jaraco.text         3.11.1-3       0.09 MiB
extra/python-kiwisolver          1.4.4-4        0.11 MiB       0.05 MiB
extra/python-more-itertools      9.1.1-4        0.61 MiB
extra/python-ordered-set         4.1.0-4        0.07 MiB
extra/python-packaging           23.1-1         0.47 MiB
extra/python-pillow              9.5.0-2        4.00 MiB
extra/python-platformdirs        3.8.1-1        0.21 MiB       0.03 MiB
extra/python-pooch               1.7.0-4        0.73 MiB       0.11 MiB
extra/python-pydantic            1.10.9-1       6.57 MiB
extra/python-pyparsing           3.0.9-3        1.29 MiB
extra/python-pytz                2023.3-1       0.17 MiB
extra/python-requests            2.28.2-4       0.61 MiB
extra/python-setuptools          1:67.7.0-1     4.68 MiB
extra/python-six                 1.16.0-8       0.12 MiB
extra/python-tomli               2.0.1-3        0.11 MiB
extra/python-trove-classifiers   2023.7.6-1     0.11 MiB
extra/python-typing_extensions   4.7.0-1        0.37 MiB
extra/python-urllib3             1.26.15-1      1.30 MiB
extra/python-validate-pyproject  0.13-1         0.29 MiB
extra/qhull                      2020.2-4       8.11 MiB
extra/xcb-proto                  1.15.2-3       1.01 MiB
extra/xorgproto                  2023.2-1       1.43 MiB
extra/python-matplotlib          3.7.1-4       26.92 MiB       5.65 MiB
extra/python-numpy               1.25.1-1      41.34 MiB       6.07 MiB
extra/python-pandas              1.5.3-3       86.19 MiB      12.46 MiB
extra/python-scipy               1.11.1-1      92.43 MiB      19.10 MiB

Total Download Size:    46.18 MiB
Total Installed Size:  446.55 MiB

:: Proceed with installation? [Y/n]
:: Retrieving packages...
 python-scipy-1.11.1-1-riscv64 downloading...
 python-pandas-1.5.3-3-riscv64 downloading...
 python-numpy-1.25.1-1-riscv64 downloading...
 python-matplotlib-3.7.1-4-riscv64 downloading...
 python-fonttools-4.40.0-1-riscv64 downloading...
 python-contourpy-1.1.0-1-riscv64 downloading...
 python-pooch-1.7.0-4-any downloading...
 python-kiwisolver-1.4.4-4-riscv64 downloading...
 python-platformdirs-3.8.1-1-any downloading...
 python-cycler-0.11.0-3-any downloading...
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing blas...
installing cblas...
installing lapack...
installing libnsl...
installing python...
Optional dependencies for python
    python-setuptools [pending]
    python-pip
    sqlite [installed]
    mpdecimal: for decimal
    xz: for lzma [installed]
    tk: for tkinter
installing python-numpy...
Optional dependencies for python-numpy
    openblas: faster linear algebra
installing python-urllib3...
Optional dependencies for python-urllib3
    python-brotli: Brotli support
    python-certifi: security support
    python-cryptography: security support
    python-idna: security support [pending]
    python-pyopenssl: security support
    python-pysocks: SOCKS support
installing python-chardet...
installing python-idna...
installing python-requests...
Optional dependencies for python-requests
    python-pysocks: SOCKS proxy support
installing python-typing_extensions...
installing python-platformdirs...
installing python-pooch...
installing python-scipy...
Optional dependencies for python-scipy
    python-pillow: for image saving module [pending]
installing libpng...
installing graphite...
Optional dependencies for graphite
    graphite-docs: Documentation
installing harfbuzz...
Optional dependencies for harfbuzz
    harfbuzz-utils: utilities
installing freetype2...
installing python-contourpy...
Optional dependencies for python-contourpy
    python-matplotlib: matplotlib renderer [pending]
installing python-six...
installing python-cycler...
installing python-dateutil...
installing python-fonttools...
Optional dependencies for python-fonttools
    python-brotli: to compress/decompress WOFF 2.0 web fonts
    python-fs: to read/write UFO source files
    python-lxml: faster backend for XML files reading/writing
    python-lz4: for graphite type tables in ttLib/tables
    python-matplotlib: for visualizing DesignSpaceDocument and resulting VariationModel [pending]
    python-pyqt5: for drawing glyphs with Qt's QPainterPath
    python-reportlab: to draw glyphs as PNG images
    python-scipy: for finding wrong contour/component order between different masters [installed]
    python-sympy: for symbolic font statistics analysis
    python-uharfbuzz: to use the Harfbuzz Repacker for packing GSUB/GPOS tables
    python-unicodedata2: for displaying the Unicode character names when dumping the cmap table with ttx
    python-zopfli: faster backend for WOFF 1.0 web fonts compression
installing python-kiwisolver...
installing python-packaging...
installing libjpeg-turbo...
Optional dependencies for libjpeg-turbo
    java-runtime>11: for TurboJPEG Java wrapper
installing libtiff...
Optional dependencies for libtiff
    freeglut: for using tiffgt
installing lcms2...
installing fribidi...
installing libraqm...
installing openjpeg2...
installing libimagequant...
installing xcb-proto...
installing xorgproto...
installing libxdmcp...
installing libxau...
installing libxcb...
installing python-pillow...
Optional dependencies for python-pillow
    libwebp: for webp images
    tk: for the ImageTK module
    python-olefile: OLE2 file support
    python-pyqt5: for the ImageQt module
installing python-pyparsing...
Optional dependencies for python-pyparsing
    python-railroad-diagrams: for generating Railroad Diagrams
    python-jinja: for generating Railroad Diagrams
installing qhull...
installing python-matplotlib...
Optional dependencies for python-matplotlib
    tk: Tk{Agg,Cairo} backends
    pyside2: alternative for Qt5{Agg,Cairo} backends
    pyside6: alternative for Qt6{Agg,Cairo} backends
    python-pyqt5: Qt5{Agg,Cairo} backends
    python-pyqt6: Qt6{Agg,Cairo} backends
    python-gobject: for GTK{3,4}{Agg,Cairo} backend
    python-wxpython: WX{Agg,Cairo} backend
    python-cairo: {GTK{3,4},Qt{5,6},Tk,WX}Cairo backends
    python-cairocffi: alternative for Cairo backends
    python-tornado: WebAgg backend
    ffmpeg: for saving movies
    imagemagick: for saving animated gifs
    ghostscript: usetex dependencies
    texlive-bin: usetex dependencies
    texlive-latexextra: usetex usage with pdflatex
    python-certifi: https support
installing python-pytz...
installing python-more-itertools...
installing python-jaraco.functools...
installing python-jaraco.context...
installing python-autocommand...
installing python-pydantic...
Optional dependencies for python-pydantic
    python-dotenv: for .env file support
    python-email-validator: for email validation
installing python-inflect...
installing python-jaraco.text...
installing python-ordered-set...
installing python-tomli...
installing python-fastjsonschema...
installing python-trove-classifiers...
installing python-validate-pyproject...
installing python-setuptools...
installing python-pandas...
Optional dependencies for python-pandas
    python-pandas-datareader: pandas.io.data replacement (recommended)
    python-numexpr: accelerating certain numerical operations (recommended)
    python-bottleneck: accelerating certain types of nan evaluations (recommended)
    python-matplotlib: plotting [installed]
    python-jinja: conditional formatting with DataFrame.style
    python-tabulate: printing in Markdown-friendly format
    python-scipy: miscellaneous statistical functions [installed]
    python-numba: alternative execution engine
    python-xarray: pandas-like API for N-dimensional data
    python-xlrd: Excel XLS input
    python-xlwt: Excel XLS output
    python-openpyxl: Excel XLSX input/output
    python-xlsxwriter: alternative Excel XLSX output
    python-beautifulsoup4: read_html function (in any case)
    python-html5lib: read_html function (and/or python-lxml)
    python-lxml: read_xml, to_xml and read_html function (and/or python-html5lib)
    python-sqlalchemy: SQL database support
    python-psycopg2: PostgreSQL engine for sqlalchemy
    python-pymysql: MySQL engine for sqlalchemy
    python-pytables: HDF5-based reading / writing
    python-blosc: for msgpack compression using blosc
    zlib: compression for msgpack [installed]
    python-pyarrow: Parquet, ORC and feather reading/writing
    python-fsspec: handling files aside from local and HTTP
    python-pyqt5: read_clipboard function (only one needed)
    python-qtpy: read_clipboard function (only one needed)
    xclip: read_clipboard function (only one needed)
    xsel: read_clipboard function (only one needed)
    python-brotli: Brotli compression
    python-snappy: Snappy compression
    python-zstandard: Zstandard (zstd) compression
==> Checking buildtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...
Package (53)                New Version           Net Change  Download Size

extra/aom                   3.6.1-1                 4.42 MiB
extra/avahi                 1:0.8+r22+gfd482a7-1    1.71 MiB
extra/cairo                 1.17.8-2                1.33 MiB
extra/dav1d                 1.2.1-1                 0.58 MiB
core/dbus                   1.14.8-1                0.80 MiB
extra/fontconfig            2:2.14.2-1              1.00 MiB
extra/gd                    2.3.3-6                 0.55 MiB
extra/gdk-pixbuf2           2.42.10-2               2.90 MiB
extra/ghostscript           10.01.2-1              42.53 MiB
extra/giflib                5.2.1-2                 0.22 MiB
extra/graphviz              8.0.5-2                10.33 MiB
extra/gsfonts               20200910-3              3.11 MiB
extra/gts                   0.7.6.121130-2          0.50 MiB
extra/ijs                   0.35-5                  0.11 MiB
extra/jbig2dec              0.19-1                  0.12 MiB
extra/libavif               0.11.1-1                0.29 MiB
extra/libcups               1:2.4.6-1               0.77 MiB
extra/libdaemon             0.14-5                  0.05 MiB
extra/libdatrie             0.2.13-4                0.47 MiB
extra/libde265              1.0.12-1                0.58 MiB
extra/libheif               1.16.2-1                0.85 MiB
extra/libice                1.1.1-2                 0.33 MiB
extra/libidn                1.41-1                  0.75 MiB
extra/libpaper              2.1.1-1                 0.06 MiB       0.02 MiB
extra/librsvg               2:2.56.2-1              7.80 MiB
extra/libsm                 1.2.4-1                 0.25 MiB
extra/libthai               0.1.29-3                1.21 MiB
extra/libwebp               1.3.1-1                 0.75 MiB
extra/libx11                1.8.6-1                 9.73 MiB
extra/libxext               1.3.5-1                 0.29 MiB
extra/libxft                2.3.8-1                 0.11 MiB
extra/libxpm                3.5.16-1                0.13 MiB
extra/libxrender            0.9.11-1                0.08 MiB
extra/libxslt               1.1.38-1                0.71 MiB
extra/libxt                 1.3.0-1                 1.96 MiB
extra/libyaml               0.2.5-2                 0.15 MiB
extra/libyuv                r2322+3aebf69d-1        1.06 MiB
core/lzo                    2.10-5                  0.34 MiB
extra/netpbm                10.73.43-1              5.18 MiB
extra/pango                 1:1.50.14-1             2.18 MiB
extra/pixman                0.42.2-1                0.40 MiB
extra/poppler-data          0.4.12-1               12.34 MiB
extra/python-iniconfig      2.0.0-4                 0.04 MiB
extra/python-pluggy         1.0.0-4                 0.13 MiB
extra/python-pytest         7.4.0-1                 4.01 MiB
extra/rav1e                 0.6.6-1                 4.26 MiB
extra/shared-mime-info      2.2+13+ga2ffb28-1       4.51 MiB
extra/svt-av1               1.6.0-1                 3.38 MiB
extra/x265                  3.5-3                   3.62 MiB
extra/python-lxml           4.9.2-3                 4.39 MiB
extra/python-pydot          1.4.2-4                 0.27 MiB       0.05 MiB
extra/python-pytest-runner  6.0.0-5                 0.04 MiB
extra/python-yaml           6.0-3                   0.93 MiB

Total Download Size:     0.08 MiB
Total Installed Size:  144.61 MiB

:: Proceed with installation? [Y/n]
:: Retrieving packages...
 python-pydot-1.4.2-4-any downloading...
 libpaper-2.1.1-1-riscv64 downloading...
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing python-iniconfig...
installing python-pluggy...
installing python-pytest...
installing python-pytest-runner...
installing libxslt...
Optional dependencies for libxslt
    python: Python bindings [installed]
installing python-lxml...
Optional dependencies for python-lxml
    python-beautifulsoup4: support for beautifulsoup parser to parse not well formed HTML
    python-cssselect: support for cssselect
    python-html5lib: support for html5lib parser
    python-lxml-docs: offline docs
installing fontconfig...
Creating fontconfig configuration...
Rebuilding fontconfig cache...
installing libice...
installing libsm...
installing libx11...
installing libxt...
installing libxext...
installing libxpm...
installing giflib...
installing libwebp...
installing aom...
installing dav1d...
Optional dependencies for dav1d
    dav1d-doc: HTML documentation
installing rav1e...
installing svt-av1...
installing libyuv...
installing libavif...
installing libde265...
Optional dependencies for libde265
    ffmpeg: for sherlock265
    qt5-base: for sherlock265
    sdl: dec265 YUV overlay output
installing x265...
installing libheif...
Optional dependencies for libheif
    libjpeg: for heif-convert and heif-enc [installed]
    libpng: for heif-convert and heif-enc [installed]
    svt-av1: svt-av1 encoder [installed]
    rav1e: rav1e encoder [installed]
installing gd...
Optional dependencies for gd
    perl: bdftogd script [installed]
installing libxrender...
installing lzo...
installing pixman...
installing cairo...
installing shared-mime-info...
installing gdk-pixbuf2...
Optional dependencies for gdk-pixbuf2
    libwmf: Load .wmf and .apm
    libopenraw: Load .dng, .cr2, .crw, .nef, .orf, .pef, .arw, .erf, .mrw, and .raf
    libavif: Load .avif [installed]
    libheif: Load .heif, .heic, and .avif [installed]
    libjxl: Load .jxl
    librsvg: Load .svg, .svgz, and .svg.gz [pending]
    webp-pixbuf-loader: Load .webp
installing libdatrie...
installing libthai...
installing libxft...
installing pango...
installing librsvg...
installing dbus...
installing libdaemon...
installing avahi...
Optional dependencies for avahi
    gtk3: avahi-discover, avahi-discover-standalone, bshell, bssh, bvnc
    libevent: libevent bindings [installed]
    nss-mdns: NSS support for mDNS
    python-dbus: avahi-bookmarks, avahi-discover
    python-gobject: avahi-bookmarks, avahi-discover
    python-twisted: avahi-bookmarks
    qt5-base: qt5 bindings
installing libcups...
installing jbig2dec...
installing libpaper...
installing ijs...
installing libidn...
installing poppler-data...
installing ghostscript...
Optional dependencies for ghostscript
    gtk3: needed for gsx
installing netpbm...
installing gts...
installing gsfonts...
installing graphviz...
Warning: Could not load "/usr/lib/graphviz/libgvplugin_gdk.so.6" - It was found, so perhaps one of its dependents was not. Try ldd.
Warning: Could not load "/usr/lib/graphviz/libgvplugin_gtk.so.6" - It was found, so perhaps one of its dependents was not. Try ldd.
Optional dependencies for graphviz
    mono: sharp bindings
    guile: guile bindings [installed]
    lua: lua bindings
    ocaml: ocaml bindings
    perl: perl bindings [installed]
    python: python bindings [installed]
    r: r bindings
    tcl: tcl bindings
    qt6-base: gvedit
    gtk2: gtk output plugin
    xterm: vimdot
installing python-pydot...
installing libyaml...
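A hedged aside on the graphviz "Could not load ... Try ldd." warnings above: the sketch below shows one way to follow that hint. The plugin path is copied from the warning text; it usually will not exist outside the build chroot, and the script is purely diagnostic, not part of the build.

```shell
# Hypothetical follow-up to the "Try ldd" hint: ldd lists a shared
# object's library dependencies and marks unresolved ones "not found".
plugin=/usr/lib/graphviz/libgvplugin_gdk.so.6
if [ -e "$plugin" ]; then
    # Show only the unresolved dependencies, if any.
    ldd "$plugin" | grep 'not found' || echo "all plugin dependencies resolved"
else
    echo "plugin not installed on this machine"
fi
```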
installing python-yaml...
:: Running post-transaction hooks...
(1/7) Updating the MIME type database...
(2/7) Updating fontconfig configuration...
(3/7) Reloading system bus configuration...
call to execv failed (No such file or directory)
error: command failed to execute correctly
(4/7) Warn about old perl modules
(5/7) Updating fontconfig cache...
(6/7) Probing GDK-Pixbuf loader modules...
(7/7) Updating the info directory file...
==> Retrieving sources...
  -> Found networkx-3.1.tar.gz
==> WARNING: Skipping all source file integrity checks.
==> Extracting sources...
  -> Extracting networkx-3.1.tar.gz with bsdtar
==> Starting build()...
running build
running build_py
creating build
creating build/lib
creating build/lib/networkx
copying networkx/__init__.py -> build/lib/networkx
copying networkx/conftest.py -> build/lib/networkx
copying networkx/convert.py -> build/lib/networkx
copying networkx/convert_matrix.py -> build/lib/networkx
copying networkx/exception.py -> build/lib/networkx
copying networkx/lazy_imports.py -> build/lib/networkx
copying networkx/relabel.py -> build/lib/networkx
creating build/lib/networkx/algorithms
copying networkx/algorithms/__init__.py -> build/lib/networkx/algorithms
copying networkx/algorithms/asteroidal.py -> build/lib/networkx/algorithms
copying networkx/algorithms/boundary.py -> build/lib/networkx/algorithms
copying networkx/algorithms/bridges.py -> build/lib/networkx/algorithms
copying networkx/algorithms/chains.py -> build/lib/networkx/algorithms
copying networkx/algorithms/chordal.py -> build/lib/networkx/algorithms
copying networkx/algorithms/clique.py -> build/lib/networkx/algorithms
copying networkx/algorithms/cluster.py -> build/lib/networkx/algorithms
copying networkx/algorithms/communicability_alg.py -> build/lib/networkx/algorithms
copying networkx/algorithms/core.py -> build/lib/networkx/algorithms
copying networkx/algorithms/covering.py -> build/lib/networkx/algorithms
copying networkx/algorithms/cuts.py ->
build/lib/networkx/algorithms copying networkx/algorithms/cycles.py -> build/lib/networkx/algorithms copying networkx/algorithms/d_separation.py -> build/lib/networkx/algorithms copying networkx/algorithms/dag.py -> build/lib/networkx/algorithms copying networkx/algorithms/distance_measures.py -> build/lib/networkx/algorithms copying networkx/algorithms/distance_regular.py -> build/lib/networkx/algorithms copying networkx/algorithms/dominance.py -> build/lib/networkx/algorithms copying networkx/algorithms/dominating.py -> build/lib/networkx/algorithms copying networkx/algorithms/efficiency_measures.py -> build/lib/networkx/algorithms copying networkx/algorithms/euler.py -> build/lib/networkx/algorithms copying networkx/algorithms/graph_hashing.py -> build/lib/networkx/algorithms copying networkx/algorithms/graphical.py -> build/lib/networkx/algorithms copying networkx/algorithms/hierarchy.py -> build/lib/networkx/algorithms copying networkx/algorithms/hybrid.py -> build/lib/networkx/algorithms copying networkx/algorithms/isolate.py -> build/lib/networkx/algorithms copying networkx/algorithms/link_prediction.py -> build/lib/networkx/algorithms copying networkx/algorithms/lowest_common_ancestors.py -> build/lib/networkx/algorithms copying networkx/algorithms/matching.py -> build/lib/networkx/algorithms copying networkx/algorithms/mis.py -> build/lib/networkx/algorithms copying networkx/algorithms/moral.py -> build/lib/networkx/algorithms copying networkx/algorithms/node_classification.py -> build/lib/networkx/algorithms copying networkx/algorithms/non_randomness.py -> build/lib/networkx/algorithms copying networkx/algorithms/planar_drawing.py -> build/lib/networkx/algorithms copying networkx/algorithms/planarity.py -> build/lib/networkx/algorithms copying networkx/algorithms/polynomials.py -> build/lib/networkx/algorithms copying networkx/algorithms/reciprocity.py -> build/lib/networkx/algorithms copying networkx/algorithms/regular.py -> build/lib/networkx/algorithms 
copying networkx/algorithms/richclub.py -> build/lib/networkx/algorithms copying networkx/algorithms/similarity.py -> build/lib/networkx/algorithms copying networkx/algorithms/simple_paths.py -> build/lib/networkx/algorithms copying networkx/algorithms/smallworld.py -> build/lib/networkx/algorithms copying networkx/algorithms/smetric.py -> build/lib/networkx/algorithms copying networkx/algorithms/sparsifiers.py -> build/lib/networkx/algorithms copying networkx/algorithms/structuralholes.py -> build/lib/networkx/algorithms copying networkx/algorithms/summarization.py -> build/lib/networkx/algorithms copying networkx/algorithms/swap.py -> build/lib/networkx/algorithms copying networkx/algorithms/threshold.py -> build/lib/networkx/algorithms copying networkx/algorithms/tournament.py -> build/lib/networkx/algorithms copying networkx/algorithms/triads.py -> build/lib/networkx/algorithms copying networkx/algorithms/vitality.py -> build/lib/networkx/algorithms copying networkx/algorithms/voronoi.py -> build/lib/networkx/algorithms copying networkx/algorithms/wiener.py -> build/lib/networkx/algorithms creating build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/__init__.py -> build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/connectivity.py -> build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/correlation.py -> build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/mixing.py -> build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/neighbor_degree.py -> build/lib/networkx/algorithms/assortativity copying networkx/algorithms/assortativity/pairs.py -> build/lib/networkx/algorithms/assortativity creating build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/__init__.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/basic.py -> 
build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/centrality.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/cluster.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/covering.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/edgelist.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/generators.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/matching.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/matrix.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/projection.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/redundancy.py -> build/lib/networkx/algorithms/bipartite copying networkx/algorithms/bipartite/spectral.py -> build/lib/networkx/algorithms/bipartite creating build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/__init__.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/betweenness.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/betweenness_subset.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/closeness.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/current_flow_betweenness.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/current_flow_betweenness_subset.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/current_flow_closeness.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/degree_alg.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/dispersion.py -> build/lib/networkx/algorithms/centrality copying 
networkx/algorithms/centrality/eigenvector.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/flow_matrix.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/group.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/harmonic.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/katz.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/laplacian.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/load.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/percolation.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/reaching.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/second_order.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/subgraph_alg.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/trophic.py -> build/lib/networkx/algorithms/centrality copying networkx/algorithms/centrality/voterank_alg.py -> build/lib/networkx/algorithms/centrality creating build/lib/networkx/algorithms/community copying networkx/algorithms/community/__init__.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/asyn_fluid.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/centrality.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/community_utils.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/kclique.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/kernighan_lin.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/label_propagation.py -> build/lib/networkx/algorithms/community copying 
networkx/algorithms/community/louvain.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/lukes.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/modularity_max.py -> build/lib/networkx/algorithms/community copying networkx/algorithms/community/quality.py -> build/lib/networkx/algorithms/community creating build/lib/networkx/algorithms/components copying networkx/algorithms/components/__init__.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/attracting.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/biconnected.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/connected.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/semiconnected.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/strongly_connected.py -> build/lib/networkx/algorithms/components copying networkx/algorithms/components/weakly_connected.py -> build/lib/networkx/algorithms/components creating build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/__init__.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/connectivity.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/cuts.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/disjoint_paths.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/edge_augmentation.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/edge_kcomponents.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/kcomponents.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/kcutsets.py -> build/lib/networkx/algorithms/connectivity copying 
networkx/algorithms/connectivity/stoerwagner.py -> build/lib/networkx/algorithms/connectivity copying networkx/algorithms/connectivity/utils.py -> build/lib/networkx/algorithms/connectivity creating build/lib/networkx/algorithms/coloring copying networkx/algorithms/coloring/__init__.py -> build/lib/networkx/algorithms/coloring copying networkx/algorithms/coloring/equitable_coloring.py -> build/lib/networkx/algorithms/coloring copying networkx/algorithms/coloring/greedy_coloring.py -> build/lib/networkx/algorithms/coloring creating build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/__init__.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/boykovkolmogorov.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/capacityscaling.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/dinitz_alg.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/edmondskarp.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/gomory_hu.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/maxflow.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/mincost.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/networksimplex.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/preflowpush.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/shortestaugmentingpath.py -> build/lib/networkx/algorithms/flow copying networkx/algorithms/flow/utils.py -> build/lib/networkx/algorithms/flow creating build/lib/networkx/algorithms/minors copying networkx/algorithms/minors/__init__.py -> build/lib/networkx/algorithms/minors copying networkx/algorithms/minors/contraction.py -> build/lib/networkx/algorithms/minors creating build/lib/networkx/algorithms/traversal copying networkx/algorithms/traversal/__init__.py -> build/lib/networkx/algorithms/traversal copying 
networkx/algorithms/traversal/beamsearch.py -> build/lib/networkx/algorithms/traversal
copying networkx/algorithms/traversal/breadth_first_search.py -> build/lib/networkx/algorithms/traversal
copying networkx/algorithms/traversal/depth_first_search.py -> build/lib/networkx/algorithms/traversal
copying networkx/algorithms/traversal/edgebfs.py -> build/lib/networkx/algorithms/traversal
copying networkx/algorithms/traversal/edgedfs.py -> build/lib/networkx/algorithms/traversal
creating build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/__init__.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/ismags.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/isomorph.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/isomorphvf2.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/matchhelpers.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/temporalisomorphvf2.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/tree_isomorphism.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/vf2pp.py -> build/lib/networkx/algorithms/isomorphism
copying networkx/algorithms/isomorphism/vf2userfunc.py -> build/lib/networkx/algorithms/isomorphism
creating build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/__init__.py -> build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/astar.py -> build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/dense.py -> build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/generic.py -> build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/unweighted.py -> build/lib/networkx/algorithms/shortest_paths
copying networkx/algorithms/shortest_paths/weighted.py -> build/lib/networkx/algorithms/shortest_paths
creating build/lib/networkx/algorithms/link_analysis
copying networkx/algorithms/link_analysis/__init__.py -> build/lib/networkx/algorithms/link_analysis
copying networkx/algorithms/link_analysis/hits_alg.py -> build/lib/networkx/algorithms/link_analysis
copying networkx/algorithms/link_analysis/pagerank_alg.py -> build/lib/networkx/algorithms/link_analysis
creating build/lib/networkx/algorithms/operators
copying networkx/algorithms/operators/__init__.py -> build/lib/networkx/algorithms/operators
copying networkx/algorithms/operators/all.py -> build/lib/networkx/algorithms/operators
copying networkx/algorithms/operators/binary.py -> build/lib/networkx/algorithms/operators
copying networkx/algorithms/operators/product.py -> build/lib/networkx/algorithms/operators
copying networkx/algorithms/operators/unary.py -> build/lib/networkx/algorithms/operators
creating build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/__init__.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/clique.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/clustering_coefficient.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/connectivity.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/distance_measures.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/dominating_set.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/kcomponents.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/matching.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/maxcut.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/ramsey.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/steinertree.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/traveling_salesman.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/treewidth.py -> build/lib/networkx/algorithms/approximation
copying networkx/algorithms/approximation/vertex_cover.py -> build/lib/networkx/algorithms/approximation
creating build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/__init__.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/branchings.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/coding.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/decomposition.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/mst.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/operations.py -> build/lib/networkx/algorithms/tree
copying networkx/algorithms/tree/recognition.py -> build/lib/networkx/algorithms/tree
creating build/lib/networkx/classes
copying networkx/classes/__init__.py -> build/lib/networkx/classes
copying networkx/classes/backends.py -> build/lib/networkx/classes
copying networkx/classes/coreviews.py -> build/lib/networkx/classes
copying networkx/classes/digraph.py -> build/lib/networkx/classes
copying networkx/classes/filters.py -> build/lib/networkx/classes
copying networkx/classes/function.py -> build/lib/networkx/classes
copying networkx/classes/graph.py -> build/lib/networkx/classes
copying networkx/classes/graphviews.py -> build/lib/networkx/classes
copying networkx/classes/multidigraph.py -> build/lib/networkx/classes
copying networkx/classes/multigraph.py -> build/lib/networkx/classes
copying networkx/classes/reportviews.py -> build/lib/networkx/classes
creating build/lib/networkx/generators
copying networkx/generators/__init__.py -> build/lib/networkx/generators
copying networkx/generators/atlas.py -> build/lib/networkx/generators
copying networkx/generators/classic.py -> build/lib/networkx/generators
copying networkx/generators/cographs.py -> build/lib/networkx/generators
copying networkx/generators/community.py -> build/lib/networkx/generators
copying networkx/generators/degree_seq.py -> build/lib/networkx/generators
copying networkx/generators/directed.py -> build/lib/networkx/generators
copying networkx/generators/duplication.py -> build/lib/networkx/generators
copying networkx/generators/ego.py -> build/lib/networkx/generators
copying networkx/generators/expanders.py -> build/lib/networkx/generators
copying networkx/generators/geometric.py -> build/lib/networkx/generators
copying networkx/generators/harary_graph.py -> build/lib/networkx/generators
copying networkx/generators/internet_as_graphs.py -> build/lib/networkx/generators
copying networkx/generators/intersection.py -> build/lib/networkx/generators
copying networkx/generators/interval_graph.py -> build/lib/networkx/generators
copying networkx/generators/joint_degree_seq.py -> build/lib/networkx/generators
copying networkx/generators/lattice.py -> build/lib/networkx/generators
copying networkx/generators/line.py -> build/lib/networkx/generators
copying networkx/generators/mycielski.py -> build/lib/networkx/generators
copying networkx/generators/nonisomorphic_trees.py -> build/lib/networkx/generators
copying networkx/generators/random_clustered.py -> build/lib/networkx/generators
copying networkx/generators/random_graphs.py -> build/lib/networkx/generators
copying networkx/generators/small.py -> build/lib/networkx/generators
copying networkx/generators/social.py -> build/lib/networkx/generators
copying networkx/generators/spectral_graph_forge.py -> build/lib/networkx/generators
copying networkx/generators/stochastic.py -> build/lib/networkx/generators
copying networkx/generators/sudoku.py -> build/lib/networkx/generators
copying networkx/generators/trees.py -> build/lib/networkx/generators
copying networkx/generators/triads.py -> build/lib/networkx/generators
creating build/lib/networkx/drawing
copying networkx/drawing/__init__.py -> build/lib/networkx/drawing
copying networkx/drawing/layout.py -> build/lib/networkx/drawing
copying networkx/drawing/nx_agraph.py -> build/lib/networkx/drawing
copying networkx/drawing/nx_latex.py -> build/lib/networkx/drawing
copying networkx/drawing/nx_pydot.py -> build/lib/networkx/drawing
copying networkx/drawing/nx_pylab.py -> build/lib/networkx/drawing
creating build/lib/networkx/linalg
copying networkx/linalg/__init__.py -> build/lib/networkx/linalg
copying networkx/linalg/algebraicconnectivity.py -> build/lib/networkx/linalg
copying networkx/linalg/attrmatrix.py -> build/lib/networkx/linalg
copying networkx/linalg/bethehessianmatrix.py -> build/lib/networkx/linalg
copying networkx/linalg/graphmatrix.py -> build/lib/networkx/linalg
copying networkx/linalg/laplacianmatrix.py -> build/lib/networkx/linalg
copying networkx/linalg/modularitymatrix.py -> build/lib/networkx/linalg
copying networkx/linalg/spectrum.py -> build/lib/networkx/linalg
creating build/lib/networkx/readwrite
copying networkx/readwrite/__init__.py -> build/lib/networkx/readwrite
copying networkx/readwrite/adjlist.py -> build/lib/networkx/readwrite
copying networkx/readwrite/edgelist.py -> build/lib/networkx/readwrite
copying networkx/readwrite/gexf.py -> build/lib/networkx/readwrite
copying networkx/readwrite/gml.py -> build/lib/networkx/readwrite
copying networkx/readwrite/graph6.py -> build/lib/networkx/readwrite
copying networkx/readwrite/graphml.py -> build/lib/networkx/readwrite
copying networkx/readwrite/leda.py -> build/lib/networkx/readwrite
copying networkx/readwrite/multiline_adjlist.py -> build/lib/networkx/readwrite
copying networkx/readwrite/p2g.py -> build/lib/networkx/readwrite
copying networkx/readwrite/pajek.py -> build/lib/networkx/readwrite
copying networkx/readwrite/sparse6.py -> build/lib/networkx/readwrite
copying networkx/readwrite/text.py -> build/lib/networkx/readwrite
creating build/lib/networkx/readwrite/json_graph
copying networkx/readwrite/json_graph/__init__.py -> build/lib/networkx/readwrite/json_graph
copying networkx/readwrite/json_graph/adjacency.py -> build/lib/networkx/readwrite/json_graph
copying networkx/readwrite/json_graph/cytoscape.py -> build/lib/networkx/readwrite/json_graph
copying networkx/readwrite/json_graph/node_link.py -> build/lib/networkx/readwrite/json_graph
copying networkx/readwrite/json_graph/tree.py -> build/lib/networkx/readwrite/json_graph
creating build/lib/networkx/tests
copying networkx/tests/__init__.py -> build/lib/networkx/tests
copying networkx/tests/test_all_random_functions.py -> build/lib/networkx/tests
copying networkx/tests/test_convert.py -> build/lib/networkx/tests
copying networkx/tests/test_convert_numpy.py -> build/lib/networkx/tests
copying networkx/tests/test_convert_pandas.py -> build/lib/networkx/tests
copying networkx/tests/test_convert_scipy.py -> build/lib/networkx/tests
copying networkx/tests/test_exceptions.py -> build/lib/networkx/tests
copying networkx/tests/test_import.py -> build/lib/networkx/tests
copying networkx/tests/test_lazy_imports.py -> build/lib/networkx/tests
copying networkx/tests/test_relabel.py -> build/lib/networkx/tests
creating build/lib/networkx/utils
copying networkx/utils/__init__.py -> build/lib/networkx/utils
copying networkx/utils/decorators.py -> build/lib/networkx/utils
copying networkx/utils/heaps.py -> build/lib/networkx/utils
copying networkx/utils/mapped_queue.py -> build/lib/networkx/utils
copying networkx/utils/misc.py -> build/lib/networkx/utils
copying networkx/utils/random_sequence.py -> build/lib/networkx/utils
copying networkx/utils/rcm.py -> build/lib/networkx/utils
copying networkx/utils/union_find.py -> build/lib/networkx/utils
creating build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/__init__.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_asteroidal.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_boundary.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_bridges.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_chains.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_chordal.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_clique.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_cluster.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_communicability.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_core.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_covering.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_cuts.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_cycles.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_d_separation.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_dag.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_distance_measures.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_distance_regular.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_dominance.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_dominating.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_efficiency.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_euler.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_graph_hashing.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_graphical.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_hierarchy.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_hybrid.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_isolate.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_link_prediction.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_lowest_common_ancestors.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_matching.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_max_weight_clique.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_mis.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_moral.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_node_classification.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_non_randomness.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_planar_drawing.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_planarity.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_polynomials.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_reciprocity.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_regular.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_richclub.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_similarity.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_simple_paths.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_smallworld.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_smetric.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_sparsifiers.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_structuralholes.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_summarization.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_swap.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_threshold.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_tournament.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_triads.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_vitality.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_voronoi.py -> build/lib/networkx/algorithms/tests
copying networkx/algorithms/tests/test_wiener.py -> build/lib/networkx/algorithms/tests
creating build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/__init__.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/base_test.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/test_connectivity.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/test_correlation.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/test_mixing.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/test_neighbor_degree.py -> build/lib/networkx/algorithms/assortativity/tests
copying networkx/algorithms/assortativity/tests/test_pairs.py -> build/lib/networkx/algorithms/assortativity/tests
creating build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/__init__.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_basic.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_centrality.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_cluster.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_covering.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_edgelist.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_generators.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_matching.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_matrix.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_project.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_redundancy.py -> build/lib/networkx/algorithms/bipartite/tests
copying networkx/algorithms/bipartite/tests/test_spectral_bipartivity.py -> build/lib/networkx/algorithms/bipartite/tests
creating build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/__init__.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_betweenness_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_closeness_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_current_flow_closeness.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_degree_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_dispersion.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_eigenvector_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_group.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_harmonic_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_katz_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_laplacian_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_load_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_percolation_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_reaching.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_second_order_centrality.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_subgraph.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_trophic.py -> build/lib/networkx/algorithms/centrality/tests
copying networkx/algorithms/centrality/tests/test_voterank.py -> build/lib/networkx/algorithms/centrality/tests
creating build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/__init__.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_asyn_fluid.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_centrality.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_kclique.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_kernighan_lin.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_label_propagation.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_louvain.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_lukes.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_modularity_max.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_quality.py -> build/lib/networkx/algorithms/community/tests
copying networkx/algorithms/community/tests/test_utils.py -> build/lib/networkx/algorithms/community/tests
creating build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/__init__.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_attracting.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_biconnected.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_connected.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_semiconnected.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_strongly_connected.py -> build/lib/networkx/algorithms/components/tests
copying networkx/algorithms/components/tests/test_weakly_connected.py -> build/lib/networkx/algorithms/components/tests
creating build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/__init__.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_connectivity.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_cuts.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_disjoint_paths.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_edge_augmentation.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_edge_kcomponents.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_kcomponents.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_kcutsets.py -> build/lib/networkx/algorithms/connectivity/tests
copying networkx/algorithms/connectivity/tests/test_stoer_wagner.py -> build/lib/networkx/algorithms/connectivity/tests
creating build/lib/networkx/algorithms/coloring/tests
copying networkx/algorithms/coloring/tests/__init__.py -> build/lib/networkx/algorithms/coloring/tests
copying networkx/algorithms/coloring/tests/test_coloring.py -> build/lib/networkx/algorithms/coloring/tests
creating build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/__init__.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/test_gomory_hu.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/test_maxflow.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/test_maxflow_large_graph.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/test_mincost.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/test_networksimplex.py -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/gl1.gpickle.bz2 -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/gw1.gpickle.bz2 -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/netgen-2.gpickle.bz2 -> build/lib/networkx/algorithms/flow/tests
copying networkx/algorithms/flow/tests/wlm3.gpickle.bz2 -> build/lib/networkx/algorithms/flow/tests
creating build/lib/networkx/algorithms/minors/tests
copying networkx/algorithms/minors/tests/test_contraction.py -> build/lib/networkx/algorithms/minors/tests
creating build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/__init__.py -> build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/test_beamsearch.py -> build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/test_bfs.py -> build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/test_dfs.py -> build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/test_edgebfs.py -> build/lib/networkx/algorithms/traversal/tests
copying networkx/algorithms/traversal/tests/test_edgedfs.py -> build/lib/networkx/algorithms/traversal/tests
creating build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/__init__.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_ismags.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_isomorphism.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_isomorphvf2.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_match_helpers.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_temporalisomorphvf2.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_tree_isomorphism.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_vf2pp.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_vf2pp_helpers.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/test_vf2userfunc.py -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/iso_r01_s80.A99 -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/iso_r01_s80.B99 -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/si2_b06_m200.A99 -> build/lib/networkx/algorithms/isomorphism/tests
copying networkx/algorithms/isomorphism/tests/si2_b06_m200.B99 -> build/lib/networkx/algorithms/isomorphism/tests
creating build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/__init__.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_astar.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_dense.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_dense_numpy.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_generic.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_unweighted.py -> build/lib/networkx/algorithms/shortest_paths/tests
copying networkx/algorithms/shortest_paths/tests/test_weighted.py -> build/lib/networkx/algorithms/shortest_paths/tests
creating build/lib/networkx/algorithms/link_analysis/tests
copying networkx/algorithms/link_analysis/tests/__init__.py -> build/lib/networkx/algorithms/link_analysis/tests
copying networkx/algorithms/link_analysis/tests/test_hits.py -> build/lib/networkx/algorithms/link_analysis/tests
copying networkx/algorithms/link_analysis/tests/test_pagerank.py -> build/lib/networkx/algorithms/link_analysis/tests
creating build/lib/networkx/algorithms/operators/tests
copying networkx/algorithms/operators/tests/__init__.py -> build/lib/networkx/algorithms/operators/tests
copying networkx/algorithms/operators/tests/test_all.py -> build/lib/networkx/algorithms/operators/tests
copying networkx/algorithms/operators/tests/test_binary.py -> build/lib/networkx/algorithms/operators/tests
copying networkx/algorithms/operators/tests/test_product.py -> build/lib/networkx/algorithms/operators/tests
copying networkx/algorithms/operators/tests/test_unary.py -> build/lib/networkx/algorithms/operators/tests
creating build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/__init__.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_approx_clust_coeff.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_clique.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_connectivity.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_distance_measures.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_dominating_set.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_kcomponents.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_matching.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_maxcut.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_ramsey.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_steinertree.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_traveling_salesman.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_treewidth.py -> build/lib/networkx/algorithms/approximation/tests
copying networkx/algorithms/approximation/tests/test_vertex_cover.py -> build/lib/networkx/algorithms/approximation/tests
creating build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/__init__.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_branchings.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_coding.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_decomposition.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_mst.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_operations.py -> build/lib/networkx/algorithms/tree/tests
copying networkx/algorithms/tree/tests/test_recognition.py -> build/lib/networkx/algorithms/tree/tests
creating build/lib/networkx/classes/tests
copying networkx/classes/tests/__init__.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/dispatch_interface.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/historical_tests.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_backends.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_coreviews.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_digraph.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_digraph_historical.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_filters.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_function.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_graph.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_graph_historical.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_graphviews.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_multidigraph.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_multigraph.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_reportviews.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_special.py -> build/lib/networkx/classes/tests
copying networkx/classes/tests/test_subgraphviews.py -> build/lib/networkx/classes/tests
creating build/lib/networkx/generators/tests
copying networkx/generators/tests/__init__.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_atlas.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_classic.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_cographs.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_community.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_degree_seq.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_directed.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_duplication.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_ego.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_expanders.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_geometric.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_harary_graph.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_internet_as_graphs.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_intersection.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_interval_graph.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_joint_degree_seq.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_lattice.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_line.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_mycielski.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_nonisomorphic_trees.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_random_clustered.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_random_graphs.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_small.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_spectral_graph_forge.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_stochastic.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_sudoku.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_trees.py -> build/lib/networkx/generators/tests
copying networkx/generators/tests/test_triads.py -> build/lib/networkx/generators/tests
copying networkx/generators/atlas.dat.gz -> build/lib/networkx/generators
creating build/lib/networkx/drawing/tests
copying networkx/drawing/tests/__init__.py -> build/lib/networkx/drawing/tests
copying networkx/drawing/tests/test_agraph.py -> build/lib/networkx/drawing/tests
copying networkx/drawing/tests/test_latex.py -> build/lib/networkx/drawing/tests
copying networkx/drawing/tests/test_layout.py -> build/lib/networkx/drawing/tests
copying networkx/drawing/tests/test_pydot.py -> build/lib/networkx/drawing/tests
copying networkx/drawing/tests/test_pylab.py -> build/lib/networkx/drawing/tests
creating build/lib/networkx/drawing/tests/baseline
copying networkx/drawing/tests/baseline/test_house_with_colors.png -> build/lib/networkx/drawing/tests/baseline
creating build/lib/networkx/linalg/tests
copying networkx/linalg/tests/__init__.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_algebraic_connectivity.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_attrmatrix.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_bethehessian.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_graphmatrix.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_laplacian.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_modularity.py -> build/lib/networkx/linalg/tests
copying networkx/linalg/tests/test_spectrum.py -> build/lib/networkx/linalg/tests
creating build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/__init__.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_adjlist.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_edgelist.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_gexf.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_gml.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_graph6.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_graphml.py -> build/lib/networkx/readwrite/tests
copying networkx/readwrite/tests/test_leda.py ->
build/lib/networkx/readwrite/tests copying networkx/readwrite/tests/test_p2g.py -> build/lib/networkx/readwrite/tests copying networkx/readwrite/tests/test_pajek.py -> build/lib/networkx/readwrite/tests copying networkx/readwrite/tests/test_sparse6.py -> build/lib/networkx/readwrite/tests copying networkx/readwrite/tests/test_text.py -> build/lib/networkx/readwrite/tests creating build/lib/networkx/readwrite/json_graph/tests copying networkx/readwrite/json_graph/tests/__init__.py -> build/lib/networkx/readwrite/json_graph/tests copying networkx/readwrite/json_graph/tests/test_adjacency.py -> build/lib/networkx/readwrite/json_graph/tests copying networkx/readwrite/json_graph/tests/test_cytoscape.py -> build/lib/networkx/readwrite/json_graph/tests copying networkx/readwrite/json_graph/tests/test_node_link.py -> build/lib/networkx/readwrite/json_graph/tests copying networkx/readwrite/json_graph/tests/test_tree.py -> build/lib/networkx/readwrite/json_graph/tests creating build/lib/networkx/utils/tests copying networkx/utils/tests/__init__.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test__init.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_decorators.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_heaps.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_mapped_queue.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_misc.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_random_sequence.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_rcm.py -> build/lib/networkx/utils/tests copying networkx/utils/tests/test_unionfind.py -> build/lib/networkx/utils/tests ==> Starting check()... running pytest /usr/lib/python3.11/site-packages/setuptools/command/test.py:194: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. !! 
******************************************************************************** Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`. ******************************************************************************** !! ir_d = dist.fetch_build_eggs(dist.install_requires) WARNING: The wheel package is not available. /usr/lib/python3.11/site-packages/setuptools/command/test.py:195: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. !! ******************************************************************************** Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`. ******************************************************************************** !! tr_d = dist.fetch_build_eggs(dist.tests_require or []) WARNING: The wheel package is not available. /usr/lib/python3.11/site-packages/setuptools/command/test.py:196: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. !! ******************************************************************************** Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`. ******************************************************************************** !! er_d = dist.fetch_build_eggs( WARNING: The wheel package is not available. 
running egg_info
creating networkx.egg-info
writing networkx.egg-info/PKG-INFO
writing dependency_links to networkx.egg-info/dependency_links.txt
writing entry points to networkx.egg-info/entry_points.txt
writing requirements to networkx.egg-info/requires.txt
writing top-level names to networkx.egg-info/top_level.txt
writing manifest file 'networkx.egg-info/SOURCES.txt'
reading manifest file 'networkx.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching 'networkx/*/tests/*.txt'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc/build'
no previously-included directories found matching 'doc/auto_examples'
no previously-included directories found matching 'doc/modules'
no previously-included directories found matching 'doc/reference/generated'
no previously-included directories found matching 'doc/reference/algorithms/generated'
no previously-included directories found matching 'doc/reference/classes/generated'
no previously-included directories found matching 'doc/reference/readwrite/generated'
adding license file 'LICENSE.txt'
writing manifest file 'networkx.egg-info/SOURCES.txt'
running build_ext
collected 5106 items / 2 skipped

networkx/algorithms/approximation/tests/test_approx_clust_coeff.py ..... [ 0%]
. [ 0%]
networkx/algorithms/approximation/tests/test_clique.py ........ [ 0%]
networkx/algorithms/approximation/tests/test_connectivity.py ........... [ 0%]
....... [ 0%]
networkx/algorithms/approximation/tests/test_distance_measures.py ...... [ 0%]
.. [ 0%]
networkx/algorithms/approximation/tests/test_dominating_set.py .... [ 0%]
networkx/algorithms/approximation/tests/test_kcomponents.py ............ [ 1%]
.... [ 1%]
networkx/algorithms/approximation/tests/test_matching.py . [ 1%]
networkx/algorithms/approximation/tests/test_maxcut.py ..... [ 1%]
networkx/algorithms/approximation/tests/test_ramsey.py . [ 1%]
networkx/algorithms/approximation/tests/test_steinertree.py .... [ 1%]
networkx/algorithms/approximation/tests/test_traveling_salesman.py ..... [ 1%]
....................................s. [ 2%]
networkx/algorithms/approximation/tests/test_treewidth.py .............. [ 2%]
[ 2%]
networkx/algorithms/approximation/tests/test_vertex_cover.py .... [ 2%]
networkx/algorithms/assortativity/tests/test_connectivity.py .......... [ 2%]
networkx/algorithms/assortativity/tests/test_correlation.py ............ [ 3%]
....... [ 3%]
networkx/algorithms/assortativity/tests/test_mixing.py ................. [ 3%]
.. [ 3%]
networkx/algorithms/assortativity/tests/test_neighbor_degree.py ...... [ 3%]
networkx/algorithms/assortativity/tests/test_pairs.py ........... [ 3%]
networkx/algorithms/bipartite/tests/test_basic.py ............... [ 4%]
networkx/algorithms/bipartite/tests/test_centrality.py ....... [ 4%]
networkx/algorithms/bipartite/tests/test_cluster.py ......... [ 4%]
networkx/algorithms/bipartite/tests/test_covering.py .... [ 4%]
networkx/algorithms/bipartite/tests/test_edgelist.py ............... [ 4%]
networkx/algorithms/bipartite/tests/test_generators.py .......... [ 5%]
networkx/algorithms/bipartite/tests/test_matching.py ................... [ 5%]
. [ 5%]
networkx/algorithms/bipartite/tests/test_matrix.py ........... [ 5%]
networkx/algorithms/bipartite/tests/test_project.py .................. [ 5%]
networkx/algorithms/bipartite/tests/test_redundancy.py ... [ 6%]
networkx/algorithms/bipartite/tests/test_spectral_bipartivity.py ... [ 6%]
networkx/algorithms/centrality/tests/test_betweenness_centrality.py .... [ 6%]
..................................... [ 6%]
networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py . [ 6%]
..................... [ 7%]
networkx/algorithms/centrality/tests/test_closeness_centrality.py ...... [ 7%]
....... [ 7%]
networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py . [ 7%]
F...F.....F........ [ 7%]
networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py F [ 8%]
FFFFFFFF [ 8%]
networkx/algorithms/centrality/tests/test_current_flow_closeness.py FFF [ 8%]
networkx/algorithms/centrality/tests/test_degree_centrality.py ....... [ 8%]
networkx/algorithms/centrality/tests/test_dispersion.py .... [ 8%]
networkx/algorithms/centrality/tests/test_eigenvector_centrality.py .... [ 8%]
......... [ 8%]
networkx/algorithms/centrality/tests/test_group.py ..................... [ 9%]
... [ 9%]
networkx/algorithms/centrality/tests/test_harmonic_centrality.py ....... [ 9%]
...... [ 9%]
networkx/algorithms/centrality/tests/test_katz_centrality.py ........... [ 9%]
............... [ 9%]
networkx/algorithms/centrality/tests/test_laplacian_centrality.py ...... [ 10%]
. [ 10%]
networkx/algorithms/centrality/tests/test_load_centrality.py ........... [ 10%]
....... [ 10%]
networkx/algorithms/centrality/tests/test_percolation_centrality.py ... [ 10%]
networkx/algorithms/centrality/tests/test_reaching.py .............. [ 10%]
networkx/algorithms/centrality/tests/test_second_order_centrality.py ... [ 10%]
.... [ 10%]
networkx/algorithms/centrality/tests/test_subgraph.py ..... [ 10%]
networkx/algorithms/centrality/tests/test_trophic.py .......... [ 11%]
networkx/algorithms/centrality/tests/test_voterank.py ...... [ 11%]
networkx/algorithms/coloring/tests/test_coloring.py ................. [ 11%]
networkx/algorithms/community/tests/test_asyn_fluid.py ..... [ 11%]
networkx/algorithms/community/tests/test_centrality.py ..... [ 11%]
networkx/algorithms/community/tests/test_kclique.py ........ [ 11%]
networkx/algorithms/community/tests/test_kernighan_lin.py ........ [ 12%]
networkx/algorithms/community/tests/test_label_propagation.py .......... [ 12%]
. [ 12%]
networkx/algorithms/community/tests/test_louvain.py .......... [ 12%]
networkx/algorithms/community/tests/test_lukes.py .... [ 12%]
networkx/algorithms/community/tests/test_modularity_max.py ............. [ 12%]
..... [ 12%]
networkx/algorithms/community/tests/test_quality.py ....... [ 13%]
networkx/algorithms/community/tests/test_utils.py .... [ 13%]
networkx/algorithms/components/tests/test_attracting.py .... [ 13%]
networkx/algorithms/components/tests/test_biconnected.py ............. [ 13%]
networkx/algorithms/components/tests/test_connected.py ......... [ 13%]
networkx/algorithms/components/tests/test_semiconnected.py ........ [ 13%]
networkx/algorithms/components/tests/test_strongly_connected.py ........ [ 14%]
...... [ 14%]
networkx/algorithms/components/tests/test_weakly_connected.py ...... [ 14%]
networkx/algorithms/connectivity/tests/test_connectivity.py ............ [ 14%]
...................... [ 14%]
networkx/algorithms/connectivity/tests/test_cuts.py .................... [ 15%]
. [ 15%]
networkx/algorithms/connectivity/tests/test_disjoint_paths.py .......... [ 15%]
........ [ 15%]
networkx/algorithms/connectivity/tests/test_edge_augmentation.py ....... [ 15%]
............. [ 16%]
networkx/algorithms/connectivity/tests/test_edge_kcomponents.py ........ [ 16%]
............. [ 16%]
networkx/algorithms/connectivity/tests/test_kcomponents.py .sss...... [ 16%]
networkx/algorithms/connectivity/tests/test_kcutsets.py s........s..... [ 16%]
networkx/algorithms/connectivity/tests/test_stoer_wagner.py ..... [ 17%]
networkx/algorithms/flow/tests/test_gomory_hu.py ....s.... [ 17%]
networkx/algorithms/flow/tests/test_maxflow.py ......................... [ 17%]
.. [ 17%]
networkx/algorithms/flow/tests/test_maxflow_large_graph.py ...s.. [ 17%]
networkx/algorithms/flow/tests/test_mincost.py ................... [ 18%]
networkx/algorithms/flow/tests/test_networksimplex.py .................. [ 18%]
.... [ 18%]
networkx/algorithms/isomorphism/tests/test_ismags.py .......... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphism.py .... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphvf2.py .............. [ 19%]
.. [ 19%]
networkx/algorithms/isomorphism/tests/test_match_helpers.py .. [ 19%]
networkx/algorithms/isomorphism/tests/test_temporalisomorphvf2.py ...... [ 19%]
...... [ 19%]
networkx/algorithms/isomorphism/tests/test_tree_isomorphism.py ..... [ 19%]
networkx/algorithms/isomorphism/tests/test_vf2pp.py .................... [ 20%]
........................ [ 20%]
networkx/algorithms/isomorphism/tests/test_vf2pp_helpers.py ............ [ 20%]
................................. [ 21%]
networkx/algorithms/isomorphism/tests/test_vf2userfunc.py .............. [ 21%]
.............. [ 21%]
networkx/algorithms/link_analysis/tests/test_hits.py ...... [ 22%]
networkx/algorithms/link_analysis/tests/test_pagerank.py ............... [ 22%]
..................................... [ 23%]
networkx/algorithms/minors/tests/test_contraction.py ................... [ 23%]
............ [ 23%]
networkx/algorithms/operators/tests/test_all.py .................. [ 24%]
networkx/algorithms/operators/tests/test_binary.py .................... [ 24%]
networkx/algorithms/operators/tests/test_product.py .................... [ 24%]
........ [ 24%]
networkx/algorithms/operators/tests/test_unary.py ... [ 25%]
networkx/algorithms/shortest_paths/tests/test_astar.py ................ [ 25%]
networkx/algorithms/shortest_paths/tests/test_dense.py ........ [ 25%]
networkx/algorithms/shortest_paths/tests/test_dense_numpy.py ....... [ 25%]
networkx/algorithms/shortest_paths/tests/test_generic.py ............... [ 25%]
........ [ 26%]
networkx/algorithms/shortest_paths/tests/test_unweighted.py ............ [ 26%]
..... [ 26%]
networkx/algorithms/shortest_paths/tests/test_weighted.py .............. [ 26%]
........................................ [ 27%]
networkx/algorithms/tests/test_asteroidal.py . [ 27%]
networkx/algorithms/tests/test_boundary.py ............. [ 27%]
networkx/algorithms/tests/test_bridges.py .......... [ 27%]
networkx/algorithms/tests/test_chains.py ..... [ 28%]
networkx/algorithms/tests/test_chordal.py .......... [ 28%]
networkx/algorithms/tests/test_clique.py ................ [ 28%]
networkx/algorithms/tests/test_cluster.py .............................. [ 29%]
............. [ 29%]
networkx/algorithms/tests/test_communicability.py .. [ 29%]
networkx/algorithms/tests/test_core.py ............... [ 29%]
networkx/algorithms/tests/test_covering.py ........... [ 29%]
networkx/algorithms/tests/test_cuts.py ................. [ 30%]
networkx/algorithms/tests/test_cycles.py ............................... [ 30%]
................. [ 31%]
networkx/algorithms/tests/test_d_separation.py .............. [ 31%]
networkx/algorithms/tests/test_dag.py .................................. [ 32%]
.......................... [ 32%]
networkx/algorithms/tests/test_distance_measures.py .................... [ 33%]
.....................FFFFF......... [ 33%]
networkx/algorithms/tests/test_distance_regular.py ....... [ 33%]
networkx/algorithms/tests/test_dominance.py ...................... [ 34%]
networkx/algorithms/tests/test_dominating.py ..... [ 34%]
networkx/algorithms/tests/test_efficiency.py ....... [ 34%]
networkx/algorithms/tests/test_euler.py .............................. [ 35%]
networkx/algorithms/tests/test_graph_hashing.py ........................ [ 35%]
[ 35%]
networkx/algorithms/tests/test_graphical.py ............. [ 35%]
networkx/algorithms/tests/test_hierarchy.py ..... [ 35%]
networkx/algorithms/tests/test_hybrid.py .. [ 36%]
networkx/algorithms/tests/test_isolate.py ... [ 36%]
networkx/algorithms/tests/test_link_prediction.py ...................... [ 36%]
................................................... [ 37%]
networkx/algorithms/tests/test_lowest_common_ancestors.py .............. [ 37%]
......................................... [ 38%]
networkx/algorithms/tests/test_matching.py ............................. [ 39%]
................... [ 39%]
networkx/algorithms/tests/test_max_weight_clique.py ..... [ 39%]
networkx/algorithms/tests/test_mis.py ....... [ 39%]
networkx/algorithms/tests/test_moral.py . [ 39%]
networkx/algorithms/tests/test_node_classification.py ............... [ 40%]
networkx/algorithms/tests/test_non_randomness.py ...... [ 40%]
networkx/algorithms/tests/test_planar_drawing.py ............ [ 40%]
networkx/algorithms/tests/test_planarity.py ............................ [ 40%]
.. [ 41%]
networkx/algorithms/tests/test_reciprocity.py ..... [ 41%]
networkx/algorithms/tests/test_regular.py ............. [ 41%]
networkx/algorithms/tests/test_richclub.py ......... [ 41%]
networkx/algorithms/tests/test_similarity.py ........................... [ 42%]
.................. [ 42%]
networkx/algorithms/tests/test_simple_paths.py ......................... [ 42%]
................................................. [ 43%]
networkx/algorithms/tests/test_smallworld.py ...... [ 43%]
networkx/algorithms/tests/test_smetric.py .. [ 44%]
networkx/algorithms/tests/test_sparsifiers.py ....... [ 44%]
networkx/algorithms/tests/test_structuralholes.py ............. [ 44%]
networkx/algorithms/tests/test_summarization.py ................. [ 44%]
networkx/algorithms/tests/test_swap.py ..................... [ 45%]
networkx/algorithms/tests/test_threshold.py .................. [ 45%]
networkx/algorithms/tests/test_tournament.py ..................... [ 45%]
networkx/algorithms/tests/test_triads.py ................ [ 46%]
networkx/algorithms/tests/test_vitality.py ...... [ 46%]
networkx/algorithms/tests/test_voronoi.py .......... [ 46%]
networkx/algorithms/tests/test_wiener.py .... [ 46%]
networkx/algorithms/traversal/tests/test_beamsearch.py ... [ 46%]
networkx/algorithms/traversal/tests/test_bfs.py ................. [ 47%]
networkx/algorithms/traversal/tests/test_dfs.py .................. [ 47%]
networkx/algorithms/traversal/tests/test_edgebfs.py ................ [ 47%]
networkx/algorithms/traversal/tests/test_edgedfs.py ............... [ 47%]
networkx/algorithms/tree/tests/test_branchings.py ...................... [ 48%]
......... [ 48%]
networkx/algorithms/tree/tests/test_coding.py .............. [ 48%]
networkx/algorithms/tree/tests/test_decomposition.py ..... [ 48%]
networkx/algorithms/tree/tests/test_mst.py ............................. [ 49%]
....................s.s [ 49%]
networkx/algorithms/tree/tests/test_operations.py ... [ 50%]
networkx/algorithms/tree/tests/test_recognition.py ..................... [ 50%]
.... [ 50%]
networkx/classes/tests/test_backends.py . [ 50%]
networkx/classes/tests/test_coreviews.py ............................... [ 51%]
......................... [ 51%]
networkx/classes/tests/test_digraph.py ................................. [ 52%]
................................................... [ 53%]
networkx/classes/tests/test_digraph_historical.py ...................... [ 53%]
.................... [ 54%]
networkx/classes/tests/test_filters.py ........... [ 54%]
networkx/classes/tests/test_function.py ................................ [ 54%]
....................................... [ 55%]
networkx/classes/tests/test_graph.py ................................... [ 56%]
............................. [ 56%]
networkx/classes/tests/test_graph_historical.py ........................ [ 57%]
.......... [ 57%]
networkx/classes/tests/test_graphviews.py .............................. [ 58%]
..... [ 58%]
networkx/classes/tests/test_multidigraph.py ............................ [ 58%]
........................................................................ [ 60%]
........................................................................ [ 61%]
............... [ 61%]
networkx/classes/tests/test_multigraph.py .............................. [ 62%]
........................................................................ [ 63%]
................................................... [ 64%]
networkx/classes/tests/test_reportviews.py ............................. [ 65%]
........................................................................ [ 66%]
........................................................................ [ 68%]
.................................................................... [ 69%]
networkx/classes/tests/test_special.py ................................. [ 70%]
........................................................................ [ 71%]
........................................................................ [ 73%]
........................................................................ [ 74%]
........................................................................ [ 75%]
........................... [ 76%]
networkx/classes/tests/test_subgraphviews.py ........................... [ 77%]
..... [ 77%]
networkx/drawing/tests/test_latex.py ...... [ 77%]
networkx/drawing/tests/test_layout.py .............................. [ 77%]
networkx/drawing/tests/test_pydot.py xxx...... [ 78%]
networkx/drawing/tests/test_pylab.py ................................... [ 78%]
.............................................................. [ 79%]
networkx/generators/tests/test_atlas.py ........ [ 80%]
networkx/generators/tests/test_classic.py .............................. [ 80%]
....... [ 80%]
networkx/generators/tests/test_cographs.py . [ 80%]
networkx/generators/tests/test_community.py ...................... [ 81%]
networkx/generators/tests/test_degree_seq.py ................... [ 81%]
networkx/generators/tests/test_directed.py ............... [ 81%]
networkx/generators/tests/test_duplication.py ....... [ 82%]
networkx/generators/tests/test_ego.py .. [ 82%]
networkx/generators/tests/test_expanders.py .......................... [ 82%]
networkx/generators/tests/test_geometric.py ........................ [ 83%]
networkx/generators/tests/test_harary_graph.py .. [ 83%]
networkx/generators/tests/test_internet_as_graphs.py ..... [ 83%]
networkx/generators/tests/test_intersection.py .... [ 83%]
networkx/generators/tests/test_interval_graph.py ........ [ 83%]
networkx/generators/tests/test_joint_degree_seq.py .... [ 83%]
networkx/generators/tests/test_lattice.py ....................... [ 83%]
networkx/generators/tests/test_line.py ................................. [ 84%]
.. [ 84%]
networkx/generators/tests/test_mycielski.py ... [ 84%]
networkx/generators/tests/test_nonisomorphic_trees.py ..... [ 84%]
networkx/generators/tests/test_random_clustered.py .... [ 84%]
networkx/generators/tests/test_random_graphs.py ........................ [ 85%]
....................................... [ 86%]
networkx/generators/tests/test_small.py ................................ [ 86%]
...... [ 86%]
networkx/generators/tests/test_spectral_graph_forge.py . [ 86%]
networkx/generators/tests/test_stochastic.py ....... [ 87%]
networkx/generators/tests/test_sudoku.py ...... [ 87%]
networkx/generators/tests/test_trees.py ......... [ 87%]
networkx/generators/tests/test_triads.py .. [ 87%]
networkx/linalg/tests/test_algebraic_connectivity.py ................... [ 87%]
.........F...F...F...F....FF..............F...F...F...F...F....FF.... [ 89%]
networkx/linalg/tests/test_attrmatrix.py ..... [ 89%]
networkx/linalg/tests/test_bethehessian.py . [ 89%]
networkx/linalg/tests/test_graphmatrix.py .... [ 89%]
networkx/linalg/tests/test_laplacian.py .... [ 89%]
networkx/linalg/tests/test_modularity.py ... [ 89%]
networkx/linalg/tests/test_spectrum.py ..... [ 89%]
networkx/readwrite/json_graph/tests/test_adjacency.py ...... [ 89%]
networkx/readwrite/json_graph/tests/test_cytoscape.py ....... [ 89%]
networkx/readwrite/json_graph/tests/test_node_link.py ............ [ 90%]
networkx/readwrite/json_graph/tests/test_tree.py ... [ 90%]
networkx/readwrite/tests/test_adjlist.py .................. [ 90%]
networkx/readwrite/tests/test_edgelist.py .......................... [ 90%]
networkx/readwrite/tests/test_gexf.py ..................... [ 91%]
networkx/readwrite/tests/test_gml.py ....................... [ 91%]
networkx/readwrite/tests/test_graph6.py ............................... [ 92%]
networkx/readwrite/tests/test_graphml.py ............................... [ 93%]
.............................. [ 93%]
networkx/readwrite/tests/test_leda.py .. [ 93%]
networkx/readwrite/tests/test_p2g.py ... [ 93%]
networkx/readwrite/tests/test_pajek.py ........ [ 93%]
networkx/readwrite/tests/test_sparse6.py ................ [ 94%]
networkx/readwrite/tests/test_text.py .......................... [ 94%]
networkx/tests/test_all_random_functions.py s [ 94%]
networkx/tests/test_convert.py ............... [ 94%]
networkx/tests/test_convert_numpy.py ................................... [ 95%]
.......... [ 95%]
networkx/tests/test_convert_pandas.py ...................... [ 96%]
networkx/tests/test_convert_scipy.py .................... [ 96%]
networkx/tests/test_exceptions.py ....... [ 96%]
networkx/tests/test_import.py .. [ 96%]
networkx/tests/test_lazy_imports.py .... [ 96%]
networkx/tests/test_relabel.py .............................. [ 97%]
networkx/utils/tests/test__init.py . [ 97%]
networkx/utils/tests/test_decorators.py ................................ [ 98%]
... [ 98%]
networkx/utils/tests/test_heaps.py .. [ 98%]
networkx/utils/tests/test_mapped_queue.py .............................. [ 98%]
................ [ 99%]
networkx/utils/tests/test_misc.py ............................... [ 99%]
networkx/utils/tests/test_random_sequence.py .... [ 99%]
networkx/utils/tests/test_rcm.py .. [ 99%]
networkx/utils/tests/test_unionfind.py ..... [100%]
[100%]
=================================== FAILURES ===================================
____________________ TestFlowBetweennessCentrality.test_K4 _____________________

self = 

    def test_K4(self):
        """Betweenness centrality: K4"""
        G = nx.complete_graph(4)
        for solver in ["full", "lu", "cg"]:
>           b = nx.current_flow_betweenness_centrality(
                G, normalized=False, solver=solver
            )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 294:4: in argmap_current_flow_betweenness_centrality_291
    ???
networkx/algorithms/centrality/current_flow_betweenness.py:227: in current_flow_betweenness_centrality
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

    def splu(A, permc_spec=None, diag_pivot_thresh=None, relax=None,
             panel_size=None, options=dict()):
        """
        Compute the LU decomposition of a sparse, square matrix.
    
        Parameters
        ----------
        A : sparse matrix
            Sparse matrix to factorize. Most efficient when provided in CSC
            format. Other formats will be converted to CSC before factorization.
        permc_spec : str, optional
            How to permute the columns of the matrix for sparsity preservation.
            (default: 'COLAMD')
    
            - ``NATURAL``: natural ordering.
            - ``MMD_ATA``: minimum degree ordering on the structure of A^T A.
            - ``MMD_AT_PLUS_A``: minimum degree ordering on the structure of A^T+A.
            - ``COLAMD``: approximate minimum degree column ordering
        diag_pivot_thresh : float, optional
            Threshold used for a diagonal entry to be an acceptable pivot.
            See SuperLU user's guide for details [1]_
        relax : int, optional
            Expert option for customizing the degree of relaxing supernodes.
            See SuperLU user's guide for details [1]_
        panel_size : int, optional
            Expert option for customizing the panel size.
            See SuperLU user's guide for details [1]_
        options : dict, optional
            Dictionary containing additional expert options to SuperLU.
            See SuperLU user guide [1]_ (section 2.4 on the 'Options' argument)
            for more details. For example, you can specify
            ``options=dict(Equil=False, IterRefine='SINGLE'))``
            to turn equilibration off and perform a single iterative refinement.
    
        Returns
        -------
        invA : scipy.sparse.linalg.SuperLU
            Object, which has a ``solve`` method.
    
        See also
        --------
        spilu : incomplete LU decomposition
    
        Notes
        -----
        This function uses the SuperLU library.
    
        References
        ----------
        .. [1] SuperLU https://portal.nersc.gov/project/sparse/superlu/
    
        Examples
        --------
        >>> import numpy as np
        >>> from scipy.sparse import csc_matrix
        >>> from scipy.sparse.linalg import splu
        >>> A = csc_matrix([[1., 0., 0.], [5., 0., 2.], [0., -1., 0.]], dtype=float)
        >>> B = splu(A)
        >>> x = np.array([1., 2., 3.], dtype=float)
        >>> B.solve(x)
        array([ 1. , -3. , -1.5])
        >>> A.dot(B.solve(x))
        array([ 1.,  2.,  3.])
        >>> B.solve(A.dot(x))
        array([ 1.,  2.,  3.])
        """
        if is_pydata_spmatrix(A):
            def csc_construct_func(*a, cls=type(A)):
                return cls(csc_matrix(*a))
            A = A.to_scipy_sparse().tocsc()
        else:
            csc_construct_func = csc_matrix
    
        if not (issparse(A) and A.format == "csc"):
            A = csc_matrix(A)
            warn('splu converted its input to CSC format',
                 SparseEfficiencyWarning)
    
        # sum duplicates for non-canonical format
        A.sum_duplicates()
        A = A._asfptype()  # upcast to a floating point format
    
        M, N = A.shape
        if (M != N):
            raise ValueError("can only factor square matrices")  # is this true?
    
        _options = dict(DiagPivotThresh=diag_pivot_thresh, ColPerm=permc_spec,
                        PanelSize=panel_size, Relax=relax)
        if options is not None:
            _options.update(options)
    
        # Ensure that no column permutations are applied
        if (_options["ColPerm"] == "NATURAL"):
            _options["SymmetricMode"] = True
    
>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_________________ TestFlowBetweennessCentrality.test_solvers2 __________________

self = 

    def test_solvers2(self):
        """Betweenness centrality: alternate solvers"""
        G = nx.complete_graph(4)
        for solver in ["full", "lu", "cg"]:
>           b = nx.current_flow_betweenness_centrality(
                G, normalized=False, solver=solver
            )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py:72: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 294:4: in argmap_current_flow_betweenness_centrality_291
    ???
networkx/algorithms/centrality/current_flow_betweenness.py:227: in current_flow_betweenness_centrality
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________ TestApproximateFlowBetweennessCentrality.test_solvers _____________

self = 

    def test_solvers(self):
        "Approximate current-flow betweenness centrality: solvers"
        G = nx.complete_graph(4)
        epsilon = 0.1
        for solver in ["full", "lu", "cg"]:
>           b = approximate_cfbc(
                G, normalized=False, solver=solver, epsilon=0.5 * epsilon
            )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py:130: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 299:5: in argmap_approximate_current_flow_betweenness_centrality_295
    ???
networkx/algorithms/centrality/current_flow_betweenness.py:115: in approximate_current_flow_betweenness_centrality
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_______________ TestFlowBetweennessCentrality.test_K4_normalized _______________

self = 

    def test_K4_normalized(self):
        """Betweenness centrality: K4"""
        G = nx.complete_graph(4)
>       b = nx.current_flow_betweenness_centrality_subset(
            G, list(G), list(G), normalized=True
        )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:17: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/utils/decorators.py:766: in func
    return argmap._lazy_compile(__wrapper)(*args, **kwargs)
compilation 311:4: in argmap_current_flow_betweenness_centrality_subset_308
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:105: in current_flow_betweenness_centrality_subset
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________________ TestFlowBetweennessCentrality.test_K4 _____________________

self = 

    def test_K4(self):
        """Betweenness centrality: K4"""
        G = nx.complete_graph(4)
>       b = nx.current_flow_betweenness_centrality_subset(
            G, list(G), list(G), normalized=True
        )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:27: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 311:4: in argmap_current_flow_betweenness_centrality_subset_308
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:105: in current_flow_betweenness_centrality_subset
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_______________ TestFlowBetweennessCentrality.test_P4_normalized _______________

self = 

    def test_P4_normalized(self):
        """Betweenness centrality: P4 normalized"""
        G = nx.path_graph(4)
>       b = nx.current_flow_betweenness_centrality_subset(
            G, list(G), list(G), normalized=True
        )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:58: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 311:4: in argmap_current_flow_betweenness_centrality_subset_308
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:105: in current_flow_betweenness_centrality_subset
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________________ TestFlowBetweennessCentrality.test_P4 _____________________

self = 

    def test_P4(self):
        """Betweenness centrality: P4"""
        G = nx.path_graph(4)
>       b = nx.current_flow_betweenness_centrality_subset(
            G, list(G), list(G), normalized=True
        )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:68: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 311:4: in argmap_current_flow_betweenness_centrality_subset_308
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:105: in current_flow_betweenness_centrality_subset
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
___________________ TestFlowBetweennessCentrality.test_star ____________________

self = 

    def test_star(self):
        """Betweenness centrality: star"""
        G = nx.Graph()
        nx.add_star(G, ["a", "b", "c", "d"])
>       b = nx.current_flow_betweenness_centrality_subset(
            G, list(G), list(G), normalized=True
        )

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:79: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 311:4: in argmap_current_flow_betweenness_centrality_subset_308
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:105: in current_flow_betweenness_centrality_subset
    for row, (s, t) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

[... splu() source listing omitted; identical to the first failure above ...]

E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_____________ TestEdgeFlowBetweennessCentrality.test_K4_normalized _____________

self = 

    def test_K4_normalized(self):
        """Betweenness centrality: K4"""
        G = nx.complete_graph(4)
>       b = edge_current_flow_subset(G, list(G), list(G), normalized=True)

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:95: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/utils/decorators.py:766: in func
    return argmap._lazy_compile(__wrapper)(*args, **kwargs)
compilation 315:4: in argmap_edge_current_flow_betweenness_centrality_subset_312
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:217: in edge_current_flow_betweenness_centrality_subset
    for row, (e) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________________ TestEdgeFlowBetweennessCentrality.test_K4 ___________________

self = 

    def test_K4(self):
        """Betweenness centrality: K4"""
        G = nx.complete_graph(4)
>       b = edge_current_flow_subset(G, list(G), list(G), normalized=False)

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:104:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 315:4: in argmap_edge_current_flow_betweenness_centrality_subset_312
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:217: in edge_current_flow_betweenness_centrality_subset
    for row, (e) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________________ TestEdgeFlowBetweennessCentrality.test_C4 ___________________

self = 

    def test_C4(self):
        """Edge betweenness centrality: C4"""
        G = nx.cycle_graph(4)
>       b = edge_current_flow_subset(G, list(G), list(G), normalized=True)

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 315:4: in argmap_edge_current_flow_betweenness_centrality_subset_312
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:217: in edge_current_flow_betweenness_centrality_subset
    for row, (e) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________________ TestEdgeFlowBetweennessCentrality.test_P4 ___________________

self = 

    def test_P4(self):
        """Edge betweenness centrality: P4"""
        G = nx.path_graph(4)
>       b = edge_current_flow_subset(G, list(G), list(G), normalized=True)

networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 315:4: in argmap_edge_current_flow_betweenness_centrality_subset_312
    ???
networkx/algorithms/centrality/current_flow_betweenness_subset.py:217: in edge_current_flow_betweenness_centrality_subset
    for row, (e) in flow_matrix_row(H, weight=weight, dtype=dtype, solver=solver):
networkx/algorithms/centrality/flow_matrix.py:18: in flow_matrix_row
    C = solvername[solver](L, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_____________________ TestFlowClosenessCentrality.test_K4 ______________________

self = 

    def test_K4(self):
        """Closeness centrality: K4"""
        G = nx.complete_graph(4)
>       b = nx.current_flow_closeness_centrality(G)

networkx/algorithms/centrality/tests/test_current_flow_closeness.py:13:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/utils/decorators.py:766: in func
    return argmap._lazy_compile(__wrapper)(*args, **kwargs)
compilation 319:4: in argmap_current_flow_closeness_centrality_316
    ???
networkx/algorithms/centrality/current_flow_closeness.py:85: in current_flow_closeness_centrality
    C2 = solvername[solver](L, width=1, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_____________________ TestFlowClosenessCentrality.test_P4 ______________________

self = 

    def test_P4(self):
        """Closeness centrality: P4"""
        G = nx.path_graph(4)
>       b = nx.current_flow_closeness_centrality(G)

networkx/algorithms/centrality/tests/test_current_flow_closeness.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 319:4: in argmap_current_flow_closeness_centrality_316
    ???
networkx/algorithms/centrality/current_flow_closeness.py:85: in current_flow_closeness_centrality
    C2 = solvername[solver](L, width=1, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________________ TestFlowClosenessCentrality.test_star _____________________

self = 

    def test_star(self):
        """Closeness centrality: star"""
        G = nx.Graph()
        nx.add_star(G, ["a", "b", "c", "d"])
>       b = nx.current_flow_closeness_centrality(G)

networkx/algorithms/centrality/tests/test_current_flow_closeness.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 319:4: in argmap_current_flow_closeness_centrality_316
    ???
networkx/algorithms/centrality/current_flow_closeness.py:85: in current_flow_closeness_centrality
    C2 = solvername[solver](L, width=1, dtype=dtype)  # initialize solver
networkx/algorithms/centrality/flow_matrix.py:49: in __init__
    self.init_solver(L)
networkx/algorithms/centrality/flow_matrix.py:100: in init_solver
    self.lusolve = sp.sparse.linalg.factorized(self.L1.tocsc())
/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:576: in factorized
    return splu(A).solve
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_______________ TestResistanceDistance.test_resistance_distance ________________

self = 

    def test_resistance_distance(self):
>       rd = nx.resistance_distance(self.G, 1, 3, "weight", True)

networkx/algorithms/tests/test_distance_measures.py:339:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/utils/decorators.py:766: in func
    return argmap._lazy_compile(__wrapper)(*args, **kwargs)
compilation 986:4: in argmap_resistance_distance_983
    ???
networkx/algorithms/distance_measures.py:745: in resistance_distance
    lu_a = sp.sparse.linalg.splu(L_a, options={"SymmetricMode": True})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________ TestResistanceDistance.test_resistance_distance_noinv _____________

self =

    def test_resistance_distance_noinv(self):
>       rd = nx.resistance_distance(self.G, 1, 3, "weight", False)

networkx/algorithms/tests/test_distance_measures.py:344:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 986:4: in argmap_resistance_distance_983
    ???
networkx/algorithms/distance_measures.py:745: in resistance_distance
    lu_a = sp.sparse.linalg.splu(L_a, options={"SymmetricMode": True})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________ TestResistanceDistance.test_resistance_distance_no_weight ___________

self =

    def test_resistance_distance_no_weight(self):
>       rd = nx.resistance_distance(self.G, 1, 3)

networkx/algorithms/tests/test_distance_measures.py:349:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 986:4: in argmap_resistance_distance_983
    ???
networkx/algorithms/distance_measures.py:745: in resistance_distance
    lu_a = sp.sparse.linalg.splu(L_a, options={"SymmetricMode": True})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________ TestResistanceDistance.test_resistance_distance_neg_weight __________

self =

    def test_resistance_distance_neg_weight(self):
        self.G[2][3]["weight"] = -4
>       rd = nx.resistance_distance(self.G, 1, 3, "weight", True)

networkx/algorithms/tests/test_distance_measures.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 986:4: in argmap_resistance_distance_983
    ???
networkx/algorithms/distance_measures.py:745: in resistance_distance
    lu_a = sp.sparse.linalg.splu(L_a, options={"SymmetricMode": True})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
____________________ TestResistanceDistance.test_multigraph ____________________

self =

    def test_multigraph(self):
        G = nx.MultiGraph()
        G.add_edge(1, 2, weight=2)
        G.add_edge(2, 3, weight=4)
        G.add_edge(3, 4, weight=1)
        G.add_edge(1, 4, weight=3)
>       rd = nx.resistance_distance(G, 1, 3, "weight", True)

networkx/algorithms/tests/test_distance_measures.py:364:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 986:4: in
argmap_resistance_distance_983
    ???
networkx/algorithms/distance_measures.py:745: in resistance_distance
    lu_a = sp.sparse.linalg.splu(L_a, options={"SymmetricMode": True})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 7 stored elements in Compressed Sparse Column format>
permc_spec = None, diag_pivot_thresh = None, relax = None, panel_size = None
options = {'SymmetricMode': True}
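The option handling visible in the `splu` source dumped above is simple enough to exercise on its own: per-keyword defaults are collected into a dict, the user's `options` dict overrides them, and `ColPerm == "NATURAL"` forces `SymmetricMode` on. A minimal standalone sketch of just that merge step (`merge_superlu_options` is a hypothetical helper name, not scipy API):

```python
# Sketch of splu's option merging, mirroring the logic shown in the
# traceback: keyword defaults first, then the user's dict wins, and
# natural column ordering implies SuperLU's symmetric mode.
def merge_superlu_options(permc_spec=None, diag_pivot_thresh=None,
                          relax=None, panel_size=None, options=None):
    _options = dict(DiagPivotThresh=diag_pivot_thresh,
                    ColPerm=permc_spec,
                    PanelSize=panel_size,
                    Relax=relax)
    if options is not None:
        _options.update(options)  # user dict overrides keyword defaults
    if _options["ColPerm"] == "NATURAL":
        _options["SymmetricMode"] = True
    return _options

merged = merge_superlu_options(permc_spec="NATURAL",
                               options={"Equil": False})
# merged now has ColPerm='NATURAL', Equil=False, SymmetricMode=True
```

Note one consequence of the update order: passing `options={"ColPerm": "NATURAL"}` also triggers `SymmetricMode`, because the check runs after the merge.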
>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_______________ TestAlgebraicConnectivity.test_path[tracemin_lu] _______________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_path(self, method):
        pytest.importorskip("scipy")
        G = nx.path_graph(8)
        A = nx.laplacian_matrix(G)
        sigma = 2 - sqrt(2 + sqrt(2))
>       ac = nx.algebraic_connectivity(G, tol=1e-12, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:152:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <8x8 sparse array of type '' with 22 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
___ TestAlgebraicConnectivity.test_problematic_graph_issue_2381[tracemin_lu] ___

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_problematic_graph_issue_2381(self, method):
        pytest.importorskip("scipy")
        G = nx.path_graph(4)
        G.add_edges_from([(4, 2), (5, 1)])
        A = nx.laplacian_matrix(G)
        sigma = 0.438447187191
>       ac = nx.algebraic_connectivity(G, tol=1e-12, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:164:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <6x6 sparse array of type '' with 16 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
______________ TestAlgebraicConnectivity.test_cycle[tracemin_lu] _______________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_cycle(self, method):
        pytest.importorskip("scipy")
        G = nx.cycle_graph(8)
        A = nx.laplacian_matrix(G)
        sigma = 2 - sqrt(2)
>       ac = nx.algebraic_connectivity(G, tol=1e-12, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <8x8 sparse array of type '' with 24 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}
>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
__________ TestAlgebraicConnectivity.test_seed_argument[tracemin_lu] ___________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_seed_argument(self, method):
        pytest.importorskip("scipy")
        G = nx.cycle_graph(8)
        A = nx.laplacian_matrix(G)
        sigma = 2 - sqrt(2)
>       ac = nx.algebraic_connectivity(G, tol=1e-12, method=method, seed=1)

networkx/linalg/tests/test_algebraic_connectivity.py:186:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <8x8 sparse array of type '' with 24 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_ TestAlgebraicConnectivity.test_buckminsterfullerene[tracemin_lu-False-0.2434017461399311-laplacian_matrix] _

self =
normalized = False, sigma = 0.2434017461399311
laplacian_fn =
method = 'tracemin_lu'

    @pytest.mark.parametrize(
        ("normalized", "sigma", "laplacian_fn"),
        (
            (False, 0.2434017461399311, nx.laplacian_matrix),
            (True, 0.08113391537997749, nx.normalized_laplacian_matrix),
        ),
    )
    @pytest.mark.parametrize("method", methods)
    def test_buckminsterfullerene(self, normalized, sigma, laplacian_fn, method):
        pytest.importorskip("scipy")
        G = nx.Graph(
            [
                (1, 10), (1, 41), (1, 59), (2, 12), (2, 42), (2, 60), (3, 6),
                (3, 43), (3, 57), (4, 8), (4, 44), (4, 58), (5, 13), (5, 56),
                (5, 57), (6, 10), (6, 31), (7, 14), (7, 56), (7, 58), (8, 12),
                (8, 32), (9, 23), (9, 53), (9, 59), (10, 15), (11, 24),
                (11, 53), (11, 60), (12, 16), (13, 14), (13, 25), (14, 26),
                (15, 27), (15, 49), (16, 28), (16, 50), (17, 18), (17, 19),
                (17, 54), (18, 20), (18, 55), (19, 23), (19, 41), (20, 24),
                (20, 42), (21, 31), (21, 33), (21, 57), (22, 32), (22, 34),
                (22, 58), (23, 24), (25, 35), (25, 43), (26, 36), (26, 44),
                (27, 51), (27, 59), (28, 52), (28, 60), (29, 33), (29, 34),
                (29, 56), (30, 51), (30, 52), (30, 53), (31, 47), (32, 48),
                (33, 45), (34, 46), (35, 36), (35, 37), (36, 38), (37, 39),
                (37, 49), (38, 40), (38, 50), (39, 40), (39, 51), (40, 52),
                (41, 47), (42, 48), (43, 49), (44, 50), (45, 46), (45, 54),
                (46, 55), (47, 54), (48, 55),
            ]
        )
        A = laplacian_fn(G)
        try:
>           assert nx.algebraic_connectivity(
                G, normalized=normalized, tol=1e-12, method=method
            ) == pytest.approx(sigma, abs=1e-7)

networkx/linalg/tests/test_algebraic_connectivity.py:297:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <60x60 sparse array of type '' with 240 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_ TestAlgebraicConnectivity.test_buckminsterfullerene[tracemin_lu-True-0.0811339153799775-normalized_laplacian_matrix] _

self =
normalized = True, sigma = 0.0811339153799775
laplacian_fn =
method = 'tracemin_lu'

    @pytest.mark.parametrize(
        ("normalized", "sigma", "laplacian_fn"),
        (
            (False, 0.2434017461399311, nx.laplacian_matrix),
            (True, 0.08113391537997749, nx.normalized_laplacian_matrix),
        ),
    )
    @pytest.mark.parametrize("method", methods)
    def test_buckminsterfullerene(self, normalized, sigma, laplacian_fn, method):
        pytest.importorskip("scipy")
        G = nx.Graph(
            [
                (1, 10), (1, 41), (1, 59), (2, 12), (2, 42), (2, 60), (3, 6),
                (3, 43), (3, 57), (4, 8), (4, 44), (4, 58), (5, 13), (5, 56),
                (5, 57), (6, 10), (6, 31), (7, 14), (7, 56), (7, 58), (8, 12),
                (8, 32), (9, 23), (9, 53), (9, 59), (10, 15), (11, 24),
                (11, 53), (11, 60), (12, 16), (13, 14), (13, 25), (14, 26),
                (15, 27), (15, 49), (16, 28), (16, 50), (17, 18), (17, 19),
                (17, 54), (18, 20), (18, 55), (19, 23), (19, 41), (20, 24),
                (20, 42), (21, 31), (21, 33), (21, 57), (22, 32), (22, 34),
                (22, 58), (23, 24), (25, 35), (25, 43), (26, 36), (26, 44),
                (27, 51), (27, 59), (28, 52), (28, 60), (29, 33), (29, 34),
                (29, 56), (30, 51), (30, 52), (30, 53), (31, 47), (32, 48),
                (33, 45), (34, 46), (35, 36), (35, 37), (36, 38), (37, 39),
                (37, 49), (38, 40), (38, 50), (39, 40), (39, 51), (40, 52),
                (41, 47), (42, 48), (43, 49), (44, 50), (45, 46), (45, 54),
                (46, 55), (47, 54), (48, 55),
            ]
        )
        A =
laplacian_fn(G)
        try:
>           assert nx.algebraic_connectivity(
                G, normalized=normalized, tol=1e-12, method=method
            ) == pytest.approx(sigma, abs=1e-7)

networkx/linalg/tests/test_algebraic_connectivity.py:297:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1469:5: in argmap_algebraic_connectivity_1465
    ???
networkx/linalg/algebraicconnectivity.py:411: in algebraic_connectivity
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <60x60 sparse array of type '' with 240 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
______________ TestSpectralOrdering.test_three_nodes[tracemin_lu] ______________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_three_nodes(self, method):
        pytest.importorskip("scipy")
        G = nx.Graph()
        G.add_weighted_edges_from([(1, 2, 1), (1, 3, 2), (2, 3, 1)], weight="spam")
>       order = nx.spectral_ordering(G, weight="spam", method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:336:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
________ TestSpectralOrdering.test_three_nodes_multigraph[tracemin_lu] _________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_three_nodes_multigraph(self, method):
        pytest.importorskip("scipy")
        G = nx.MultiDiGraph()
        G.add_weighted_edges_from([(1, 2, 1), (1, 3, 2), (2, 3, 1), (2, 3, 2)])
>       order = nx.spectral_ordering(G, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:345:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <3x3 sparse array of type '' with 9 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_________________ TestSpectralOrdering.test_path[tracemin_lu] __________________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_path(self, method):
        pytest.importorskip("scipy")
        path = list(range(10))
        np.random.shuffle(path)
        G = nx.Graph()
        nx.add_path(G, path)
>       order = nx.spectral_ordering(G, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <10x10 sparse array of type '' with 28 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_____________ TestSpectralOrdering.test_seed_argument[tracemin_lu] _____________

self =
method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_seed_argument(self, method):
        pytest.importorskip("scipy")
        path = list(range(10))
        np.random.shuffle(path)
        G = nx.Graph()
        nx.add_path(G, path)
>       order = nx.spectral_ordering(G, method=method, seed=1)

networkx/linalg/tests/test_algebraic_connectivity.py:366:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering sigma, fiedler = find_fiedler(L, x, normalized, tol, seed) networkx/linalg/algebraicconnectivity.py:274: in find_fiedler sigma, X = _tracemin_fiedler(L, X, normalized, tol, method) networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler solver = _LUSolver(A) networkx/linalg/algebraicconnectivity.py:94: in __init__ self._LU = sp.sparse.linalg.splu( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ A = <10x10 sparse array of type '' with 28 stored elements in Compressed Sparse Column format> permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None panel_size = None, options = {'Equil': True, 'SymmetricMode': True} def splu(A, permc_spec=None, diag_pivot_thresh=None, relax=None, panel_size=None, options=dict()): """ Compute the LU decomposition of a sparse, square matrix. Parameters ---------- A : sparse matrix Sparse matrix to factorize. Most efficient when provided in CSC format. Other formats will be converted to CSC before factorization. permc_spec : str, optional How to permute the columns of the matrix for sparsity preservation. (default: 'COLAMD') - ``NATURAL``: natural ordering. - ``MMD_ATA``: minimum degree ordering on the structure of A^T A. - ``MMD_AT_PLUS_A``: minimum degree ordering on the structure of A^T+A. - ``COLAMD``: approximate minimum degree column ordering diag_pivot_thresh : float, optional Threshold used for a diagonal entry to be an acceptable pivot. See SuperLU user's guide for details [1]_ relax : int, optional Expert option for customizing the degree of relaxing supernodes. See SuperLU user's guide for details [1]_ panel_size : int, optional Expert option for customizing the panel size. See SuperLU user's guide for details [1]_ options : dict, optional Dictionary containing additional expert options to SuperLU. See SuperLU user guide [1]_ (section 2.4 on the 'Options' argument) for more details. 
For example, you can specify ``options=dict(Equil=False, IterRefine='SINGLE'))`` to turn equilibration off and perform a single iterative refinement. Returns ------- invA : scipy.sparse.linalg.SuperLU Object, which has a ``solve`` method. See also -------- spilu : incomplete LU decomposition Notes ----- This function uses the SuperLU library. References ---------- .. [1] SuperLU https://portal.nersc.gov/project/sparse/superlu/ Examples -------- >>> import numpy as np >>> from scipy.sparse import csc_matrix >>> from scipy.sparse.linalg import splu >>> A = csc_matrix([[1., 0., 0.], [5., 0., 2.], [0., -1., 0.]], dtype=float) >>> B = splu(A) >>> x = np.array([1., 2., 3.], dtype=float) >>> B.solve(x) array([ 1. , -3. , -1.5]) >>> A.dot(B.solve(x)) array([ 1., 2., 3.]) >>> B.solve(A.dot(x)) array([ 1., 2., 3.]) """ if is_pydata_spmatrix(A): def csc_construct_func(*a, cls=type(A)): return cls(csc_matrix(*a)) A = A.to_scipy_sparse().tocsc() else: csc_construct_func = csc_matrix if not (issparse(A) and A.format == "csc"): A = csc_matrix(A) warn('splu converted its input to CSC format', SparseEfficiencyWarning) # sum duplicates for non-canonical format A.sum_duplicates() A = A._asfptype() # upcast to a floating point format M, N = A.shape if (M != N): raise ValueError("can only factor square matrices") # is this true? 
        _options = dict(DiagPivotThresh=diag_pivot_thresh, ColPerm=permc_spec,
                        PanelSize=panel_size, Relax=relax)
        if options is not None:
            _options.update(options)

        # Ensure that no column permutations are applied
        if (_options["ColPerm"] == "NATURAL"):
            _options["SymmetricMode"] = True

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
_____________ TestSpectralOrdering.test_disconnected[tracemin_lu] ______________

self = , method = 'tracemin_lu'

    @pytest.mark.parametrize("method", methods)
    def test_disconnected(self, method):
        pytest.importorskip("scipy")
        G = nx.Graph()
        nx.add_path(G, range(0, 10, 2))
        nx.add_path(G, range(1, 10, 2))
>       order = nx.spectral_ordering(G, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:375:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <5x5 sparse array of type '' with 13 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

    [... splu source omitted; identical to the first traceback above ...]

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
______ TestSpectralOrdering.test_cycle[tracemin_lu-False-expected_order0] ______

self = , normalized = False
expected_order = [[1, 2, 0, 3, 4, 5, ...], [8, 7, 9, 6, 5, 4, ...]]
method = 'tracemin_lu'

    @pytest.mark.parametrize(
        ("normalized", "expected_order"),
        (
            (False, [[1, 2, 0, 3, 4, 5, 6, 9, 7, 8], [8, 7, 9, 6, 5, 4, 3, 0, 2, 1]]),
            (True, [[1, 2, 3, 0, 4, 5, 9, 6, 7, 8], [8, 7, 6, 9, 5, 4, 0, 3, 2, 1]]),
        ),
    )
    @pytest.mark.parametrize("method", methods)
    def test_cycle(self, normalized, expected_order, method):
        pytest.importorskip("scipy")
        path = list(range(10))
        G = nx.Graph()
        nx.add_path(G, path, weight=5)
        G.add_edge(path[-1], path[0], weight=1)
        A = nx.laplacian_matrix(G).todense()
>       order = nx.spectral_ordering(G, normalized=normalized, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:401:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <10x10 sparse array of type '' with 30 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

    [... splu source omitted; identical to the first traceback above ...]

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
______ TestSpectralOrdering.test_cycle[tracemin_lu-True-expected_order1] _______

self = , normalized = True
expected_order = [[1, 2, 3, 0, 4, 5, ...], [8, 7, 6, 9, 5, 4, ...]]
method = 'tracemin_lu'

    @pytest.mark.parametrize(
        ("normalized", "expected_order"),
        (
            (False, [[1, 2, 0, 3, 4, 5, 6, 9, 7, 8], [8, 7, 9, 6, 5, 4, 3, 0, 2, 1]]),
            (True, [[1, 2, 3, 0, 4, 5, 9, 6, 7, 8], [8, 7, 6, 9, 5, 4, 0, 3, 2, 1]]),
        ),
    )
    @pytest.mark.parametrize("method", methods)
    def test_cycle(self, normalized, expected_order, method):
        pytest.importorskip("scipy")
        path = list(range(10))
        G = nx.Graph()
        nx.add_path(G, path, weight=5)
        G.add_edge(path[-1], path[0], weight=1)
        A = nx.laplacian_matrix(G).todense()
>       order = nx.spectral_ordering(G, normalized=normalized, method=method)

networkx/linalg/tests/test_algebraic_connectivity.py:401:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
compilation 1478:4: in argmap_spectral_ordering_1475
    ???
networkx/linalg/algebraicconnectivity.py:586: in spectral_ordering
    sigma, fiedler = find_fiedler(L, x, normalized, tol, seed)
networkx/linalg/algebraicconnectivity.py:274: in find_fiedler
    sigma, X = _tracemin_fiedler(L, X, normalized, tol, method)
networkx/linalg/algebraicconnectivity.py:231: in _tracemin_fiedler
    solver = _LUSolver(A)
networkx/linalg/algebraicconnectivity.py:94: in __init__
    self._LU = sp.sparse.linalg.splu(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

A = <10x10 sparse array of type '' with 30 stored elements in Compressed Sparse Column format>
permc_spec = 'MMD_AT_PLUS_A', diag_pivot_thresh = 0.0, relax = None
panel_size = None, options = {'Equil': True, 'SymmetricMode': True}

    [... splu source omitted; identical to the first traceback above ...]

>       return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
                              csc_construct_func=csc_construct_func,
                              ilu=False, options=_options)
E       TypeError: rowind and colptr must be of type cint

/usr/lib/python3.11/site-packages/scipy/sparse/linalg/_dsolve/linsolve.py:414: TypeError
=============================== warnings summary ===============================
networkx/drawing/tests/test_pylab.py:422
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:422: PytestUnknownMarkWarning: Unknown pytest.mark.mpl_image_compare - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
    @pytest.mark.mpl_image_compare

networkx/readwrite/tests/test_gml.py:556
  /build/python-networkx/src/networkx-networkx-3.1/networkx/readwrite/tests/test_gml.py:556: DeprecationWarning: invalid octal escape sequence '\420'
    "graph [edge [ source u'u\4200' target u'u\4200' ] "

networkx/readwrite/tests/test_gml.py:557
  /build/python-networkx/src/networkx-networkx-3.1/networkx/readwrite/tests/test_gml.py:557: DeprecationWarning: invalid octal escape sequence '\420'
    + "node [ id u'u\4200' label b ] ]"

networkx/algorithms/centrality/tests/test_katz_centrality.py: 53 warnings
  /build/python-networkx/src/networkx-networkx-3.1/networkx/algorithms/centrality/katz.py:333: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
    centrality = dict(zip(nodelist, map(float, centrality / norm)))

networkx/drawing/tests/test_pylab.py::test_edge_colormap
networkx/drawing/tests/test_pylab.py::test_multigraph_edgelist_tuples
networkx/drawing/tests/test_pylab.py::test_multigraph_edgelist_tuples
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/nx_pylab.py:304: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    draw_networkx_edges(G, pos, arrows=arrows, **edge_kwds)

networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-None-black]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-r-red]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-edge_color2-red]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-edge_color4-yellow]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-edge_color6-lime]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_undirected[edgelist1-edge_color8-blue]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:88: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-None-black]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-r-red]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-edge_color2-red]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-edge_color4-yellow]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-edge_color6-lime]
networkx/drawing/tests/test_pylab.py::test_single_edge_color_directed[edgelist1-edge_color8-blue]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:114: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_edge_color_tuple_interpretation
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:155: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_edge_color_tuple_interpretation
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:165: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_edge_width_default_value[Graph]
networkx/drawing/tests/test_pylab.py::test_edge_width_default_value[DiGraph]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:235: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(G, pos)

networkx/drawing/tests/test_pylab.py::test_edge_color_with_edge_vmin_vmax
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:295: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(G, pos, edge_color=[0, 1.0])

networkx/drawing/tests/test_pylab.py::test_edge_color_with_edge_vmin_vmax
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:298: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    drawn_edges = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_house_with_colors
  /usr/lib/python3.11/site-packages/_pytest/python.py:198: PytestReturnNotNoneWarning: Expected None, but networkx/drawing/tests/test_pylab.py::test_house_with_colors returned , which will be an error in a future version of pytest.  Did you mean to use `assert` instead of `return`?
    warnings.warn(

networkx/drawing/tests/test_pylab.py::test_draw_edges_min_source_target_margins[o]
networkx/drawing/tests/test_pylab.py::test_draw_edges_min_source_target_margins[s]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:571: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    default_patch = nx.draw_networkx_edges(G, pos, ax=ax, node_shape=node_shape)[0]

networkx/drawing/tests/test_pylab.py::test_draw_edges_min_source_target_margins[o]
networkx/drawing/tests/test_pylab.py::test_draw_edges_min_source_target_margins[s]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:575: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    padded_patch = nx.draw_networkx_edges(

networkx/drawing/tests/test_pylab.py::test_nonzero_selfloop_with_single_node
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:601: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    patch = nx.draw_networkx_edges(G, {0: (0, 0)})[0]

networkx/drawing/tests/test_pylab.py::test_nonzero_selfloop_with_single_edge_in_edgelist
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:620: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    patch = nx.draw_networkx_edges(G, pos, edgelist=[(1, 1)])[0]

networkx/drawing/tests/test_pylab.py::test_draw_networkx_edges_undirected_selfloop_colors
  /build/python-networkx/src/networkx-networkx-3.1/networkx/drawing/tests/test_pylab.py:742: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
    nx.draw_networkx_edges(G, pos, ax=ax, edgelist=edgelist, edge_color=edge_colors)

networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[lobpcg-False-expected_order0]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/linalg/algebraicconnectivity.py:309: UserWarning: Exited at iteration 10 with accuracies [0.02743716] not reaching the requested tolerance 1e-08. Use iteration 11 instead with accuracy 0.027437158685215082.
    sigma, X = sp.sparse.linalg.lobpcg(

networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[lobpcg-False-expected_order0]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/linalg/algebraicconnectivity.py:309: UserWarning: Exited postprocessing with accuracies [0.02743716] not reaching the requested tolerance 1e-08.
    sigma, X = sp.sparse.linalg.lobpcg(

networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[lobpcg-True-expected_order1]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/linalg/algebraicconnectivity.py:309: UserWarning: Exited at iteration 10 with accuracies [0.00056623] not reaching the requested tolerance 1e-08. Use iteration 11 instead with accuracy 0.0005662307712154435.
    sigma, X = sp.sparse.linalg.lobpcg(

networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[lobpcg-True-expected_order1]
  /build/python-networkx/src/networkx-networkx-3.1/networkx/linalg/algebraicconnectivity.py:309: UserWarning: Exited postprocessing with accuracies [0.00056623] not reaching the requested tolerance 1e-08.
    sigma, X = sp.sparse.linalg.lobpcg(

networkx/tests/test_convert_pandas.py::TestConvertPandas::test_exceptions
networkx/tests/test_convert_pandas.py::TestConvertPandas::test_exceptions
  /usr/lib/python3.11/site-packages/pandas/core/dtypes/cast.py:1641: DeprecationWarning: np.find_common_type is deprecated. Please use `np.result_type` or `np.promote_types`.
  See https://numpy.org/devdocs/release/1.25.0-notes.html and the docs for more information. (Deprecated NumPy 1.25)
    return np.find_common_type(types, [])

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py::TestFlowBetweennessCentrality::test_K4
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py::TestFlowBetweennessCentrality::test_solvers2
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py::TestApproximateFlowBetweennessCentrality::test_solvers
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestFlowBetweennessCentrality::test_K4_normalized
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestFlowBetweennessCentrality::test_K4
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestFlowBetweennessCentrality::test_P4_normalized
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestFlowBetweennessCentrality::test_P4
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestFlowBetweennessCentrality::test_star
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestEdgeFlowBetweennessCentrality::test_K4_normalized
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestEdgeFlowBetweennessCentrality::test_K4
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestEdgeFlowBetweennessCentrality::test_C4
FAILED networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py::TestEdgeFlowBetweennessCentrality::test_P4
FAILED networkx/algorithms/centrality/tests/test_current_flow_closeness.py::TestFlowClosenessCentrality::test_K4
FAILED networkx/algorithms/centrality/tests/test_current_flow_closeness.py::TestFlowClosenessCentrality::test_P4
FAILED networkx/algorithms/centrality/tests/test_current_flow_closeness.py::TestFlowClosenessCentrality::test_star
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_noinv
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_no_weight
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_neg_weight
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_multigraph
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_path[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_problematic_graph_issue_2381[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_cycle[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_seed_argument[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_buckminsterfullerene[tracemin_lu-False-0.2434017461399311-laplacian_matrix]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestAlgebraicConnectivity::test_buckminsterfullerene[tracemin_lu-True-0.0811339153799775-normalized_laplacian_matrix]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_three_nodes[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_three_nodes_multigraph[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_path[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_seed_argument[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_disconnected[tracemin_lu]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[tracemin_lu-False-expected_order0]
FAILED networkx/linalg/tests/test_algebraic_connectivity.py::TestSpectralOrdering::test_cycle[tracemin_lu-True-expected_order1]
= 33 failed, 5059 passed, 13 skipped, 3 xfailed, 91 warnings in 1306.18s (0:21:46) =
==> ERROR: A failure occurred in check().
    Aborting...
==> ERROR: Build failed, check /var/lib/archbuild/extra-riscv64/root2/build
receiving incremental file list
python-networkx-3.1-1-riscv64-build.log
python-networkx-3.1-1-riscv64-check.log

sent 62 bytes  received 16,862 bytes  11,282.67 bytes/sec
total size is 289,483  speedup is 17.10
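Every `tracemin_lu` failure in this log ends in the same `TypeError: rowind and colptr must be of type cint` raised by `_superlu.gstrf`: SuperLU's C wrapper requires the CSC index arrays (`A.indices`, `A.indptr`) to be 32-bit C ints, and on this riscv64 build they apparently reach it as a wider integer type. The sketch below illustrates that suspected mismatch and a local workaround; it is a hypothesis for illustration only, not the fix applied to the package, and the matrix values are made up:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small symmetric positive-definite system, standing in for the
# graph Laplacians that networkx's tracemin_lu code path factorizes.
A = csc_matrix([[4.0, 1.0, 0.0],
                [1.0, 4.0, 1.0],
                [0.0, 1.0, 4.0]])

# Simulate what the failing platform appears to produce: 64-bit
# index arrays instead of the 32-bit C ints SuperLU expects.
A.indices = A.indices.astype(np.int64)
A.indptr = A.indptr.astype(np.int64)

# Workaround: downcast the index arrays to int32 before calling splu,
# so _superlu.gstrf receives 'cint' row indices and column pointers.
A.indices = A.indices.astype(np.int32)
A.indptr = A.indptr.astype(np.int32)

lu = splu(A)  # factorization of the int32-indexed matrix
x = lu.solve(np.array([1.0, 2.0, 3.0]))
assert np.allclose(A @ x, [1.0, 2.0, 3.0])
```

Whether the 64-bit indices originate in networkx's matrix construction or in the riscv64 scipy build is not determined by this log alone; note the downcast is only safe while the matrix dimensions and nnz fit in 32 bits.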