View model summaries in PyTorch!

torchinfo

(formerly torch-summary)

Torchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to TensorFlow's model.summary() API, giving a visualization of the model that is helpful while debugging your network. This project implements similar functionality in PyTorch with a clean, simple interface to use in your projects.

This is a completely rewritten version of the original torchsummary and torchsummaryX projects by @sksq96 and @nmhkahn. This project addresses all of the issues and pull requests left on the original projects by introducing a completely new API.

Usage

pip install torchinfo

Alternatively, via conda:

conda install -c conda-forge torchinfo

How To Use

from torchinfo import summary

model = SingleInputNet()
batch_size = 7
summary(model, input_size=(batch_size, 1, 28, 28))
================================================================================================================
Layer (type:depth-idx)          Input Shape          Output Shape         Param #            Mult-Adds
================================================================================================================
SingleInputNet                  --                   --                   --                  --
├─Conv2d: 1-1                   [7, 1, 28, 28]       [7, 10, 24, 24]      260                1,048,320
├─Conv2d: 1-2                   [7, 10, 12, 12]      [7, 20, 8, 8]        5,020              2,248,960
├─Dropout2d: 1-3                [7, 20, 8, 8]        [7, 20, 8, 8]        --                 --
├─Linear: 1-4                   [7, 320]             [7, 50]              16,050             112,350
├─Linear: 1-5                   [7, 50]              [7, 10]              510                3,570
================================================================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
Total mult-adds (M): 3.41
================================================================================================================
Input size (MB): 0.02
Forward/backward pass size (MB): 0.40
Params size (MB): 0.09
Estimated Total Size (MB): 0.51
================================================================================================================

Note: if you are using a Jupyter Notebook or Google Colab, summary(model, ...) must be the returned value of the cell. If it is not, you should wrap the summary in a print(), e.g. print(summary(model, ...)). See tests/jupyter_test.ipynb for examples.

This version now supports:

  • RNNs, LSTMs, and other recurrent layers
  • Branching output to explore model layers at specified depths
  • Returns ModelStatistics object containing all summary data fields
  • Configurable rows/columns
  • Jupyter Notebook / Google Colab

Other new features:

  • Verbose mode to show weights and bias layers
  • Accepts either input data or simply the input shape!
  • Customizable line widths and batch dimension
  • Comprehensive unit/output testing, linting, and code coverage testing

Community Contributions:

  • Sequentials & ModuleLists (thanks to @roym899)
  • Improved Mult-Add calculations (thanks to @TE-StefanUhlich, @zmzhang2000)
  • Dict/Misc input data (thanks to @e-dorigatti; see the sketch below)
  • Pruned layer support (thanks to @MajorCarrot)
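
As an example of the dict input support above, here is a minimal sketch (TwoInputNet is a hypothetical model, not part of torchinfo):

import torch
import torch.nn as nn
from torchinfo import summary

class TwoInputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(20, 5)

    def forward(self, x, scale):
        return self.fc(x) * scale

# Pass a dict of kwargs as input_data; dtypes are inferred automatically.
summary(TwoInputNet(), input_data={"x": torch.randn(2, 20), "scale": torch.tensor(2.0)})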

Documentation

def summary(
    model: nn.Module,
    input_size: Optional[INPUT_SIZE_TYPE] = None,
    input_data: Optional[INPUT_DATA_TYPE] = None,
    batch_dim: Optional[int] = None,
    cache_forward_pass: Optional[bool] = None,
    col_names: Optional[Iterable[str]] = None,
    col_width: int = 25,
    depth: int = 3,
    device: Optional[torch.device] = None,
    dtypes: Optional[List[torch.dtype]] = None,
    row_settings: Optional[Iterable[str]] = None,
    verbose: int = 1,
    **kwargs: Any,
) -> ModelStatistics:
"""
Summarize the given PyTorch model. Summarized information includes:
    1) Layer names,
    2) input/output shapes,
    3) kernel shape,
    4) # of parameters,
    5) # of operations (Mult-Adds)

NOTE: If neither input_data nor input_size is provided, no forward pass through the
network is performed, and the provided model information is limited to layer names.

Args:
    model (nn.Module):
            PyTorch model to summarize. The model should be fully in either train()
            or eval() mode. If layers are not all in the same mode, running summary
            may have side effects on batchnorm or dropout statistics. If you
            encounter an issue with this, please open a GitHub issue.

    input_size (Sequence of Sizes):
            Shape of input data as a List/Tuple/torch.Size
            (dtypes must match model input, default is FloatTensors).
            You should include batch size in the tuple.
            Default: None

    input_data (Sequence of Tensors):
            Arguments for the model's forward pass (dtypes inferred).
            If the forward() function takes several parameters, pass in a list of
            args or a dict of kwargs (if your forward() function takes in a dict
            as its only argument, wrap it in a list).
            Default: None

    batch_dim (int):
            Batch dimension of input data. If batch_dim is None, assume
            input_data / input_size contain the batch dimension, which is used
            in all calculations. Otherwise, expand all tensors to contain batch_dim.
            Specifying batch_dim can be a runtime optimization, since torchinfo
            then uses a batch size of 1 for the forward pass.
            Default: None

    cache_forward_pass (bool):
            If True, cache the run of the forward() function using the model
            class name as the key. If the forward pass is an expensive operation,
            this can make it easier to modify the formatting of your model
            summary, e.g. changing the depth or enabled column types, especially
            in Jupyter Notebooks.
            WARNING: Modifying the model architecture or input data/input size when
            this feature is enabled does not invalidate the cache or re-run the
            forward pass, and can cause incorrect summaries as a result.
            Default: False

    col_names (Iterable[str]):
            Specify which columns to show in the output. Currently supported: (
                "input_size",
                "output_size",
                "num_params",
                "kernel_size",
                "mult_adds",
            )
            Default: ("output_size", "num_params")
            If input_data / input_size are not provided, only "num_params" is used.

    col_width (int):
            Width of each column.
            Default: 25

    depth (int):
            Depth of nested layers to display (e.g. Sequentials).
            Nested layers below this depth will not be displayed in the summary.
            Default: 3

    device (torch.device):
            Uses this torch device for the model and input_data.
            If not specified, uses the result of torch.cuda.is_available().
            Default: None

    dtypes (List[torch.dtype]):
            If you use input_size, torchinfo assumes your input uses FloatTensors.
            If your model uses a different data type, specify that dtype.
            For multiple inputs, specify the size of each input, and also
            specify each input's dtype here.
            Default: None

    row_settings (Iterable[str]):
            Specify which features to show in a row. Currently supported: (
                "ascii_only",
                "depth",
                "var_names",
            )
            Default: ("depth",)

    verbose (int):
            0 (quiet): No output
            1 (default): Print model summary
            2 (verbose): Show weight and bias layers in full detail
            Default: 1
            If using a Jupyter Notebook or Google Colab, the default is 0.

    **kwargs:
            Other arguments used in the `model.forward` function. Passing *args
            is no longer supported.

Return:
    ModelStatistics object
            See torchinfo/model_statistics.py for more information.
"""

Examples

Get Model Summary as String

from torchinfo import summary

model_stats = summary(your_model, (1, 3, 28, 28), verbose=0)
summary_str = str(model_stats)
# summary_str contains the string representation of the summary!
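
The returned ModelStatistics object also exposes the totals programmatically. A hedged sketch, assuming the attribute names defined in torchinfo/model_statistics.py:

print(model_stats.total_params)     # total number of parameters
print(model_stats.total_mult_adds)  # estimated multiply-add operations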

Explore Different Configurations

import torch
import torch.nn as nn

class LSTMNet(nn.Module):
    def __init__(self, vocab_size=20, embed_dim=300, hidden_dim=512, num_layers=2):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        embed = self.embedding(x)
        out, hidden = self.encoder(embed)
        out = self.decoder(out)
        out = out.view(-1, out.size(2))
        return out, hidden

summary(
    LSTMNet(),
    (1, 100),
    dtypes=[torch.long],
    verbose=2,
    col_width=16,
    col_names=["kernel_size", "output_size", "num_params", "mult_adds"],
    row_settings=["var_names"],
)
========================================================================================================================
Layer (type (var_name))                  Kernel Shape         Output Shape         Param #              Mult-Adds
========================================================================================================================
LSTMNet                                  --                   --                   --                   --
├─Embedding (embedding)                  [300, 20]            [1, 100, 300]        6,000                6,000
│    └─weight                            [300, 20]                                 └─6,000
├─LSTM (encoder)                         --                   [1, 100, 512]        3,768,320            376,832,000
│    └─weight_ih_l0                      [2048, 300]                               ├─614,400
│    └─weight_hh_l0                      [2048, 512]                               ├─1,048,576
│    └─bias_ih_l0                        [2048]                                    ├─2,048
│    └─bias_hh_l0                        [2048]                                    ├─2,048
│    └─weight_ih_l1                      [2048, 512]                               ├─1,048,576
│    └─weight_hh_l1                      [2048, 512]                               ├─1,048,576
│    └─bias_ih_l1                        [2048]                                    ├─2,048
│    └─bias_hh_l1                        [2048]                                    └─2,048
├─Linear (decoder)                       [512, 20]            [1, 100, 20]         10,260               10,260
│    └─weight                            [512, 20]                                 ├─10,240
│    └─bias                              [20]                                      └─20
========================================================================================================================
Total params: 3,784,580
Trainable params: 3,784,580
Non-trainable params: 0
Total mult-adds (M): 376.85
========================================================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.67
Params size (MB): 15.14
Estimated Total Size (MB): 15.80
========================================================================================================================

ResNet

import torchvision

model = torchvision.models.resnet152()
summary(model, (1, 3, 224, 224), depth=3)
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
ResNet                                   --                        --
├─Conv2d: 1-1                            [1, 64, 112, 112]         9,408
├─BatchNorm2d: 1-2                       [1, 64, 112, 112]         128
├─ReLU: 1-3                              [1, 64, 112, 112]         --
├─MaxPool2d: 1-4                         [1, 64, 56, 56]           --
├─Sequential: 1-5                        [1, 256, 56, 56]          --
│    └─Bottleneck: 2-1                   [1, 256, 56, 56]          --
│    │    └─Conv2d: 3-1                  [1, 64, 56, 56]           4,096
│    │    └─BatchNorm2d: 3-2             [1, 64, 56, 56]           128
│    │    └─ReLU: 3-3                    [1, 64, 56, 56]           --
│    │    └─Conv2d: 3-4                  [1, 64, 56, 56]           36,864
│    │    └─BatchNorm2d: 3-5             [1, 64, 56, 56]           128
│    │    └─ReLU: 3-6                    [1, 64, 56, 56]           --
│    │    └─Conv2d: 3-7                  [1, 256, 56, 56]          16,384
│    │    └─BatchNorm2d: 3-8             [1, 256, 56, 56]          512
│    │    └─Sequential: 3-9              [1, 256, 56, 56]          16,896
│    │    └─ReLU: 3-10                   [1, 256, 56, 56]          --
│    └─Bottleneck: 2-2                   [1, 256, 56, 56]          --

  ...
  ...
  ...

├─AdaptiveAvgPool2d: 1-9                 [1, 2048, 1, 1]           --
├─Linear: 1-10                           [1, 1000]                 2,049,000
==========================================================================================
Total params: 60,192,808
Trainable params: 60,192,808
Non-trainable params: 0
Total mult-adds (G): 11.51
==========================================================================================
Input size (MB): 0.60
Forward/backward pass size (MB): 360.87
Params size (MB): 240.77
Estimated Total Size (MB): 602.25
==========================================================================================

Multiple Inputs w/ Different Data Types

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultipleInputNetDifferentDtypes(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1a = nn.Linear(300, 50)
        self.fc1b = nn.Linear(50, 10)

        self.fc2a = nn.Linear(300, 50)
        self.fc2b = nn.Linear(50, 10)

    def forward(self, x1, x2):
        x1 = F.relu(self.fc1a(x1))
        x1 = self.fc1b(x1)
        x2 = x2.type(torch.float)
        x2 = F.relu(self.fc2a(x2))
        x2 = self.fc2b(x2)
        x = torch.cat((x1, x2), 0)
        return F.log_softmax(x, dim=1)

model = MultipleInputNetDifferentDtypes()
summary(model, [(1, 300), (1, 300)], dtypes=[torch.float, torch.long])

Alternatively, you can also pass in the input_data itself, and torchinfo will automatically infer the data types.

input_data = torch.randn(1, 300)
other_input_data = torch.randn(1, 300).long()
model = MultipleInputNetDifferentDtypes()

summary(model, input_data=[input_data, other_input_data, ...])

Sequentials & ModuleLists

import torch.nn as nn

class ContainerModule(nn.Module):

    def __init__(self):
        super().__init__()
        self._layers = nn.ModuleList()
        self._layers.append(nn.Linear(5, 5))
        self._layers.append(ContainerChildModule())
        self._layers.append(nn.Linear(5, 5))

    def forward(self, x):
        for layer in self._layers:
            x = layer(x)
        return x


class ContainerChildModule(nn.Module):

    def __init__(self):
        super().__init__()
        self._sequential = nn.Sequential(nn.Linear(5, 5), nn.Linear(5, 5))
        self._between = nn.Linear(5, 5)

    def forward(self, x):
        out = self._sequential(x)
        out = self._between(out)
        for l in self._sequential:
            out = l(out)

        out = self._sequential(x)
        for l in self._sequential:
            out = l(out)
        return out

summary(ContainerModule(), (1, 5))
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
ContainerModule                          --                        --
├─ModuleList: 1-1                        --                        --
│    └─Linear: 2-1                       [1, 5]                    30
│    └─ContainerChildModule: 2-2         [1, 5]                    --
│    │    └─Sequential: 3-1              [1, 5]                    --
│    │    │    └─Linear: 4-1             [1, 5]                    30
│    │    │    └─Linear: 4-2             [1, 5]                    30
│    │    └─Linear: 3-2                  [1, 5]                    30
│    │    └─Sequential: 3                --                        --
│    │    │    └─Linear: 4-3             [1, 5]                    (recursive)
│    │    │    └─Linear: 4-4             [1, 5]                    (recursive)
│    │    └─Sequential: 3-3              [1, 5]                    (recursive)
│    │    │    └─Linear: 4-5             [1, 5]                    (recursive)
│    │    │    └─Linear: 4-6             [1, 5]                    (recursive)
│    │    │    └─Linear: 4-7             [1, 5]                    (recursive)
│    │    │    └─Linear: 4-8             [1, 5]                    (recursive)
│    └─Linear: 2-3                       [1, 5]                    30
==========================================================================================
Total params: 150
Trainable params: 150
Non-trainable params: 0
Total mult-adds (M): 0.00
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.00
Estimated Total Size (MB): 0.00
==========================================================================================

Contributing

All issues and pull requests are much appreciated! If you are wondering how to build the project:

  • torchinfo is actively developed using the latest version of Python.
    • Changes should be backward compatible to Python 3.7, and will follow Python's End-of-Life guidance for old versions.
    • Run pip install -r requirements-dev.txt. We use the latest versions of all dev packages.
    • Run pre-commit install.
    • To use auto-formatting tools, use pre-commit run -a.
    • To run unit tests, run pytest.
    • To update the expected output files, run pytest --overwrite.
    • To skip output file tests, use pytest --no-output.

References

  • Thanks to @sksq96, @nmhkahn, and @sangyx for providing the inspiration for this project.
  • For Model Size Estimation @jacobkimmel (details here)
Comments
  • Params and MACs Unit Specifier

    It would be very useful to have a way to specify the units (M, G, etc.) in which the number of parameters and MACs are reported. This could help quickly compare different architectures.

    I think of something like adding arguments params_units and macs_units to the summary() function with a default value 'auto' to respect the current behavior.
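
    A sketch of the proposed call (params_units and macs_units are hypothetical arguments from this proposal, not part of the current API):

    # Hypothetical API; 'auto' would keep the current behavior.
    summary(model, input_size=(1, 3, 224, 224), params_units="M", macs_units="G")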

    opened by richardtml 15
  • Support half-precision dtypes when calculating model size

    @TylerYep 2 tests from torchinfo_xl_test.py are failing for me. Can you check if they work for you? Those two tests also fail for me on master. Hope this fixes the issue; do check out the code and let me know if I should make any changes. Thank you.

    opened by notjedi 14
  • nn.Parameter is omitted (with a case)

    Describe the bug: nn.Parameter is omitted in summary when there are other PyTorch predefined layers in the network. Details are as follows:

    To Reproduce

    import torch
    import torch.nn as nn
    from torchinfo import summary
    
    class FCNets(nn.Module):
        def __init__(self, input_dim, hidden_dim, output_dim):
            # 2 layer fully connected networks
            super().__init__()
            # layer1 with nn.Parameter
            self.weight = nn.Parameter(torch.randn(input_dim, hidden_dim))
            self.bias = nn.Parameter(torch.randn(hidden_dim))
            # layer2 with nn.Linear
            self.fc2  = nn.Linear(hidden_dim, output_dim)
            # activation
            self.activation = nn.ReLU()
        
        def forward(self, x):
            # x.shape = [batch_size, input_dim]
            # layer1
            h = torch.mm(x, self.weight) + self.bias
            # activation
            h = self.activation(h)
            # layer2
            out = self.fc2(h)
            return out
    
    # device = torch.device("cuda:0")
    device = torch.device("cpu")
    x = torch.randn(3, 128).to(device)
    fc = FCNets(128, 64, 32).to(device)
    summary(fc, input_data=x)
    

    It seems that nn.Parameter is not compatible with other layers (nn.Module class).

    ==========================================================================================
    Layer (type:depth-idx)                   Output Shape              Param #
    ==========================================================================================
    FCNets                                   --                        --
    ├─ReLU: 1-1                              [3, 64]                   --
    ├─Linear: 1-2                            [3, 32]                   2,080
    ==========================================================================================
    Total params: 2,080
    Trainable params: 2,080
    Non-trainable params: 0
    Total mult-adds (M): 0.01
    ==========================================================================================
    Input size (MB): 0.00
    Forward/backward pass size (MB): 0.00
    Params size (MB): 0.01
    Estimated Total Size (MB): 0.01
    ==========================================================================================
    

    However, if we remove self.fc2, the output will be fine.

    PyTorch version: 1.7.1 (GPU); torchinfo version: 1.5.3

    opened by zezhishao 12
  • Compute MACs for full input/output tensor

    This changes the value that is returned by summary. Up to now, this value was assuming a batch-size of 1 and, thus, ignored the batch size in the MAC computations. However, this does not work with recurrent NNs as these, e.g., share fully connected layers over many timesteps.

    With this change, the correct numbers are printed:

    seq_length = 100:
    ========================================================================================================
    Layer (type:depth-idx)                   Kernel Shape     Output Shape     Param #          Mult-Adds
    ========================================================================================================
    ├─Embedding: 1-1                         [300, 20]        [1, 100, 300]    6,000            6,000
    ├─LSTM: 1-2                              --               [1, 100, 512]    3,768,320        376,012,800
    |    └─weight_ih_l0                      [2048, 300]
    |    └─weight_hh_l0                      [2048, 512]
    |    └─weight_ih_l1                      [2048, 512]
    |    └─weight_hh_l1                      [2048, 512]
    ├─Linear: 1-3                            [512, 20]        [1, 100, 20]     10,260           10,240
    ========================================================================================================
    Total params: 3,784,580
    Trainable params: 3,784,580
    Non-trainable params: 0
    Total mult-adds (M): 376.03
    ========================================================================================================
    Input size (MB): 0.00
    Forward/backward pass size (MB): 0.67
    Params size (MB): 15.14
    Estimated Total Size (MB): 15.80
    ========================================================================================================
    
    seq_length=10:
    ========================================================================================================
    Layer (type:depth-idx)                   Kernel Shape     Output Shape     Param #          Mult-Adds
    ========================================================================================================
    ├─Embedding: 1-1                         [300, 20]        [1, 10, 300]     6,000            6,000
    ├─LSTM: 1-2                              --               [1, 10, 512]     3,768,320        37,601,280
    |    └─weight_ih_l0                      [2048, 300]
    |    └─weight_hh_l0                      [2048, 512]
    |    └─weight_ih_l1                      [2048, 512]
    |    └─weight_hh_l1                      [2048, 512]
    ├─Linear: 1-3                            [512, 20]        [1, 10, 20]      10,260           10,240
    ========================================================================================================
    Total params: 3,784,580
    Trainable params: 3,784,580
    Non-trainable params: 0
    Total mult-adds (M): 37.62
    ========================================================================================================
    Input size (MB): 0.00
    Forward/backward pass size (MB): 0.07
    Params size (MB): 15.14
    Estimated Total Size (MB): 15.20
    ========================================================================================================
    

    Fixes #32
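
    To sanity-check the printed LSTM totals, a back-of-envelope calculation (assuming mult-adds count the four weight-matrix products once per timestep):

    macs_per_step = (2048 * 300    # weight_ih_l0
                     + 2048 * 512  # weight_hh_l0
                     + 2048 * 512  # weight_ih_l1
                     + 2048 * 512) # weight_hh_l1
    print(macs_per_step * 100)  # 376,012,800 for seq_length=100
    print(macs_per_step * 10)   # 37,601,280 for seq_length=10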

    opened by TE-StefanUhlich 12
  • Error when using nn.UninitializedParameter

    Describe the bug: A ValueError is raised when trying to use the unavailable operation nelement() on an UninitializedParameter.

    The summary method goes over all the modules in the model and tries to get the number of parameters, but that is not possible with an UninitializedParameter.

    To Reproduce

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.param = nn.UninitializedParameter()
        
        def init_param(self):
            self.param = nn.Parameter(torch.zeros(1))
        
        def forward(self, x):
            self.init_param()
            return x
    
    net = Net()
    torchinfo.summary(net, input_size=(1, 1))
    

    Output First part of the stack trace:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    ~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
        260             if isinstance(x, (list, tuple)):
    --> 261                 _ = model.to(device)(*x, **kwargs)
        262             elif isinstance(x, dict):
    
    ~\miniconda3\envs\cudalab\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
       1108             for hook in (*_global_forward_pre_hooks.values(), *self._forward_pre_hooks.values()):
    -> 1109                 result = hook(self, input)
       1110                 if result is not None:
    
    ~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\torchinfo.py in pre_hook(***failed resolving arguments***)
        457         info = LayerInfo(var_name, module, curr_depth, idx[curr_depth], parent_info)
    --> 458         info.calculate_num_params()
        459         info.check_recursive(summary_list)
    
    ~\miniconda3\envs\cudalab\lib\site-packages\torchinfo\layer_info.py in calculate_num_params(self)
        125         for name, param in self.module.named_parameters():
    --> 126             self.num_params += param.nelement()
        127             if param.requires_grad:
    
    ~\miniconda3\envs\cudalab\lib\site-packages\torch\nn\parameter.py in __torch_function__(cls, func, types, args, kwargs)
        120             return super().__torch_function__(func, types, args, kwargs)
    --> 121         raise ValueError(
        122             'Attempted to use an uninitialized parameter in {}. '
    
    ValueError: Attempted to use an uninitialized parameter in <method 'numel' of 'torch._C._TensorBase' objects>. This error happens when you are using a `LazyModule` or explicitly manipulating `torch.nn.parameter.UninitializedParameter` objects. When using LazyModules Call `forward` with a dummy batch to initialize the parameters before calling torch functions
    

    Expected behavior: check whether a parameter is an instance of UninitializedParameter and skip calls to unavailable operations.

    It would still be nice to show this somehow in the printed summary table (maybe as 'uninitialized'?).
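
    A minimal sketch of such a guard (hypothetical code, not torchinfo's actual implementation):

    import torch.nn as nn
    from torch.nn.parameter import UninitializedParameter

    def count_params(module: nn.Module) -> int:
        """Count parameters, skipping any that are not yet materialized."""
        total = 0
        for _, param in module.named_parameters():
            if isinstance(param, UninitializedParameter):
                continue  # could be reported as "uninitialized" in the table
            total += param.nelement()
        return total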

    Context:

    • OS: Windows 10
    • Python 3.9.7
    • pytorch 1.10.1 (py3.9_cpu_0)
    • torchinfo 1.6.0
    opened by vladvrabie 11
  • MACS calculation error when the model structure is nested

    Hi, thanks for the tool you provided, it is very useful. But I also found a bug when calculating each layer's Mult-Adds for a nested model. I got something like this:

    [screenshot of the summary output omitted]

    For most of the layers, like TMVANet( (encoder): TMVANet_Encoder( (rd_encoding_branch): EncodingBranch( (double_3dconv_block1): Double3DConvBlock ..., I could not get the Mult-Adds information correctly. I assume it is because the block was wrapped several times and could not be handled correctly? Could you please tell me how to solve this problem?

    The initial part of my model looks like this: [image of the network omitted]

    opened by james20141606 11
  • Add support for pruned models

    According to the PyTorch documentation on pruning, the original parameter is replaced with one ending in _orig, plus a new buffer ending in _mask. The mask contains 0s and 1s, based on which the correct parameters are chosen.

    All instances of param.nelement() have been replaced by a variable cur_params, whose value is set based on whether the model is masked or not. To keep consistency with the rest of the code base, the _orig suffix is removed from the name variable right after the calculation of cur_params. (The sketch below illustrates the pruning mechanics this relies on.)
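
    For reference, a small sketch of the pruning mechanics, using torch.nn.utils.prune:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Linear(10, 10)
    prune.l1_unstructured(layer, name="weight", amount=0.5)

    # "weight" is now derived from "weight_orig" and the "weight_mask" buffer;
    # the effective parameter count is the number of unmasked entries.
    print(int(layer.weight_mask.sum().item()))  # 50 of the 100 weight entries remain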

    opened by MajorCarrot 9
  • size estimation of model assumes floats everywhere

    I can see in model_statistics.py L81 that floats are assumed everywhere. In mixed precision, some weights are in fp16 or bf16 (truncated float). Quantized models use int8 weights and a separate float parameter. The estimated size of the model should be correct, and it should depend on the model we hand over to summary(...).
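
    A quick illustration of why the dtype matters, using element_size() to get bytes per element:

    import torch

    # A size estimate should use the tensor's element size, not assume 4-byte floats.
    for dtype in (torch.float32, torch.float16, torch.bfloat16, torch.int8):
        print(dtype, torch.empty(0, dtype=dtype).element_size())
    # float32 -> 4, float16 -> 2, bfloat16 -> 2, int8 -> 1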

    opened by lizardzandwizardz 9
  • update support to torchvision mask/faster rcnn model summary

    • update layer_info to support the OrderedDict and ImageList cases used within torchvision/detection
    • unit tests passed
    opened by michiroooo 9
  • [SyntaxError]

    Hi, I just installed the latest version on a GCP instance with Python 3.5 and got this error. It's odd, as it works on my local machine with Python 3.7.

    from torchsummary import summary
    
    Traceback (most recent call last):
    
      File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
    
      File "<ipython-input-1-bac45dd5d4db>", line 6, in <module>
        from torchsummary import summary
    
      File "/home/michalnarbutt/.local/lib/python3.5/site-packages/torchsummary/__init__.py", line 1, in <module>
        from .torchsummary import summary
    
      File "/home/michalnarbutt/.local/lib/python3.5/site-packages/torchsummary/torchsummary.py", line 30
        **kwargs: Any,
                     ^
    SyntaxError: invalid syntax
    
    
    opened by Ostyk 9
  • 'Conv2d' object has no attribute 'weight_mask'

    Hi, I'm getting an error for a simple VGG16 implementation:

    Traceback (most recent call last):
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py", line 272, in forward_pass
        _ = model.to(device)(*x, **kwargs)
      File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/user/Projects/net/vgg.py", line 97, in forward
        out = self.features(x)
      File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl
        result = hook(self, input)
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py", line 500, in pre_hook
        info.calculate_num_params()
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/layer_info.py", line 151, in calculate_num_params
        cur_params, name = self.get_param_count(name, param)
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/layer_info.py", line 139, in get_param_count
        torch.sum(rgetattr(self.module, f"{without_suffix}_mask"))
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/layer_info.py", line 19, in rgetattr
        module = getattr(module, attr_i)
      File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Conv2d' object has no attribute 'weight_mask'
    The above exception was the direct cause of the following exception:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3444, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-45-e9b0e1aa526c>", line 1, in <module>
        summary(v, (1, 3, 224, 224), depth=3, col_names=["input_size", "output_size", "kernel_size", "num_params"])
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py", line 201, in summary
        summary_list = forward_pass(
      File "/home/user/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py", line 281, in forward_pass
        raise RuntimeError(
    RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
    
    opened by realsarm 8
  • AttributeError when input type has no element_size() method

    The bug

    Hi, I see a problem with the code calculating the size of the layers:

    In layer_info.py line 109:

    if hasattr(inputs[0], "size") and callable(inputs[0].size):
        return list(inputs[0].size()), inputs[0].element_size()
    

    My problem is that I use a package with modified tensors which have no element_size method, i.e. the code crashes at that point.

    Expected behavior

    What about this:

    if hasattr(inputs[0], "size") and callable(inputs[0].size):
        if hasattr(inputs[0], "element_size") and callable(inputs[0].element_size):
            return list(inputs[0].size()), inputs[0].element_size()
        else:
            # Maybe add a warning here
            return list(inputs[0].size()), 0
    
    
    opened by lueisert 3
  • Percentage FLOPS or Multiply Adds - inspired by #199

    Option for column representing Percentage FLOPS or Multiply Adds. (similar to #199)

    I think this option would be really useful to see which parts of the model should be optimized if necessary. It is also useful for getting an idea of how the model scales as you make it bigger and bigger.

    At first sight, it seems that this could be implemented in a way similar to that of #199.

    If this seems reasonable enough, I can come up with a PR.
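
    A rough sketch of the idea (assuming the macs and class_name fields on torchinfo's LayerInfo entries and total_mult_adds on ModelStatistics):

    import torch.nn as nn
    from torchinfo import summary

    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 16, 3))
    stats = summary(model, input_size=(1, 3, 32, 32), verbose=0)
    for layer in stats.summary_list:
        if layer.macs:
            share = 100 * layer.macs / stats.total_mult_adds
            print(f"{layer.class_name}: {share:.1f}% of mult-adds")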

    opened by mert-kurttutan 4
  • AttributeError: 'tuple' object has no attribute 'size'

    Fixed #141. I have tested it only on one Hugging Face model, but it should work for every model. The only problem is that I could not fix the verification of the out file.

    opened by fabiofumarola 3
  • One complex parameter should count as two params

    As all model parameter counting traces back to https://github.com/TylerYep/torchinfo/blob/8b3ae72c7cac677176f37450ee27b8c860f803cd/torchinfo/layer_info.py#L154-L170

    there is no check on whether the parameter tensor is complex or real. If a parameter is complex, such as a + i b, then it actually represents two parameters (for MAC/FLOP counting purposes).

    Of course this PR might not conform with torchinfo's development plans; feel free to close it. I hope complex dtypes will be considered in the next version.
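
    A worked example of the counting rule (a complex tensor stores a real and an imaginary float per element):

    import torch

    p = torch.nn.Parameter(torch.randn(4, 4, dtype=torch.cfloat))
    real_equivalent = p.nelement() * (2 if p.is_complex() else 1)
    print(p.nelement(), real_equivalent)  # 16 elements -> 32 real-valued parameters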

    opened by scaomath 2
  • nn.ParameterList omitted again in v1.7.1

    Hi there. I was trying to inspect the mmoe model from mmoe, which has an nn.ParameterList.

    
    class Expert(nn.Module):
        def __init__(self, input_size, output_size, hidden_size):
            super(Expert, self).__init__()
            self.fc1 = nn.Linear(input_size, hidden_size)
            self.fc2 = nn.Linear(hidden_size, output_size)
            self.relu = nn.ReLU()
            self.dropout = nn.Dropout(0.3)
    
        def forward(self, x):
            out = self.fc1(x)
            out = self.relu(out)
            out = self.dropout(out)
            out = self.fc2(out)
            return out
    
    
    class Tower(nn.Module):
        def __init__(self, input_size, output_size, hidden_size):
            super(Tower, self).__init__()
            self.fc1 = nn.Linear(input_size, hidden_size)
            self.fc2 = nn.Linear(hidden_size, output_size)
            self.relu = nn.ReLU()
            self.dropout = nn.Dropout(0.4)
            self.sigmoid = nn.Sigmoid()
        def forward(self, x):
            out = self.fc1(x)
            out = self.relu(out)
            out = self.dropout(out)
            out = self.fc2(out)
            out = self.sigmoid(out)
            return out
    
    class MMOE(nn.Module):
        def __init__(self, input_size, num_experts, experts_out, experts_hidden, towers_hidden, tasks):
            super(MMOE, self).__init__()
            self.input_size = input_size
            self.num_experts = num_experts
            self.experts_out = experts_out
            self.experts_hidden = experts_hidden
            self.towers_hidden = towers_hidden
            self.tasks = tasks
    
            self.softmax = nn.Softmax(dim=1)
    
            self.experts = nn.ModuleList([Expert(self.input_size, self.experts_out, self.experts_hidden) for i in range(self.num_experts)])
            self.w_gates = nn.ParameterList([nn.Parameter(torch.randn(input_size, num_experts), requires_grad=True) for i in range(self.tasks)])
            self.towers = nn.ModuleList([Tower(self.experts_out, 1, self.towers_hidden) for i in range(self.tasks)])
    
        def forward(self, x):
            experts_o = [e(x) for e in self.experts]
            experts_o_tensor = torch.stack(experts_o)
    
            gates_o = [self.softmax(x @ g) for g in self.w_gates]
    
            tower_input = [g.t().unsqueeze(2).expand(-1, -1, self.experts_out) * experts_o_tensor for g in gates_o]
            tower_input = [torch.sum(ti, dim=0) for ti in tower_input]
    
            final_output = [t(ti) for t, ti in zip(self.towers, tower_input)]
            return final_output
    
    model = MMOE(input_size=499, num_experts=6, experts_out=16, experts_hidden=32, towers_hidden=8, tasks=2)
    
    torchinfo.summary(model, input_size=(1024, 499),
                      col_names=[
                          "kernel_size", 
                          "input_size",
                          "output_size", 
                          "num_params", 
                          "trainable",
                          "mult_adds"
                          ],
                      col_width=16,
                      row_settings=["var_names", "depth"],
                      )
    
    

    I was on v1.7.1 and I got something like this.

    ========================================================================================================================================
    Layer (type (var_name):depth-idx)        Kernel Shape     Input Shape      Output Shape     Param #          Trainable        Mult-Adds
    ========================================================================================================================================
    MMOE (MMOE)                              --               [1024, 499]      [1024, 1]        5,988            True             --
    ├─ModuleList (experts): 1-1              --               --               --               --               True             --
    │    └─Expert (0): 2-1                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-1            --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-2             --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-3       --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-4            --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (1): 2-2                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-5            --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-6             --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-7       --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-8            --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (2): 2-3                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-9            --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-10            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-11      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-12           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (3): 2-4                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-13           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-14            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-15      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-16           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (4): 2-5                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-17           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-18            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-19      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-20           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (5): 2-6                   --               [1024, 499]      [1024, 16]       --               True             --
    │    │    └─Linear (fc1): 3-21           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-22            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-23      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-24           --               [1024, 32]       [1024, 16]       528              True             540,672
    ├─Softmax (softmax): 1-2                 --               [1024, 6]        [1024, 6]        --               --               --
    ├─Softmax (softmax): 1-3                 --               [1024, 6]        [1024, 6]        --               --               --
    ├─ModuleList (towers): 1-4               --               --               --               --               True             --
    │    └─Tower (0): 2-7                    --               [1024, 16]       [1024, 1]        --               True             --
    │    │    └─Linear (fc1): 3-25           --               [1024, 16]       [1024, 8]        136              True             139,264
    │    │    └─ReLU (relu): 3-26            --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Dropout (dropout): 3-27      --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Linear (fc2): 3-28           --               [1024, 8]        [1024, 1]        9                True             9,216
    │    │    └─Sigmoid (sigmoid): 3-29      --               [1024, 1]        [1024, 1]        --               --               --
    │    └─Tower (1): 2-8                    --               [1024, 16]       [1024, 1]        --               True             --
    │    │    └─Linear (fc1): 3-30           --               [1024, 16]       [1024, 8]        136              True             139,264
    │    │    └─ReLU (relu): 3-31            --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Dropout (dropout): 3-32      --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Linear (fc2): 3-33           --               [1024, 8]        [1024, 1]        9                True             9,216
    │    │    └─Sigmoid (sigmoid): 3-34      --               [1024, 1]        [1024, 1]        --               --               --
    ========================================================================================================================================
    Total params: 105,446
    Trainable params: 105,446
    Non-trainable params: 0
    Total mult-adds (M): 101.84
    ========================================================================================================================================
    Input size (MB): 2.04
    Forward/backward pass size (MB): 2.51
    Params size (MB): 0.40
    Estimated Total Size (MB): 4.95
    ========================================================================================================================================
    

    This seems to be great; nearly all things are included. But nn.ParameterList (w_gates) is omitted. I went through #54 and #84; this seems to have been mentioned before, so I downgraded to v1.7.0.

    I got the result below, which includes nn.ParameterList, but the result itself seems to be incorrect?

    ========================================================================================================================================
    Layer (type (var_name):depth-idx)        Kernel Shape     Input Shape      Output Shape     Param #          Trainable        Mult-Adds
    ========================================================================================================================================
    MMOE (MMOE)                              --               [1024, 499]      [1024, 1]        --               True             --
    ├─Softmax (softmax): 1-6                 --               [1024, 6]        [1024, 6]        --               --               --
    ├─ModuleList (experts): 1-2              --               --               --               16,528           True             --
    │    └─Expert (0): 2-1                   --               [1024, 499]      [1024, 16]       16,528           True             --
    │    │    └─Linear (fc1): 3-2            --               [1024, 499]      [1024, 32]       (recursive)      True             16,384,000
    │    │    └─Linear (fc1): 3-2            --               [1024, 499]      [1024, 32]       (recursive)      True             16,384,000
    │    │    └─ReLU (relu): 3-3             --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-4       --               [1024, 32]       [1024, 32]       --               --               --
    │    └─Expert (1): 2-3                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (0): 2                     --               --               --               --               --               --
    │    │    └─Linear (fc2): 3-5            --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (1): 2-3                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    │    └─Linear (fc1): 3-6            --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-7             --               [1024, 32]       [1024, 32]       --               --               --
    │    └─Expert (2): 2-5                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (1): 2                     --               --               --               --               --               --
    │    │    └─Dropout (dropout): 3-8       --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-9            --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (2): 2-5                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    │    └─Linear (fc1): 3-10           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    └─Expert (3): 2-7                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (2): 2                     --               --               --               --               --               --
    │    │    └─ReLU (relu): 3-11            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-12      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-13           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (3): 2-7                   --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (4): 2-10                  --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (3): 2                     --               --               --               --               --               --
    │    │    └─Linear (fc1): 3-14           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-15            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-16      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-17           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (5): 2-12                  --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    └─Expert (4): 2-10                  --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    │    └─Linear (fc1): 3-18           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-19            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-20      --               [1024, 32]       [1024, 32]       --               --               --
    ├─ParameterList (w_gates): 1-3           --               --               --               5,988            True             --
    ├─ModuleList (towers): 1-4               --               --               --               --               True             --
    │    └─Tower (0): 2-13                   --               [1024, 16]       [1024, 1]        (recursive)      True             --
    ├─ModuleList (experts): 1-2              --               --               --               16,528           True             --
    │    └─Expert (4): 2                     --               --               --               --               --               --
    │    │    └─Linear (fc2): 3-21           --               [1024, 32]       [1024, 16]       528              True             540,672
    │    └─Expert (5): 2-12                  --               [1024, 499]      [1024, 16]       (recursive)      True             --
    │    │    └─Linear (fc1): 3-22           --               [1024, 499]      [1024, 32]       16,000           True             16,384,000
    │    │    └─ReLU (relu): 3-23            --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Dropout (dropout): 3-24      --               [1024, 32]       [1024, 32]       --               --               --
    │    │    └─Linear (fc2): 3-25           --               [1024, 32]       [1024, 16]       528              True             540,672
    ├─Softmax (softmax): 1-5                 --               [1024, 6]        [1024, 6]        --               --               --
    ├─ModuleList (towers): 1-4               --               --               --               --               True             --
    │    └─Tower (1): 2                      --               --               --               --               --               --
    │    │    └─Linear (fc2): 3-35           --               [1024, 8]        [1024, 1]        (recursive)      True             9,216
    ├─Softmax (softmax): 1-6                 --               [1024, 6]        [1024, 6]        --               --               --
    ├─ModuleList (towers): 1-4               --               --               --               --               True             --
    │    └─Tower (0): 2-13                   --               [1024, 16]       [1024, 1]        (recursive)      True             --
    │    │    └─Linear (fc1): 3-27           --               [1024, 16]       [1024, 8]        136              True             139,264
    │    │    └─ReLU (relu): 3-28            --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Dropout (dropout): 3-29      --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Linear (fc2): 3-30           --               [1024, 8]        [1024, 1]        9                True             9,216
    │    │    └─Sigmoid (sigmoid): 3-31      --               [1024, 1]        [1024, 1]        --               --               --
    │    └─Tower (1): 2-14                   --               [1024, 16]       [1024, 1]        9                True             --
    │    │    └─Linear (fc1): 3-32           --               [1024, 16]       [1024, 8]        136              True             139,264
    │    │    └─ReLU (relu): 3-33            --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Dropout (dropout): 3-34      --               [1024, 8]        [1024, 8]        --               --               --
    │    │    └─Linear (fc2): 3-35           --               [1024, 8]        [1024, 1]        (recursive)      True             9,216
    │    │    └─Sigmoid (sigmoid): 3-36      --               [1024, 1]        [1024, 1]        --               --               --
    ========================================================================================================================================
    Total params: 105,446
    Trainable params: 105,446
    Non-trainable params: 0
    Total mult-adds (M): 118.24
    ========================================================================================================================================
    Input size (MB): 2.04
    Forward/backward pass size (MB): 2.24
    Params size (MB): 0.36
    Estimated Total Size (MB): 4.64
    ========================================================================================================================================
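
    Note: torchinfo prints (recursive) in the Param # column when the same module instance is invoked more than once in a forward pass, so its parameters are counted only the first time. A minimal sketch that reproduces the marker (the SharedNet module below is hypothetical, not taken from this report):

    import torch.nn as nn
    from torchinfo import summary

    class SharedNet(nn.Module):
        """Hypothetical model that reuses a single Linear layer twice."""

        def __init__(self) -> None:
            super().__init__()
            self.fc = nn.Linear(16, 16)  # one instance, called twice below

        def forward(self, x):
            return self.fc(self.fc(x))  # the second call is reported as (recursive)

    summary(SharedNet(), input_size=(1, 16))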
    
    opened by github-0-searcher 2
  • estimate model size is different with nvidia-smi usage

    Describe the bug: the model size estimated by torchinfo differs from the GPU memory usage reported by nvidia-smi.

    To Reproduce

    1. Run the code below with the given command line.
    2. The code runs on the cuda:2 device.
    import argparse

    import timm
    import torch
    import torch.nn as nn
    import torchvision

    from torchinfo import summary

    # data load
    device_num = 2
    device = torch.device("cuda:" + str(device_num))

    num_classes = 2

    # receive the model name, augmentation, and scheduler settings
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', required=True)
    args = parser.parse_args()

    each_model = args.model

    # best model
    model = None
    if each_model == 'CvT-21':
        model = torch.load('../ref_model/whole_CvT-21-384x384-IN-1k_2class.pt')
    elif each_model == 'MLP-Mixer-b16':
        model = timm.create_model('mixer_b16_224', pretrained=True, num_classes=num_classes)
    elif each_model == 'Beit-base-patch16':
        model = timm.create_model('beit_base_patch16_224', pretrained=True, num_classes=num_classes)
    elif each_model == 'ViT-base-16':
        model = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=num_classes)
    elif each_model == 'ResNet101':
        model = timm.create_model('resnet101', pretrained=True, num_classes=num_classes)
    elif each_model == 'MobileNetV2':
        model = timm.create_model('mobilenetv2_100', pretrained=True, num_classes=num_classes)
    elif each_model == 'DenseNet121':
        model = timm.create_model('densenet121', pretrained=True, num_classes=num_classes)
    elif each_model == 'EfficientNetB0':
        model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=num_classes)
    elif each_model == 'ShuffleNetV2':
        model = torchvision.models.shufflenet_v2_x1_0(pretrained=True)
        num_f = model.fc.in_features
        model.fc = nn.Linear(num_f, num_classes)  # set the last linear layer's output to 2
    elif each_model == 'gmlp_s16':
        model = timm.create_model('gmlp_s16_224', pretrained=True, num_classes=num_classes)
    elif each_model == 'resmlp_24':
        model = timm.create_model('resmlp_24_224', pretrained=True, num_classes=num_classes)
    elif each_model == 'mobilevit-s':
        model = timm.create_model('mobilevit_s', pretrained=True, num_classes=num_classes)
    elif each_model == 'mobilevit-xs':
        model = timm.create_model('mobilevit_xs', pretrained=True, num_classes=num_classes)
    elif each_model == 'mobilevit-xxs':
        model = timm.create_model('mobilevit_xxs', pretrained=True, num_classes=num_classes)

    model = model.to(device)
    model.eval()

    summary(model, input_size=(1, 3, 224, 224), mode='eval', device=device)

    batch_size = 1
    data_shape = (3, 224, 224)
    random_data = torch.rand((batch_size, *data_shape)).to(device)

    with torch.no_grad():
        outputs = model(random_data)

    python img1_test_original_testset_serve_2c_gpumem_forgit.py --model 'MobileNetV2'

    Expected behavior: the memory usage reported by nvidia-smi matches torchinfo's estimated total size.

    Additional context: the two values differ by roughly 1000 MB. I have already checked https://github.com/TylerYep/torchinfo/issues/149#issue-1291452433, but I could not get values close to the nvidia-smi GPU usage from torchinfo's estimated total size.

    Is there anything I missed in the code, or is this caused by something other than my code?

    Thanks!

    Addendum: ShuffleNetV2 shows the same problem: python img1_test_original_testset_serve_2c_gpumem_forgit.py --model 'ShuffleNetV2'

    Addendum 2: as a simple check, I moved only a small input tensor to the GPU device, and even that produced substantial GPU usage in nvidia-smi:

    import torch
    device_num = 2
    device = torch.device("cuda:"+str(device_num))
    
    batch_size = 1
    data_shape = (3, 224, 224)
    random_data = torch.rand((batch_size, *data_shape)).to(device)
    

    Could this be related to the issue?
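
    Note: nvidia-smi reports the CUDA context that PyTorch creates on first use of a device, plus everything reserved by the caching allocator, while torchinfo only estimates tensor sizes; that alone typically accounts for a gap of several hundred MB. A minimal sketch, assuming a CUDA device is available, that separates these contributions using PyTorch's allocator statistics (torch.cuda.memory_allocated and torch.cuda.memory_reserved are real APIs; the model and device below are placeholders):

    import torch
    import torchvision
    from torchinfo import summary

    device = torch.device("cuda:0")  # placeholder; the report above uses cuda:2
    model = torchvision.models.mobilenet_v2().to(device)
    model.eval()

    summary(model, input_size=(1, 3, 224, 224), mode='eval', device=device)

    # Memory occupied by live tensors vs. memory reserved by the caching allocator.
    # nvidia-smi additionally includes the CUDA context, which torchinfo cannot see.
    print("allocated MB:", torch.cuda.memory_allocated(device) / 2**20)
    print("reserved  MB:", torch.cuda.memory_reserved(device) / 2**20)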

    opened by YHYeooooong 6
Releases(v1.7.1)
Owner: Tyler Yep ("Hi, I'm Tyler!")