How to use the fastai.__version__ attribute in fastai

To help you get started, we’ve selected a few fastai examples based on popular ways the library is used in public projects.
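
At its simplest, `fastai.__version__` is a plain string attribute on the top-level package; here is a minimal sketch (the `packaging` dependency is our assumption for the comparison, not something fastai requires):

import fastai
from packaging.version import Version  # packaging is a separate, widely available library

print(fastai.__version__)                             # e.g. '1.0.61'
assert Version(fastai.__version__) >= Version("1.0")  # guard against too-old installs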


github fastai / fastai / tests / test_utils_fastai.py View on Github
import fastai  # fastai exposes its version as a plain string attribute

def test_has_version():
    this_tests('na')  # this_tests is a registration helper from fastai's own test suite
    assert fastai.__version__
github Kaggle / docker-python / tests / test_fastai.py View on Github
def test_has_version(self):
        self.assertGreater(len(fastai.__version__), 1)  # the version string should be non-trivial
github fastai / fastai / tests / test_utils.py View on Github
import re

import fastai, torch
from fastai.utils.collect_env import show_install  # the module shown in the next snippet

def test_show_install(capsys):
    this_tests(show_install)
    show_install()
    captured = capsys.readouterr()
    #print(captured.out)
    match = re.findall(rf'fastai\s+: {re.escape(fastai.__version__)}', captured.out)  # escape the dots in the version string
    assert match
    match = re.findall(rf'torch\s+: {re.escape(torch.__version__)}', captured.out)
    assert match
github fastai / fastai / fastai / utils / collect_env.py View on Github
import subprocess

import fastprogress, torch  # used by this excerpt

def show_install(show_nvidia_smi:bool=False):
    "Print user's setup information: python -c 'import fastai; fastai.show_install()'"

    import platform, fastai.version

    rep = []
    opt_mods = []

    rep.append(["=== Software ===", None])

    rep.append(["python", platform.python_version()])
    rep.append(["fastai", fastai.__version__])
    rep.append(["fastprogress", fastprogress.__version__])
    rep.append(["torch",  torch.__version__])

    # nvidia-smi
    cmd = "nvidia-smi"
    have_nvidia_smi = False
    try:
        result = subprocess.run(cmd.split(), shell=False, check=False, stdout=subprocess.PIPE)
    except Exception:  # nvidia-smi missing or not executable; treat as "no GPU info"
        pass
    else:
        if result.returncode == 0 and result.stdout:
            have_nvidia_smi = True

    # XXX: if nvidia-smi is not available, another check could be:
    # /proc/driver/nvidia/version on most systems, since it's the
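
The truncated comment above points at /proc/driver/nvidia/version as an alternative probe; a minimal sketch of that fallback (our illustration, not fastai's code) could look like:

from pathlib import Path

def nvidia_driver_version():
    # On most Linux systems the NVIDIA kernel driver reports itself here
    p = Path('/proc/driver/nvidia/version')
    return p.read_text().splitlines()[0] if p.exists() else None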
github microsoft / computervision-recipes / classification / python / 03_training_accuracy_vs_speed.py View on Github
#   * [Testing parameters](#testing-parameters)
# * [Appendix](#appendix)
#   * [Learning rate](#appendix-learning-rate)
#   * [Image size](#appendix-imsize)
#   * [How we found good parameters](#appendix-good-parameters)

# # Training a High Accuracy, Fast Inference, or Small Size Classifier <a name="model"></a>

# Let's first verify our fast.ai version:

# In[1]:


import fastai

fastai.__version__  # echoed as cell output in a notebook; use print() when running as a script


# Ensure edits to libraries are loaded and plotting is shown in the notebook.

# In[2]:


get_ipython().run_line_magic("reload_ext", "autoreload")
get_ipython().run_line_magic("autoreload", "2")
get_ipython().run_line_magic("matplotlib", "inline")


# Import all the functions we need.

# In[3]:
github microsoft / computervision-recipes / similarity / notebooks / script_exploring_hyperparameters.py View on Github
    valid_features = compute_features_learner(
        data, DatasetType.Valid, learn, embedding_layer
    )

    # For each comparative set compute the distances between the query image and all reference images
    for cs in comparative_sets:
        cs.compute_distances(valid_features)

    # Compute the median rank of the positive example over all comparative sets
    ranks = positive_image_ranks(comparative_sets)
    median_rank = np.median(ranks)
    return median_rank


if __name__ == "__main__":
    print(f"Fast.ai version = {fastai.__version__}")

    # -------------------------------------------------------
    #  IMAGE SIMILARITY CODE
    # -------------------------------------------------------
    if False:  # flip to True to run this exploration block
        # Set dataset, model and evaluation parameters
        DATA_PATH = (
            "C:/Users/pabuehle/Desktop/ComputerVision/data/tiny"
        )  # unzip_url(Urls.fridge_objects_tiny_path, exist_ok=True)

        # DNN configuration and learning parameters
        EPOCHS_HEAD = 0
        EPOCHS_BODY = 0
        LEARNING_RATE = 1e-4
        DROPOUT_RATE = 0.5
        BATCH_SIZE = (
github asvcode / Vision_UI / vision_ui.py View on Github
def on_button_clicked_info(b):
        with out:
            clear_output()
            print(f'Fastai Version: {fastai.__version__}')
            print(f'Cuda: {torch.cuda.is_available()}')
            print(f'GPU: {torch.cuda.get_device_name(0)}')  # raises if no CUDA device; see the guarded sketch below
            print(f'Python version: {sys.version}')
            print(psutil.cpu_percent())
            print(psutil.virtual_memory())  # physical memory usage
            print('memory % used:', psutil.virtual_memory()[2])
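
Note that torch.cuda.get_device_name(0) in the snippet above raises on machines without a working CUDA setup; a guarded variant (a minimal sketch, not from the repo) looks like:

import torch

def print_gpu_info():
    # Query the device name only when CUDA is actually available
    if torch.cuda.is_available():
        print(f'GPU: {torch.cuda.get_device_name(0)}')
    else:
        print('GPU: none detected (running on CPU)')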
github microsoft / computervision-recipes / classification / python / 11_exploring_hyperparameters.py View on Github
# Let's say we want to learn more about __how different learning rates and different image sizes affect our model's accuracy when restricted to 10 epochs__, and we want to build an experiment to test out these hyperparameters. We also want to try these parameters out on two different variations of the dataset - one where the images are kept raw (maybe there is a watermark on the image) and one where the images have been altered (the same dataset where there was some attempt to remove the watermark).
#
# In this notebook, we'll walk through how to use the Parameter Sweeper module to:
#
# - run the experiment from Python
# - run the experiment from the CLI
# - evaluate the results using pandas (a generic sweep sketch follows this list)
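#
# As a point of reference, here is a minimal, hypothetical sketch of such a grid
# sweep using plain itertools; the repo's Parameter Sweeper wraps this pattern,
# and `train_and_score` below is a stand-in we assume, not part of the repo:

import itertools

def train_and_score(lr: float, im_size: int, epochs: int = 10) -> float:
    """Hypothetical stand-in: train for `epochs` epochs and return validation accuracy."""
    return 0.0  # replace with a real fastai training run

results = {
    (lr, im_size): train_and_score(lr, im_size)
    for lr, im_size in itertools.product([1e-3, 1e-4, 1e-5], [299, 499])
}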

# Check the fastai version.

# In[1]:


import fastai

fastai.__version__


# Ensure edits to libraries are loaded and plotting is shown in the notebook.

# In[2]:


get_ipython().run_line_magic("reload_ext", "autoreload")
get_ipython().run_line_magic("autoreload", "2")
get_ipython().run_line_magic("matplotlib", "inline")


# We start by importing the utilities we need.

# In[3]:
github microsoft / computervision-recipes / classification / python / 02_multilabel_classification.py View on Github
partial,
)

# local modules
from utils_cv.classification.model import (
    TrainMetricsRecorder,
    hamming_accuracy,
    zero_one_accuracy,
    get_optimal_threshold,
)
from utils_cv.classification.plot import plot_thresholds
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from utils_cv.common.gpu import which_processor

print(f"Fast.ai version = {fastai.__version__}")
which_processor()


# Like before, we set some parameters. This time, we can use one of the multi-label datasets that come with this repo.

# In[3]:


DATA_PATH = unzip_url(Urls.multilabel_fridge_objects_path, exist_ok=True)
EPOCHS = 10
LEARNING_RATE = 1e-4
IM_SIZE = 300
BATCH_SIZE = 16
ARCHITECTURE = models.resnet18
github microsoft / computervision-recipes / classification / python / 00_webcam.py View on Github
import io
import os
import time
import urllib.request

import fastai
from fastai.vision import models, open_image
from ipywebrtc import CameraStream, ImageRecorder
from ipywidgets import HBox, Label, Layout, Widget

from utils_cv.common.data import data_path
from utils_cv.common.gpu import which_processor
from utils_cv.classification.data import imagenet_labels
from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner

print(f"Fast.ai: {fastai.__version__}")
which_processor()


# ## 1. Load Pretrained Model
# 
# We use a pretrained<sup>*</sup> ResNet18, which is relatively small and fast among the well-known CNN architectures. The [reported error rate](https://pytorch.org/docs/stable/torchvision/models.html) of the model on ImageNet is 30.24% for top-1 and 10.92% for top-5 (the five labels the model considers most probable).
# 
# The model expects input RGB images to be loaded into the range [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225], as defined in [`fastai.vision.imagenet_stats`](https://github.com/fastai/fastai/blob/master/fastai/vision/data.py#L78).
# 
# The output of the model is a probability distribution over the ImageNet classes. To convert it into human-readable labels, we use the label JSON file used by [Keras](https://github.com/keras-team/keras/blob/master/keras/applications/imagenet_utils.py).
# 
# > \* The model is pretrained on ImageNet. Note that you can load your own model with `learn = load_learner(path)`. To learn more about model export and loading, see the fastai [docs](https://docs.fast.ai/basic_train.html#Deploying-your-model).
# 
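# As an illustration (ours, not the notebook's), this is roughly what the
# normalization described above does; `imagenet_stats` ships with fastai v1.

# In[ ]:


import torch
from fastai.vision import imagenet_stats

mean, std = map(torch.tensor, imagenet_stats)
img = torch.rand(3, 224, 224)  # stand-in RGB image already scaled to [0, 1]
img_norm = (img - mean[:, None, None]) / std[:, None, None]  # per-channel normalization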

# In[3]: