How to use lhotse - common examples

To help you get started, we’ve selected a few lhotse examples, based on popular ways it is used in public projects.


github mpariente/AsSteroid: egs/MiniLibriMix/lhotse/train.py

import torch
from torch.utils.data import DataLoader

from lhotse import CutSet

# LhotseDataset, OnTheFlyMixing and PreMixedSourceSeparationDataset are
# local helpers defined in the recipe's local/dataset_wrapper.py.


def main(conf):
    # Training data is mixed on the fly.
    train_set = LhotseDataset(OnTheFlyMixing(), 300, 0)

    # Validation uses pre-mixed cuts described by gzipped YAML manifests.
    val_set = LhotseDataset(
        PreMixedSourceSeparationDataset(
            sources_set=CutSet.from_yaml('data/cuts_sources.yml.gz'),
            mixtures_set=CutSet.from_yaml('data/cuts_mix.yml.gz'),
            root_dir="."),
        300, 0)

    train_loader = DataLoader(train_set, shuffle=True,
                              batch_size=conf['training']['batch_size'],
                              num_workers=conf['training']['num_workers'],
                              drop_last=True)
    val_loader = DataLoader(val_set, shuffle=False,
                            batch_size=conf['training']['batch_size'],
                            num_workers=conf['training']['num_workers'],
                            drop_last=True)
    # Update the number of sources (it depends on the task):
    # conf['masknet'].update({'n_src': train_set.n_src})

    class Model(torch.nn.Module):
        def __init__(self, net):
            super(Model, self).__init__()
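Both loaders above set drop_last=True, so each epoch yields floor(len(dataset) / batch_size) batches and silently discards the ragged tail. A torch-free sketch of that batching rule (the batch size of 8 here is purely illustrative, not taken from the config):

```python
def num_batches(n_items, batch_size, drop_last=True):
    """How many batches a DataLoader-style iterator yields over n_items."""
    full, rem = divmod(n_items, batch_size)
    # drop_last=True discards the final partial batch, if any.
    return full if (drop_last or rem == 0) else full + 1

# With the 300-item datasets above and an illustrative batch size of 8:
print(num_batches(300, 8))                   # -> 37 (4 leftover items dropped)
print(num_batches(300, 8, drop_last=False))  # -> 38 (leftover kept as a short batch)
```

This is why drop_last is common on the training loader (stable batch shapes for the model) but is often set to False for validation, where dropping samples skews the metric.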
github mpariente/AsSteroid: egs/MiniLibriMix/lhotse/local/dataset_wrapper.py

def parse_yaml(y):
    """Map each recording_id to the list of its feature storage paths."""
    # load_yaml is the manifest-loading helper used elsewhere in this recipe.
    y = load_yaml(y)

    rec_ids = {}
    for entry in y:
        key = entry["features"]["recording_id"]
        rec_ids.setdefault(key, []).append(entry["features"]["storage_path"])
    return rec_ids
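parse_yaml groups feature storage paths by recording id. A self-contained sketch of the same grouping logic on hand-written entries (the load_yaml step is skipped, and the recording ids and paths below are made up for illustration; only the entry shape follows the snippet above):

```python
def group_by_recording(entries):
    """Same grouping as parse_yaml, minus the YAML-loading step."""
    rec_ids = {}
    for entry in entries:
        key = entry["features"]["recording_id"]
        # setdefault creates the list on first sight of a recording_id.
        rec_ids.setdefault(key, []).append(entry["features"]["storage_path"])
    return rec_ids

# Hypothetical manifest entries with the same nested shape as above:
entries = [
    {"features": {"recording_id": "rec1", "storage_path": "feats/rec1-0.llc"}},
    {"features": {"recording_id": "rec1", "storage_path": "feats/rec1-1.llc"}},
    {"features": {"recording_id": "rec2", "storage_path": "feats/rec2-0.llc"}},
]
grouped = group_by_recording(entries)
print(grouped["rec1"])  # -> ['feats/rec1-0.llc', 'feats/rec1-1.llc']
print(grouped["rec2"])  # -> ['feats/rec2-0.llc']
```

Paths for one recording come out in manifest order, since each list only ever receives appends.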

lhotse

Data preparation for training speech processing models.

License: Apache-2.0

Package Health Score: 82 / 100