Delete an entire directory tree with Python | shutil.rmtree() method


shutil.rmtree() is used to delete an entire directory tree; the path must point to a directory (not to a symbolic link to a directory).

Syntax: shutil.rmtree(path, ignore_errors=False, onerror=None)

Parameters:
path: A path-like object representing a file path. A path-like object is either a string or a bytes object representing a path.
ignore_errors: If ignore_errors is true, errors resulting from failed removals will be ignored.
onerror: If ignore_errors is false or omitted, such errors are handled by calling a handler specified by onerror.

Example 1: Suppose the directory and subdirectories are as follows.

The parent directory Python.Engineering contains the subdirectory Authors, which in turn holds one or more files.

We want to delete the Authors directory. Below is the implementation.

# Python program to demonstrate
# shutil.rmtree()

import shutil
import os

# location
location = "D:/Pycharm projects/Python.Engineering/"

# directory to remove
directory = "Authors"

# full path
path = os.path.join(location, directory)

# delete the directory tree
shutil.rmtree(path)

Output: the Authors directory and everything inside it is removed.
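To confirm the removal, a quick existence check (a minimal sketch, reusing the path variable from the example above) should now report False:

import os

# the tree is gone, so the path no longer exists
print(os.path.exists(path))  # False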

Example 2: passing ignore_errors=False. Since the Authors directory was already removed in Example 1, the call now raises FileNotFoundError.

# Python program to demonstrate
# shutil.rmtree()

import shutil
import os

# location
location = "D:/Pycharm projects/Python.Engineering/"

# directory to remove
directory = "Authors"

# full path
path = os.path.join(location, directory)

# delete the directory tree
shutil.rmtree(path, ignore_errors=False)

# passing ignore_errors=True instead would suppress
# the FileNotFoundError shown below

Output:

Traceback (most recent call last):
  File "D:/Pycharm projects/gfg/gfg.py", line 16, in <module>
    shutil.rmtree(path, ignore_errors=False)
  File "C:\Users\Nikhil Aggarwal\AppData\Local\Programs\Python\Python38-32\lib\shutil.py", line 730, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\Nikhil Aggarwal\AppData\Local\Programs\Python\Python38-32\lib\shutil.py", line 589, in _rmtree_unsafe
    onerror(os.scandir, path, sys.exc_info())
  File "C:\Users\Nikhil Aggarwal\AppData\Local\Programs\Python\Python38-32\lib\shutil.py", line 586, in _rmtree_unsafe
    with os.scandir(path) as scandir_it:
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'D:/Pycharm projects/Python.Engineering/Authors'

Example 3: passing onerror.
The function passed to onerror must accept three parameters:

  • function: the function that raised the exception.
  • path: the path argument that caused the removal to fail.
  • excinfo: the exception information returned by sys.exc_info().

Below is the implementation.

# Python program to demonstrate
# shutil.rmtree()

import shutil
import os

# error handler: called once for each failed removal
def handler(func, path, exc_info):
    print("Inside handler")
    print(exc_info)

# location
location = "D:/Pycharm projects/Python.Engineering/"

# directory to remove
directory = "Authors"

# full path
path = os.path.join(location, directory)

# delete the directory tree, routing errors to the handler
shutil.rmtree(path, onerror=handler)

Output:

Inside handler
(<class 'FileNotFoundError'>, FileNotFoundError(2, 'The system cannot find the path specified'), <traceback object at 0x...>)
Inside handler
(<class 'FileNotFoundError'>, FileNotFoundError(2, 'The system cannot find the file specified'), <traceback object at 0x...>)





Delete an entire directory tree with Python | shutil.rmtree() method: StackOverflow Questions

Answer #1

Python syntax to delete a file

import os
os.remove("/tmp/<file_name>.txt")

Or

import os
os.unlink("/tmp/<file_name>.txt")

Or, for Python version >= 3.4, use the pathlib library:

import pathlib

file_to_rem = pathlib.Path("/tmp/<file_name>.txt")
file_to_rem.unlink()

Path.unlink(missing_ok=False)

The unlink method removes a file or a symbolic link.

If missing_ok is false (the default), FileNotFoundError is raised if the path does not exist.
If missing_ok is true, FileNotFoundError exceptions will be ignored (same behavior as the POSIX rm -f command).
Changed in version 3.8: The missing_ok parameter was added.
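
For instance, a minimal sketch of the missing_ok behavior described above (the file name here is hypothetical):

from pathlib import Path

# Python 3.8+: succeeds silently even if the file is already gone
Path("/tmp/maybe_missing.txt").unlink(missing_ok=True)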

Best practice

  1. First, check whether the file or folder exists, and only then delete it. This can be achieved in two ways:
    a. os.path.isfile("/path/to/file")
    b. Use exception handling.

EXAMPLE for os.path.isfile

#!/usr/bin/python
import os
myfile="/tmp/foo.txt"

## If file exists, delete it ##
if os.path.isfile(myfile):
    os.remove(myfile)
else:    ## Show an error ##
    print("Error: %s file not found" % myfile)

Exception Handling

#!/usr/bin/python
import os

## Get input ##
myfile = input("Enter file name to delete: ")

## Try to delete the file ##
try:
    os.remove(myfile)
except OSError as e:  ## if failed, report it back to the user ##
    print ("Error: %s - %s." % (e.filename, e.strerror))

RESPECTIVE OUTPUT

Enter file name to delete : demo.txt
Error: demo.txt - No such file or directory.

Enter file name to delete : rrr.txt
Error: rrr.txt - Operation not permitted.

Enter file name to delete : foo.txt

Python syntax to delete a folder

shutil.rmtree()

Example for shutil.rmtree()

#!/usr/bin/python
import os
import sys
import shutil

# Get directory name
mydir = input("Enter directory name: ")

## Try to remove tree; if failed show an error using try...except on screen
try:
    shutil.rmtree(mydir)
except OSError as e:
    print ("Error: %s - %s." % (e.filename, e.strerror))

Answer #2

Tensorflow 2 Docs

Saving Checkpoints

Adapted from the docs

# -------------------------
# -----  Toy Context  -----
# -------------------------
import tensorflow as tf


class Net(tf.keras.Model):
    """A simple linear model."""

    def __init__(self):
        super(Net, self).__init__()
        self.l1 = tf.keras.layers.Dense(5)

    def call(self, x):
        return self.l1(x)


def toy_dataset():
    inputs = tf.range(10.0)[:, None]
    labels = inputs * 5.0 + tf.range(5.0)[None, :]
    return (
        tf.data.Dataset.from_tensor_slices(dict(x=inputs, y=labels)).repeat().batch(2)
    )


def train_step(net, example, optimizer):
    """Trains `net` on `example` using `optimizer`."""
    with tf.GradientTape() as tape:
        output = net(example["x"])
        loss = tf.reduce_mean(tf.abs(output - example["y"]))
    variables = net.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return loss


# ----------------------------
# -----  Create Objects  -----
# ----------------------------

net = Net()
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(
    step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator
)
manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)

# ----------------------------
# -----  Train and Save  -----
# ----------------------------

ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
    print("Restored from {}".format(manager.latest_checkpoint))
else:
    print("Initializing from scratch.")

for _ in range(50):
    example = next(iterator)
    loss = train_step(net, example, opt)
    ckpt.step.assign_add(1)
    if int(ckpt.step) % 10 == 0:
        save_path = manager.save()
        print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
        print("loss {:1.2f}".format(loss.numpy()))


# ---------------------
# -----  Restore  -----
# ---------------------

# In another script, re-initialize objects
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(
    step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator
)
manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)

# Re-use the manager code above ^

ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
    print("Restored from {}".format(manager.latest_checkpoint))
else:
    print("Initializing from scratch.")

for _ in range(50):
    example = next(iterator)
    # Continue training or evaluate etc.

More from the docs

Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.

The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).

(Highlights are my own)
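
To make the contrast concrete, here is a minimal sketch of the TF2 SavedModel round trip (reusing the Net class from above; the export directory name is my own):

import tensorflow as tf

net = Net()
# build the variables by tracing the model once
net(tf.ones((1, 1)))

# serialize the computation and the weights together
tf.saved_model.save(net, "./saved_net")

# load it back; the original Python class is not required
restored = tf.saved_model.load("./saved_net")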


Tensorflow < 2


From the docs:

Save

# Create some variables.
v1 = tf.get_variable("v1", shape=[3], initializer = tf.zeros_initializer)
v2 = tf.get_variable("v2", shape=[5], initializer = tf.zeros_initializer)

inc_v1 = v1.assign(v1+1)
dec_v2 = v2.assign(v2-1)

# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, and save the
# variables to disk.
with tf.Session() as sess:
  sess.run(init_op)
  # Do some work with the model.
  inc_v1.op.run()
  dec_v2.op.run()
  # Save the variables to disk.
  save_path = saver.save(sess, "/tmp/model.ckpt")
  print("Model saved in path: %s" % save_path)

Restore

tf.reset_default_graph()

# Create some variables.
v1 = tf.get_variable("v1", shape=[3])
v2 = tf.get_variable("v2", shape=[5])

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
  # Restore variables from disk.
  saver.restore(sess, "/tmp/model.ckpt")
  print("Model restored.")
  # Check the values of the variables
  print("v1 : %s" % v1.eval())
  print("v2 : %s" % v2.eval())

simple_save

Many good answers; for completeness I'll add my 2 cents: simple_save. Also a standalone code example using the tf.data.Dataset API.

Python 3; Tensorflow 1.14

import tensorflow as tf
from tensorflow.saved_model import tag_constants

with tf.Graph().as_default():
    with tf.Session() as sess:
        ...

        # Saving
        inputs = {
            "batch_size_placeholder": batch_size_placeholder,
            "features_placeholder": features_placeholder,
            "labels_placeholder": labels_placeholder,
        }
        outputs = {"prediction": model_output}
        tf.saved_model.simple_save(
            sess, "path/to/your/location/", inputs, outputs
        )

Restoring:

graph = tf.Graph()
with graph.as_default():
    with tf.Session() as sess:
        tf.saved_model.loader.load(
            sess,
            [tag_constants.SERVING],
            "path/to/your/location/",
        )
        batch_size_placeholder = graph.get_tensor_by_name("batch_size_placeholder:0")
        features_placeholder = graph.get_tensor_by_name("features_placeholder:0")
        labels_placeholder = graph.get_tensor_by_name("labels_placeholder:0")
        prediction = graph.get_tensor_by_name("dense/BiasAdd:0")

        sess.run(prediction, feed_dict={
            batch_size_placeholder: some_value,
            features_placeholder: some_other_value,
            labels_placeholder: another_value
        })

Standalone example

Original blog post

The following code generates random data for the sake of the demonstration.

  1. We start by creating the placeholders. They will hold the data at runtime. From them, we create the Dataset and then its Iterator. We get the iterator's generated tensor, called input_tensor, which will serve as input to our model.
  2. The model itself is built from input_tensor: a GRU-based bidirectional RNN followed by a dense classifier. Because why not.
  3. The loss is a softmax_cross_entropy_with_logits, optimized with Adam. After 2 epochs (of 2 batches each), we save the "trained" model with tf.saved_model.simple_save. If you run the code as is, then the model will be saved in a folder called simple/ in your current working directory.
  4. In a new graph, we then restore the saved model with tf.saved_model.loader.load. We grab the placeholders and logits with graph.get_tensor_by_name and the Iterator initializing operation with graph.get_operation_by_name.
  5. Lastly we run an inference for both batches in the dataset, and check that the saved and restored model both yield the same values. They do!

Code:

import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants


def model(graph, input_tensor):
    """Create the model which consists of
    a bidirectional rnn (GRU(10)) followed by a dense classifier

    Args:
        graph (tf.Graph): Tensors' graph
        input_tensor (tf.Tensor): Tensor fed as input to the model

    Returns:
        tf.Tensor: the model's output layer Tensor
    """
    cell = tf.nn.rnn_cell.GRUCell(10)
    with graph.as_default():
        ((fw_outputs, bw_outputs), (fw_state, bw_state)) = tf.nn.bidirectional_dynamic_rnn(
            cell_fw=cell,
            cell_bw=cell,
            inputs=input_tensor,
            sequence_length=[10] * 32,
            dtype=tf.float32,
            swap_memory=True,
            scope=None)
        outputs = tf.concat((fw_outputs, bw_outputs), 2)
        mean = tf.reduce_mean(outputs, axis=1)
        dense = tf.layers.dense(mean, 5, activation=None)

        return dense


def get_opt_op(graph, logits, labels_tensor):
    """Create optimization operation from model's logits and labels

    Args:
        graph (tf.Graph): Tensors' graph
        logits (tf.Tensor): The model's output without activation
        labels_tensor (tf.Tensor): Target labels

    Returns:
        tf.Operation: the operation performing a step of Adam optimizer
    """
    with graph.as_default():
        with tf.variable_scope("loss"):
            loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
                    logits=logits, labels=labels_tensor, name="xent"),
                    name="mean-xent"
                    )
        with tf.variable_scope("optimizer"):
            opt_op = tf.train.AdamOptimizer(1e-2).minimize(loss)
        return opt_op


if __name__ == "__main__":
    # Set random seed for reproducibility
    # and create synthetic data
    np.random.seed(0)
    features = np.random.randn(64, 10, 30)
    labels = np.eye(5)[np.random.randint(0, 5, (64,))]

    graph1 = tf.Graph()
    with graph1.as_default():
        # Random seed for reproducibility
        tf.set_random_seed(0)
        # Placeholders
        batch_size_ph = tf.placeholder(tf.int64, name="batch_size_ph")
        features_data_ph = tf.placeholder(tf.float32, [None, None, 30], "features_data_ph")
        labels_data_ph = tf.placeholder(tf.int32, [None, 5], "labels_data_ph")
        # Dataset
        dataset = tf.data.Dataset.from_tensor_slices((features_data_ph, labels_data_ph))
        dataset = dataset.batch(batch_size_ph)
        iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
        dataset_init_op = iterator.make_initializer(dataset, name="dataset_init")
        input_tensor, labels_tensor = iterator.get_next()

        # Model
        logits = model(graph1, input_tensor)
        # Optimization
        opt_op = get_opt_op(graph1, logits, labels_tensor)

        with tf.Session(graph=graph1) as sess:
            # Initialize variables
            tf.global_variables_initializer().run(session=sess)
            for epoch in range(3):
                batch = 0
                # Initialize dataset (could feed epochs in Dataset.repeat(epochs))
                sess.run(
                    dataset_init_op,
                    feed_dict={
                        features_data_ph: features,
                        labels_data_ph: labels,
                        batch_size_ph: 32
                    })
                values = []
                while True:
                    try:
                        if epoch < 2:
                            # Training
                            _, value = sess.run([opt_op, logits])
                            print("Epoch {}, batch {} | Sample value: {}".format(epoch, batch, value[0]))
                            batch += 1
                        else:
                            # Final inference
                            values.append(sess.run(logits))
                            print("Epoch {}, batch {} | Final inference | Sample value: {}".format(epoch, batch, values[-1][0]))
                            batch += 1
                    except tf.errors.OutOfRangeError:
                        break
            # Save model state
            print("\nSaving...")
            cwd = os.getcwd()
            path = os.path.join(cwd, "simple")
            shutil.rmtree(path, ignore_errors=True)
            inputs_dict = {
                "batch_size_ph": batch_size_ph,
                "features_data_ph": features_data_ph,
                "labels_data_ph": labels_data_ph
            }
            outputs_dict = {
                "logits": logits
            }
            tf.saved_model.simple_save(
                sess, path, inputs_dict, outputs_dict
            )
            print("Ok")
    # Restoring
    graph2 = tf.Graph()
    with graph2.as_default():
        with tf.Session(graph=graph2) as sess:
            # Restore saved values
            print("\nRestoring...")
            tf.saved_model.loader.load(
                sess,
                [tag_constants.SERVING],
                path
            )
            print("Ok")
            # Get restored placeholders
            labels_data_ph = graph2.get_tensor_by_name("labels_data_ph:0")
            features_data_ph = graph2.get_tensor_by_name("features_data_ph:0")
            batch_size_ph = graph2.get_tensor_by_name("batch_size_ph:0")
            # Get restored model output
            restored_logits = graph2.get_tensor_by_name("dense/BiasAdd:0")
            # Get dataset initializing operation
            dataset_init_op = graph2.get_operation_by_name("dataset_init")

            # Initialize restored dataset
            sess.run(
                dataset_init_op,
                feed_dict={
                    features_data_ph: features,
                    labels_data_ph: labels,
                    batch_size_ph: 32
                }

            )
            # Compute inference for both batches in dataset
            restored_values = []
            for i in range(2):
                restored_values.append(sess.run(restored_logits))
                print("Restored values: ", restored_values[i][0])

    # Check if original inference and restored inference are equal
    valid = all((v == rv).all() for v, rv in zip(values, restored_values))
    print("\nInferences match: ", valid)

This will print:

$ python3 save_and_restore.py

Epoch 0, batch 0 | Sample value: [-0.13851789 -0.3087595   0.12804556  0.20013677 -0.08229901]
Epoch 0, batch 1 | Sample value: [-0.00555491 -0.04339041 -0.05111827 -0.2480045  -0.00107776]
Epoch 1, batch 0 | Sample value: [-0.19321944 -0.2104792  -0.00602257  0.07465433  0.11674127]
Epoch 1, batch 1 | Sample value: [-0.05275984  0.05981954 -0.15913513 -0.3244143   0.10673307]
Epoch 2, batch 0 | Final inference | Sample value: [-0.26331693 -0.13013336 -0.12553    -0.04276478  0.2933622 ]
Epoch 2, batch 1 | Final inference | Sample value: [-0.07730117  0.11119192 -0.20817074 -0.35660955  0.16990358]

Saving...
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b'/some/path/simple/saved_model.pb'
Ok

Restoring...
INFO:tensorflow:Restoring parameters from b'/some/path/simple/variables/variables'
Ok
Restored values:  [-0.26331693 -0.13013336 -0.12553    -0.04276478  0.2933622 ]
Restored values:  [-0.07730117  0.11119192 -0.20817074 -0.35660955  0.16990358]

Inferences match:  True

Answer #3

Here is a robust function that uses both os.remove and shutil.rmtree:

import os
import shutil

def remove(path):
    """ param <path> could either be relative or absolute. """
    if os.path.isfile(path) or os.path.islink(path):
        os.remove(path)  # remove the file
    elif os.path.isdir(path):
        shutil.rmtree(path)  # remove dir and all it contains
    else:
        raise ValueError("file {} is not a file or dir.".format(path))
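
Usage is then uniform across files, symlinks and directories (the paths here are hypothetical):

remove("/tmp/some_file.txt")  # removes a single file
remove("/tmp/some_folder")    # removes a whole directory tree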

Answer #4


Path objects from the Python 3.4+ pathlib module also expose these instance methods: Path.unlink() removes a file or symbolic link, and Path.rmdir() removes an empty directory.
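
A minimal sketch of both methods (the paths are hypothetical; note that rmdir() only removes empty directories, so recursive deletion still calls for shutil.rmtree):

from pathlib import Path

Path("/tmp/obsolete.txt").unlink()  # remove a file or symbolic link
Path("/tmp/empty_dir").rmdir()      # remove an *empty* directory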

Answer #5

import shutil

shutil.rmtree("/folder_name")

Standard Library Reference: shutil.rmtree.

By design, rmtree fails on folder trees containing read-only files. If you want the folder to be deleted regardless of whether it contains read-only files, then use

shutil.rmtree("/folder_name", ignore_errors=True)
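
Note that ignore_errors=True skips read-only files rather than deleting them. If they should be deleted too, a common pattern is an onerror handler that clears the read-only bit and retries; a sketch (the handler name is my own):

import os
import stat
import shutil

def force_remove_readonly(func, path, exc_info):
    # clear the read-only flag and retry the failed operation
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree("/folder_name", onerror=force_remove_readonly)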

Answer #6

You can use the built-in ast.literal_eval:

>>> import ast
>>> ast.literal_eval("{'muffin' : 'lolz', 'foo' : 'kitty'}")
{'muffin': 'lolz', 'foo': 'kitty'}

This is safer than using eval. As its own docs say:

>>> help(ast.literal_eval)
Help on function literal_eval in module ast:

literal_eval(node_or_string)
    Safely evaluate an expression node or a string containing a Python
    expression.  The string or node provided may only consist of the following
    Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
    and None.

For example:

>>> eval("shutil.rmtree('mongo')")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1, in <module>
  File "/opt/Python-2.6.1/lib/python2.6/shutil.py", line 208, in rmtree
    onerror(os.listdir, path, sys.exc_info())
  File "/opt/Python-2.6.1/lib/python2.6/shutil.py", line 206, in rmtree
    names = os.listdir(path)
OSError: [Errno 2] No such file or directory: 'mongo'
>>> ast.literal_eval("shutil.rmtree('mongo')")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/Python-2.6.1/lib/python2.6/ast.py", line 68, in literal_eval
    return _convert(node_or_string)
  File "/opt/Python-2.6.1/lib/python2.6/ast.py", line 67, in _convert
    raise ValueError('malformed string')
ValueError: malformed string

Answer #7

import os, shutil
folder = "/path/to/folder"
for filename in os.listdir(folder):
    file_path = os.path.join(folder, filename)
    try:
        if os.path.isfile(file_path) or os.path.islink(file_path):
            os.unlink(file_path)
        elif os.path.isdir(file_path):
            shutil.rmtree(file_path)
    except Exception as e:
        print("Failed to delete %s. Reason: %s" % (file_path, e))

Answer #8

Try shutil.rmtree:

import shutil
shutil.rmtree("/path/to/your/dir/")

Answer #9

  1. I believe this has already been answered by other users before me, so I only add it for the sake of completeness: the with statement simplifies exception handling by encapsulating common preparation and cleanup tasks in so-called context managers. More details can be found in PEP 343. For instance, the open statement is a context manager in itself, which lets you open a file, keep it open as long as the execution is in the context of the with statement where you used it, and close it as soon as you leave the context, no matter whether you have left it because of an exception or during regular control flow. The with statement can thus be used in ways similar to the RAII pattern in C++: some resource is acquired by the with statement and released when you leave the with context.

  2. Some examples are: opening files using with open(filename) as fp:, acquiring locks using with lock: (where lock is an instance of threading.Lock). You can also construct your own context managers using the contextmanager decorator from contextlib. For instance, I often use this when I have to change the current directory temporarily and then return to where I was:

    from contextlib import contextmanager
    import os
    
    @contextmanager
    def working_directory(path):
        current_dir = os.getcwd()
        os.chdir(path)
        try:
            yield
        finally:
            os.chdir(current_dir)
    
    with working_directory("data/stuff"):
        ...  # do something within data/stuff
    # here I am back again in the original working directory
    

    Here's another example that temporarily redirects sys.stdin, sys.stdout and sys.stderr to some other file handle and restores them later:

    from contextlib import contextmanager
    import sys
    
    @contextmanager
    def redirected(**kwds):
        stream_names = ["stdin", "stdout", "stderr"]
        old_streams = {}
        try:
            for sname in stream_names:
                stream = kwds.get(sname, None)
                if stream is not None and stream != getattr(sys, sname):
                    old_streams[sname] = getattr(sys, sname)
                    setattr(sys, sname, stream)
            yield
        finally:
            for sname, stream in old_streams.items():
                setattr(sys, sname, stream)
    
    with redirected(stdout=open("/tmp/log.txt", "w")):
        # these print statements will go to /tmp/log.txt
        print("Test entry 1")
        print("Test entry 2")
    # back to the normal stdout
    print("Back to normal stdout again")
    

    And finally, another example that creates a temporary folder and cleans it up when leaving the context (the standard-library equivalent is sketched right after this list):

    from tempfile import mkdtemp
    from shutil import rmtree
    
    @contextmanager
    def temporary_dir(*args, **kwds):
        name = mkdtemp(*args, **kwds)
        try:
            yield name
        finally:
            rmtree(name)
    
    with temporary_dir() as dirname:
        ...  # do whatever you want with dirname
    
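
Incidentally, the standard library ships this last pattern ready-made: since Python 3.2, tempfile.TemporaryDirectory is a context manager that creates a temporary folder and removes the whole tree on exit. A minimal sketch:

import tempfile

with tempfile.TemporaryDirectory() as dirname:
    ...  # use dirname freely; it is deleted automatically on exit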

Answer #10

You can delete the folder itself, as well as all its contents, using shutil.rmtree:

import shutil
shutil.rmtree("/path/to/folder")
The signature, from the Standard Library Reference:

shutil.rmtree(path, ignore_errors=False, onerror=None)

Delete an entire directory tree; path must point to a directory (but not a symbolic link to a directory). If ignore_errors is true, errors resulting from failed removals will be ignored; if false or omitted, such errors are handled by calling a handler specified by onerror or, if that is omitted, they raise an exception.
