
Runway ML Video Generator

What is Runway ML?

RunwayML is an AI-powered platform designed for creatives in video and image production, offering features like text-to-video, image manipulation, and real-time collaboration.

 

Training Presentation

Training Recording 

Runway ML Training

Dashboard

Home

Home is the landing page for your account and how you’ll access all of Runway’s most popular tools. You can navigate to the tool of your choice here, or search for it directly using the search bar at the top of your screen.

Additionally, any tool can be pinned for quick access from the homepage by selecting its star icon.

Workspace and Account Navigation

Manage Workspace-related settings with the user icon in the top left corner.

Manage your personal account-related details with the user icon in the top right corner.

Runway Watch

All films on Runway Watch are made and submitted entirely by Runway creators. If you need inspiration or just some creative food for thought, hop into Runway Watch and preview the different channels of content.

Library

Library is where you’ll access, organise, share, and manage generated content, Custom Elements, Video Editor Projects, and more.

Assets

The Assets tab contains everything you generate and upload to Runway. While most of your generations will be sorted automatically, you're also able to sort content and create folders to your personal liking.

Workspace Assets

Any assets shared with a Workspace can be found here.

Favorite Assets

Assets you favourite by clicking the heart icon can be found here. We recommend favouriting frequently used generated assets so you can locate them with ease.

Video Editor Projects

Runway's Video Editor Projects allow you to combine multiple video clips to create, edit, and export a composited video project. In this space, you can create and access your Video Editor Projects.

Tools

Quickly access our most popular tools:

  • Generative Session — Start a new Session to create with the latest generative video tools such as Gen-4 Image to Video, Text/Image to Video, Video to Video, Act-One, Expand Video, and more.
  • Generative Audio — Create with tools like Lip Sync, Text to Speech, and Speech to Speech.
  • Text to Image — Create images with a text prompt. NOTE: We will also have access to Midjourney; we recommend using Midjourney for image generation, as it is a stronger platform for stills than Runway ML.

Gen-4 Model

Gen-4 is a fast, controllable and flexible video generation model whose output can sit seamlessly beside live-action, animated and VFX content. It generates videos of 5 or 10 seconds from an input image and a text prompt you provide.

Prompting Basics

This section covers our recommended approach to prompting, but experimenting with prompt variations and patterns will allow you to discover what works best for your inputs and desired outcome.

Prompting for Iteration

The Gen-4 model thrives on prompt simplicity. Rather than starting with an overly complex prompt, we recommend beginning your session with a simple prompt, and iterating by adding more details as needed.

Begin with a foundational prompt that captures only the most essential motion to the scene. Once your basic motion works well, try adding different prompt elements to further refine the output:

  • Subject motion
  • Camera motion
  • Scene motion
  • Style descriptors

Adding one new element at a time will help you identify which additions improve your video, understand how different elements interact, and more effectively troubleshoot unexpected results.
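The layered approach above can be sketched as a small helper that assembles a prompt from optional elements. This is purely an illustration of the workflow, not part of Runway's tooling; the function name and structure are hypothetical:

```python
def build_prompt(base_motion, subject_motion=None, camera_motion=None,
                 scene_motion=None, style=None):
    """Assemble a prompt by layering optional elements onto a simple
    foundational motion description, one addition at a time."""
    parts = [base_motion, subject_motion, camera_motion, scene_motion, style]
    # Join the present elements into sentence-like fragments.
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p)

# Start with only the essential motion, then iterate:
v1 = build_prompt("the mechanical bull runs across the desert")
v2 = build_prompt("the mechanical bull runs across the desert",
                  camera_motion="a handheld camera tracks the subject",
                  scene_motion="dust trails behind it",
                  style="cinematic live-action")
```

Comparing the two outputs after each generation tells you exactly which added element changed the result.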

Below is an example prompt that conveys all of these ingredients:

Prompt: a handheld camera tracks the mechanical bull as it runs across the desert. the movement disturbs dust that trails behind the mechanical creature. cinematic live-action.

[Input image and example output video: Runway ML Gen-4]

 

Gen-3 Model

Camera Control

The Gen-4 model is an improvement on Gen-3; however, some settings available in Gen-3 are not available in Gen-4. A useful tool available in Gen-3 Turbo is Camera Control, which lets you use your mouse to set how far the camera zooms or tracks, rather than describing the motion in a prompt. The documentation below explains how it works:

Camera Control is currently available on the Gen-3 Alpha Turbo model, so select this model from the bottom left corner dropdown.

Next, select Camera from the left-hand toolbar:

This will bring you to the Camera Control prompting window where you'll upload an image, write your text prompt, and configure the control values.

Camera Control Directions

Camera Control has six movement direction options, as well as a Static Camera checkbox that prevents camera movement. Each direction has an accompanying example output video:

  • Horizontal — Camera moves across the X axis. Example prompt: "camera glides right"
  • Vertical — Camera moves across the Y axis. Example prompt: "camera slightly glides up"
  • Pan — Camera turns horizontally from a fixed point. Example prompt: "camera pans to position directly in front of the woman"
  • Tilt — Camera tilts vertically from a fixed point. Example prompt: "camera tilts to an upwards angle"
  • Zoom — Camera moves closer to or further from the focal point. Example prompt: "camera zooms out"
  • Roll — Camera rotates from a fixed point. Example prompt: "camera rotates to the right while maintaining focus on the subject"

Multiple camera controls can be combined for more complex camera motion. Pairing similar controls, such as Pan with Horizontal controls or Tilt with Vertical, can further improve the results.

Camera Control Values

The value for each setting represents the speed and intensity of the camera motion.

Each camera control defaults to 0, indicating that the control type is not active. Camera controls are not referenced at all when no values are active.

For example, if you type "zoom out" in your prompt but don't set any camera controls, the 0s won't override your text prompt. Likewise, setting all values to 0 does not produce an output with a static camera; it simply leaves Camera Control inactive.

The further a value is from 0, the more camera movement you'll receive in your output. For example, a Horizontal value of 10 provides the most intense movement to the right, while -10 provides the most intense movement to the left.
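As a rough mental model, the relationship between a control value and motion intensity can be sketched like this. This is a hypothetical helper, not part of Runway's interface; the band boundaries follow the documented value examples, and values falling between bands are assigned to the higher band here by assumption:

```python
def motion_intensity(value: float) -> str:
    """Map a Camera Control value (-10 to 10) to its speed band.
    0 means the control is inactive. Bands follow the documented
    examples: 0.1-1 minimal, 2-3 subtle, 4-6 moderate, 7-10 intense;
    in-between values (e.g. 1.5) fall into the higher band here."""
    if not -10 <= value <= 10:
        raise ValueError("Camera Control values range from -10 to 10")
    v = abs(value)  # direction (sign) does not affect intensity
    if v == 0:
        return "inactive"
    if v <= 1:
        return "minimal"
    if v <= 3:
        return "subtle"
    if v <= 6:
        return "moderate"
    return "intense"
```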

Each value range below has an accompanying example output video:

  • 0.1-1 — Minimal camera speed. Example: Zoom 0.1 with prompt "camera slightly zooms. natural motion."
  • 2-3 — Subtle camera speed. Example: Zoom 2.0 with prompt "camera slightly zooms. clouds and grass flow in the wind."
  • 4-6 — Moderate camera speed. Example: Zoom 5.0 with prompt "camera zooms. clouds and grass flow in the wind."
  • 7-10 — Intense camera speed. Example: Zoom 10.0 with prompt "camera soars at hyperspeed as it zooms into the monument."

How Camera Controls Interact with your Input Image

Camera Control values are not rigid; their effect varies depending on how far a subject is from the camera. A given numeric value applied to a subject-focused image will not produce the same style of results for a completely different, environment-focused input image.

Text prompts

While not required, complementing the camera controls with a text prompt will greatly improve controllability and overall adherence to the movement you envision, especially when using higher Camera Control values.

For example, an intense zoom out shot might benefit from a text prompt that describes the desired scene at the end of the clip. If these details aren’t provided, the model will fill in the remaining scene to the best of its ability, which could lead to less intentional results.

Your text prompt can also be used to indicate character and scene motion.

The examples below use the same Zoom value of -10 with no prompt, an end-scene description, and a subject motion prompt respectively. Each has an accompanying example output video:

  • Zoom: -10 — no text prompt
  • Zoom: -10 — "the camera zooms out. the subject stands in a clearing surrounded by a tall population of matured cacti."
  • Zoom: -10 — "the camera zooms out as the subject begins running towards the camera."

Static Camera

The Static Camera checkbox will help reduce camera motion in your output video. This setting will yield the most consistent results when using realistic and cinematic input images, but don't be afraid to experiment with different types of inputs.

Including a text prompt to guide subject and scene motion is recommended when using this setting:

Each example below has an accompanying output video:

  • No text prompt
  • "the woman dynamically swings back and forth. she gently kicks out her legs and swings towards and away from the camera. dynamic motion."

Video Expansion

Expand Video reimagines and reframes existing videos into different formats. 

Resizing an asset with traditional cropping often leads to important details being removed. Instead of removing details, Expand Video adds beyond the edges of an input video to generate into a new format.

This article outlines the steps to create with Expand Video, different prompting approaches, and more.

Spec Information

  • Model — Gen-3 Alpha Turbo
  • Cost — 25 credits (5 seconds or less); 50 credits (longer than 5 seconds)
  • Maximum duration — 10 seconds
  • Minimum input dimensions — 620x620
  • Maximum input size — 64 MB
  • Explore Mode on Unlimited Plans — Yes
  • Platform availability — Web
  • Output resolutions — 1280x768 (landscape); 768x1280 (vertical)
 

Best Practices for Input Videos

  • Close to a traditional aspect ratio
  • Minimum of 620 pixels in width and height
  • Subject (if any) is close to the center of frame
  • No graphics or text
  • Used with a text prompt or matching guidance image

Step 1 – Selecting a Video to Expand

Begin by navigating to Generative Session in your Runway Dashboard and select Expand Video from the left-hand tool bar.

Next, select the video to expand. You can choose an existing video directly from your Assets, or upload a new one by dragging and dropping. Use a video that follows the best practices listed above for the best results.

Expand Video will reframe the input video to either a landscape or vertical format, depending on the dimensions of the input video you select:

  • Landscape input — 768x1280 (vertical) output
  • Vertical input — 1280x768 (landscape) output
  • Square input — your choice of 768x1280 or 1280x768

In summary, an input video with a larger width than height will create a vertical output, and a larger height than width will result in a landscape output. You can choose between a vertical or landscape output when using a square input video.

Input videos must be a minimum of 620 pixels in width and height.

Videos that are much narrower than traditional aspect ratios in either direction are not accepted. For example, a 620x1280 video is 100px narrower than a traditional 9:16 (720x1280) vertical video and would result in an error upon selection.
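The input constraints above can be sketched as a pre-flight check. Note this is a hypothetical illustration: Runway does not publish the exact cutoff for "much narrower than traditional", so a 9:16 short-side ratio is assumed here, which is consistent with the 620x1280 rejection example:

```python
def check_expand_input(width: int, height: int) -> str:
    """Validate an Expand Video input and predict the output format.
    Inputs must be at least 620 px on each side and must not be much
    narrower than a traditional aspect ratio; the assumed cutoff is
    that the short side is at least 9/16 of the long side."""
    if min(width, height) < 620:
        raise ValueError("Input must be at least 620 px in width and height")
    if min(width, height) / max(width, height) < 9 / 16:
        raise ValueError("Input is narrower than a traditional aspect ratio")
    if width > height:
        return "768x1280 (vertical)"   # landscape input -> vertical output
    if height > width:
        return "1280x768 (landscape)"  # vertical input -> landscape output
    return "768x1280 or 1280x768 (your choice)"  # square input
```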

After selecting the video, you’ll see a boundary box preview that indicates the updated dimensions you’ll receive after running the generation.

You can choose the desired output size with the dimension selection button when using a square input video:

Step 2 – Configuring the Expand Video Prompt

Expand Video will work well in most cases without a prompt, but including one will give you more control over the output.

Using a Text Prompt

When writing your Expand Video prompt, focus primarily on describing what you want to see in the expanded areas.

While the input video will guide most of the scene details automatically, you can make brief references to it to ensure visual consistency. However, avoid detailed descriptions of the existing video content, as this may lead to unexpected results.

Using a Guidance Image

You can also add a guidance image by selecting the Add First Frame icon or dragging and dropping an image to the canvas.

Using an expanded image of the input video’s first frame will provide the best results when using a guidance image. You do not need to include a text prompt when using an input image to guide the generation.

If you use an image that doesn't exactly match or align with the video's first frame, it may cause a brief visual jump when the output video starts playing. If this occurs, ensure that your image aligns properly with the video or use a text prompt instead. Alternatively, you could also trim the first frame from the video.

Below are examples of the same input video with different prompting approaches, each with an accompanying example output video:

  • No text prompt, no guidance image
  • Text prompt "In a cave with lava in the foreground", no guidance image
  • No text prompt, with a guidance image

Step 3 – Generating and Reiterating the Output

Before beginning the generation, you can hover over the Duration icon to see the estimated credit costs.

After confirming the credit costs, click Generate to start processing the Expand Video generation.

Your generations will be scrollable through your session as you continue to generate. You can also access completed videos in your Assets, where they will save to the Generative Video folder by default.

Expanding a Video Multiple Times

You can expand a video multiple times by clicking the dropdown next to Reuse settings and then selecting Expand Video on a previously expanded output.

Creating Keyframes

Keyframes allow you to configure the starting, middle, and/or ending frames of a generation and create smooth transitions between them. Gen-3 Alpha supports first, middle, and last keyframes; in Gen-3 Alpha Turbo, you can input the first and last keyframes.

Step 1 – Selecting the Input Keyframes

Begin by navigating to Generative Session in your Dashboard.

From here, make sure either the Gen-3 Alpha or Gen-3 Alpha Turbo model is selected from the bottom left corner dropdown.

Drag and drop a new image or select an existing image from your Assets to configure your First Keyframe. Selecting the image will populate the Keyframe editor:

If you're in Turbo, click the empty Keyframes to add additional images. You can also drag and drop images here to upload them. 

You can hover over each input with your cursor to reveal controls for moving or removing a keyframe:

When choosing Keyframe images, keep in mind that the ability to receive the desired transition will be highly dependent on the complexity of your input images:

  • Images that share a similar subject, scene, and style will offer more consistent, natural, and smooth results. 
  • Images that greatly vary in subject, scene, or style may create more experimental or unexpected results.

With your images uploaded, you’re now ready to draft your prompt.

Step 2 – Drafting the Prompt

We strongly recommend including a text prompt before generating. 

You can start Keyframe generations without a prompt, but including a clear description that outlines the desired style of motion will offer more controllability and set your generation up for success.

Try to keep your prompt focused on the motion needed to take your video from the first frame to the last frame.

Step 3 – Generating the Video

After uploading your keyframes and drafting your prompt, you’re now ready to generate your video.

Select the desired duration from the dropdown menu next to the Generate button.

When choosing a duration, it may be helpful to once again consider the amount of difference between the two frames. 

More complex transitions, such as cases where the Last frame is completely different from the First frame, may benefit from the longer 10s duration. This would give the generation more time to smoothly transition between the two inputs.

Alternatively, choosing a 5s duration for completely different keyframes may result in more abrupt changes.
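As a rule of thumb, the duration guidance above reduces to the following sketch (illustrative only; "very different" is a judgment call you make when comparing your two keyframes):

```python
def keyframe_duration(frames_very_different: bool) -> int:
    """Choose a Keyframes clip duration in seconds: complex transitions
    between very different first and last frames benefit from 10 s,
    while similar frames transition smoothly within 5 s."""
    return 10 if frames_very_different else 5
```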

Click the Generate button once you're satisfied with your chosen settings.

Your generations will be scrollable through your session as you continue to generate. You can also access completed videos in your Assets, where they will save to the Generative Video folder by default.