Writing a Research Paper in LaTeX
Complete guide to writing academic research papers in LaTeX. Learn structure, citations, formatting requirements, and submission preparation for journals and conferences.
Master the complete workflow for writing professional research papers in LaTeX. This guide covers paper structure, academic writing conventions, bibliography management, journal requirements, and submission preparation.
Prerequisites: Basic LaTeX knowledge, understanding of academic writing
Time to complete: 40-45 minutes
Difficulty: Intermediate to Advanced
What you’ll learn: Paper structure, citations, formatting, journal templates, and submission process
Research Paper Overview
Standard Paper Structure
- Front Matter: title, authors, abstract, keywords
- Main Content: introduction, methods, results, discussion
- Back Matter: conclusions, references, appendices
- Supplementary: data, code, additional figures
Planning Your Paper
% Journal article -- typical structure:
% - Title page
% - Abstract (150-250 words)
% - Keywords (3-7 terms)
% - Introduction
% - Related Work
% - Methodology
% - Results
% - Discussion
% - Conclusion
% - References
% - Appendices (optional)
% Conference paper -- common structure:
% - Title and authors
% - Abstract (100-200 words)
% - Introduction
% - Background
% - Approach/Method
% - Evaluation
% - Related Work
% - Conclusion
% - References
% Page limit: 6-10 pages
% Technical report -- extended structure:
% - Cover page
% - Executive summary
% - Table of contents
% - Introduction
% - Literature review
% - Methodology
% - Results
% - Analysis
% - Recommendations
% - Conclusion
% - References
% - Appendices
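Whichever structure you follow, a practical way to manage it is to keep each section in its own file and pull everything into a master document. A minimal sketch; the file names are just examples:
% main.tex -- one file per section keeps drafting and revisions manageable
\documentclass{article}
\begin{document}
\input{sections/introduction}   % contains \section{Introduction} and its text
\input{sections/related-work}
\input{sections/methodology}
\input{sections/results}
\input{sections/discussion}
\input{sections/conclusion}
\end{document}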
Document Setup
Basic Research Paper Template
\documentclass[11pt, a4paper]{article}
% Essential packages
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath, amssymb, amsthm}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{algorithm2e}
\usepackage{listings}
\usepackage{hyperref}
\usepackage{cleveref}
% Bibliography
\usepackage[
backend=biber,
style=authoryear-comp,
sorting=nyt,
natbib=true
]{biblatex}
\addbibresource{references.bib}
% Custom commands
\newcommand{\keywords}[1]{\par\noindent\textbf{Keywords:} #1}
\newcommand{\email}[1]{\href{mailto:#1}{\texttt{#1}}}
% Theorem environments
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
% Document metadata
\title{Your Research Paper Title: A Comprehensive Study of Important Topics}
\author{
First Author\thanks{Corresponding author}\textsuperscript{1} \and
Second Author\textsuperscript{2} \and
Third Author\textsuperscript{1,2}
}
\date{}
\begin{document}
\maketitle
% Author affiliations
\begin{center}
\textsuperscript{1}Department of Computer Science, University Name\\
\textsuperscript{2}Research Institute, City, Country\\
\email{first.author@university.edu}
\end{center}
\begin{abstract}
The abstract should be a self-contained summary of your paper, typically 150-250 words. It should include: (1) motivation and problem statement, (2) approach/methodology, (3) main results, and (4) conclusions. Avoid citations in the abstract.
\end{abstract}
\keywords{keyword1, keyword2, keyword3, keyword4, keyword5}
\section{Introduction}
\label{sec:introduction}
The introduction should provide context and motivate your research. Start with the broad context, narrow down to your specific problem, state your contributions clearly, and outline the paper structure.
\subsection{Motivation}
Explain why this research is important...
\subsection{Contributions}
Our main contributions are:
\begin{itemize}
\item First contribution with brief description
\item Second contribution with impact
\item Third contribution and its novelty
\end{itemize}
\subsection{Paper Organization}
The remainder of this paper is organized as follows. \Cref{sec:related} reviews related work. \Cref{sec:methodology} presents our methodology. \Cref{sec:results} shows experimental results. \Cref{sec:discussion} discusses implications. \Cref{sec:conclusion} concludes the paper.
\section{Related Work}
\label{sec:related}
Review relevant literature, grouping by themes or approaches. Show how your work differs from and builds upon existing research.
\section{Methodology}
\label{sec:methodology}
Describe your approach in detail, allowing others to reproduce your work.
\section{Results}
\label{sec:results}
Present your findings objectively with appropriate visualizations.
\section{Discussion}
\label{sec:discussion}
Interpret results, discuss limitations, and suggest future work.
\section{Conclusion}
\label{sec:conclusion}
Summarize key findings and contributions.
\printbibliography
\appendix
\section{Supplementary Material}
Additional details, proofs, or data.
\end{document}
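Because this template loads biblatex with the biber backend, the bibliography requires a biber run between LaTeX passes. A typical build sequence, assuming the file is saved as paper.tex:
% Build sequence for biblatex + biber:
%   pdflatex paper
%   biber paper
%   pdflatex paper
%   pdflatex paper
% Or let latexmk handle the runs automatically:
%   latexmk -pdf paper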
Writing Best Practices
Academic Writing Style
% Professional academic writing conventions
% Clear, concise sentences
We present a novel algorithm for graph analysis. % Good
In this paper, we are going to present and discuss
a new and novel algorithm that we developed for
the purpose of analyzing graphs. % Too verbose
% Active voice for clarity
We conducted experiments... % Clear
Experiments were conducted... % Passive, less clear
% Precise language
Our method achieves 95\% accuracy. % Specific
Our method works well. % Vague
% Consistent terminology
\newcommand{\ourmethod}{GraphNet} % Define once
We propose \ourmethod{}, a neural network...
\ourmethod{} processes graphs efficiently...
% Professional tone
The results demonstrate... % Professional
The results clearly prove that we were right... % Unprofessional
Figures and Tables
% Professional figure presentation
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\columnwidth]{results-plot}
\caption{Performance comparison of different methods. Our approach (red) consistently outperforms baselines across all datasets. Error bars indicate 95\% confidence intervals over 5 runs.}
\label{fig:results}
\end{figure}
% Subfigures for comparison (requires \usepackage{subcaption})
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.48\columnwidth}
\includegraphics[width=\linewidth]{method-a}
\caption{Baseline method}
\label{fig:baseline}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\columnwidth}
\includegraphics[width=\linewidth]{method-b}
\caption{Our method}
\label{fig:ourmethod}
\end{subfigure}
\caption{Visual comparison of processing pipelines. (a) shows the traditional approach while (b) illustrates our streamlined method.}
\label{fig:comparison}
\end{figure}
% Reference in text
As shown in \cref{fig:results}, our method achieves superior performance. The visual comparison in \cref{fig:comparison} highlights the efficiency gains.
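Tables follow the same conventions, using the booktabs package loaded in the preamble. By standard practice the caption sits above a table (and below a figure). A minimal example with placeholder values:
% Professional table presentation
\begin{table}[tbp]
\centering
\caption{Accuracy and runtime of each method (placeholder values).}
\label{tab:method-comparison}
\begin{tabular}{lcc}
\toprule
Method & Accuracy (\%) & Runtime (s) \\
\midrule
Baseline & 91.2 & 4.8 \\
Our method & 95.0 & 3.1 \\
\bottomrule
\end{tabular}
\end{table}
% Reference in text
\Cref{tab:method-comparison} summarizes the quantitative comparison.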
Citations and References
Bibliography Management
% Well-formatted bibliography entries
@article{smith2023deep,
title={Deep Learning for Graph Analysis: A Comprehensive Survey},
author={Smith, John and Doe, Jane and Johnson, Alice},
journal={IEEE Transactions on Neural Networks and Learning Systems},
volume={34},
number={5},
pages={2145--2168},
year={2023},
publisher={IEEE},
doi={10.1109/TNNLS.2023.1234567}
}
@inproceedings{doe2022efficient,
title={Efficient Graph Neural Networks for Large-Scale Applications},
author={Doe, Jane and Smith, John},
booktitle={Proceedings of the 39th International Conference on Machine Learning},
pages={3421--3430},
year={2022},
organization={PMLR},
url={https://proceedings.mlr.press/v162/doe22a.html}
}
@book{johnson2021graph,
title={Graph Theory and Machine Learning},
author={Johnson, Alice and Brown, Bob},
year={2021},
publisher={MIT Press},
address={Cambridge, MA},
edition={2nd},
isbn={978-0-262-04567-8}
}
@misc{brown2023preprint,
title={Scalable Graph Processing with Neural Networks},
author={Brown, Bob and Wilson, Carol},
year={2023},
eprint={2301.12345},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@phdthesis{wilson2022thesis,
title={Advanced Methods in Graph Neural Networks},
author={Wilson, Carol},
year={2022},
school={Stanford University},
address={Stanford, CA},
type={{Ph.D.} dissertation}
}
Managing Citations
% Best practices for citations
% Group related work
\subsection{Graph Neural Networks}
Early work on graph neural networks \citep{early2018, another2018}
focused on simple architectures. Recent advances
\citep{smith2023deep, doe2022efficient} have dramatically
improved performance.
% Cite primary sources
% Bad: GNNs were introduced [survey paper]
% Good: GNNs were introduced by \citet{original2009}
% Balance citations
Our work builds on three main areas:
\begin{itemize}
\item Graph theory \citep{graph1, graph2, graph3}
\item Neural networks \citep{nn1, nn2, nn3}
\item Optimization \citep{opt1, opt2, opt3}
\end{itemize}
% Recent and relevant
% Aim for recent papers (last 5 years) unless citing foundational work
% Include relevant conference and journal papers
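With natbib=true and the authoryear-comp style configured earlier, the two command families serve different purposes. A short sketch using the sample entries above; the exact rendering depends on options such as maxcitenames:
% Textual citation: author names become part of the sentence
\citet{smith2023deep} survey deep learning methods for graph analysis.
% Parenthetical citation: the whole reference sits in parentheses
Graph analysis has advanced rapidly \citep{smith2023deep, doe2022efficient}.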
Equations and Theorems
Mathematical Content
% Professional mathematical presentation
% Numbered equations for important results
\begin{equation}
f(x) = \sum_{i=1}^{n} w_i \phi_i(x) + b
\label{eq:model}
\end{equation}
% Unnumbered for intermediate steps
\begin{equation*}
\frac{\partial f}{\partial w_i} = \phi_i(x)
\end{equation*}
% Multi-line equations
\begin{align}
\mathcal{L}(\theta) &= \frac{1}{N} \sum_{i=1}^{N} \ell(f_\theta(x_i), y_i) \label{eq:loss1}\\
&= \frac{1}{N} \sum_{i=1}^{N} (f_\theta(x_i) - y_i)^2 \label{eq:loss2}\\
&\quad + \lambda \|\theta\|_2^2 \label{eq:loss3}
\end{align}
% Equation arrays for cases
\begin{equation}
\text{ReLU}(x) = \begin{cases}
x & \text{if } x > 0 \\
0 & \text{otherwise}
\end{cases}
\label{eq:relu}
\end{equation}
% Inline math
The complexity is $\mathcal{O}(n \log n)$ where $n$ is the input size.
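The theorem environments defined in the preamble are used like any other environment. A short sketch with a placeholder statement:
\begin{theorem}[Convergence]
\label{thm:convergence}
Under the stated assumptions, the iterates $\theta_t$ converge to a stationary point of $\mathcal{L}$.
\end{theorem}
\begin{proof}
Proof text goes here; amsthm appends the end-of-proof symbol automatically.
\end{proof}
% Reference in text
\Cref{thm:convergence} guarantees convergence of the training procedure.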
Journal Submission
Preparing for Submission
% Pre-submission checklist
% 1. Check journal requirements
% - Page limit
% - Format (single/double column)
% - Reference style
% - Figure resolution (usually 300 DPI)
% 2. Anonymous submission
\usepackage{xifthen}
\newboolean{anonymous}
\setboolean{anonymous}{true} % true while under review
\ifthenelse{\boolean{anonymous}}{
\author{Anonymous Authors\thanks{Author details withheld for double-blind review}}
}{
\author{Real Names\thanks{Actual affiliations}}
}
% 3. Supplementary material
% Create separate PDF with:
% - Additional experiments
% - Detailed proofs
% - Extended results
% - Code listings
% 4. Cover letter template
% Dear Editor,
% We submit our manuscript titled "..." for consideration...
% The main contributions are:
% 1. ...
% 2. ...
% This work has not been published elsewhere...
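Many venues also check that the PDF metadata matches the manuscript. With hyperref (already loaded in the template) the metadata can be set in the preamble; the strings below are placeholders, and pdfauthor should be left empty for double-blind review:
% 5. PDF metadata
\hypersetup{
  pdftitle={Your Research Paper Title},
  pdfauthor={First Author, Second Author},
  pdfkeywords={keyword1, keyword2, keyword3},
  hidelinks % no colored boxes around links in the submitted PDF
}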
Journal Templates
\documentclass[review]{elsarticle}
\usepackage{natbib}
\usepackage{graphicx}
\journal{Journal Name}
\begin{document}
\begin{frontmatter}
\title{Title}
\author[inst1]{First Author}
\author[inst2]{Second Author}
\address[inst1]{University One}
\address[inst2]{University Two}
\begin{abstract}
Abstract text...
\end{abstract}
\begin{keyword}
keyword1 \sep keyword2
\end{keyword}
\end{frontmatter}
\section{Introduction}
Main text...
\bibliography{refs}
\end{document}
\documentclass[twocolumn]{svjour3}
\usepackage{graphicx}
\begin{document}
\title{Your Title}
\author{First Author \and Second Author}
\institute{F. Author \at University}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
Abstract text...
\keywords{First \and Second}
\end{abstract}
\section{Introduction}
\label{intro}
Your text...
\bibliography{refs}
\end{document}
\documentclass[sigconf]{acmart}
\usepackage{graphicx}
\begin{document}
\title{Title}
\author{First Author}
\affiliation{%
\institution{University}
\city{City}
\country{Country}
}
\email{email@inst.edu}
\begin{abstract}
Abstract...
\end{abstract}
\keywords{keyword1, keyword2}
\maketitle
\section{Introduction}
Text...
\bibliographystyle{ACM-Reference-Format}
\bibliography{refs}
\end{document}
Responding to Reviews
Revision Management
% Track changes for revision
\usepackage{changes}
\definechangesauthor[name={Rev1}, color=blue]{R1}
\definechangesauthor[name={Rev2}, color=red]{R2}
\definechangesauthor[name={Rev3}, color=green]{R3}
% Address reviewer comments
\section{Introduction}
\added[id=R1]{We added this sentence to address Reviewer 1's concern about motivation.}
\deleted[id=R2]{This sentence was removed.}
\replaced[id=R2]{new text}{old text}
% Highlight changes
\usepackage{soul}
\newcommand{\revision}[1]{\hl{#1}}
% Alternative: latexdiff
% latexdiff original.tex revised.tex > diff.tex
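A separate point-by-point response letter is usually submitted alongside the revised manuscript. A minimal sketch of such a document; the environment and command names here are one possible convention, not a standard package:
% response.tex -- point-by-point reply to reviewers
\documentclass{article}
\usepackage[margin=1in]{geometry}
\newenvironment{reviewercomment}{\par\medskip\noindent\itshape}{\par\medskip}
\newcommand{\response}[1]{\noindent\textbf{Response:} #1\par\medskip}
\begin{document}
\section*{Response to Reviewer 1}
\begin{reviewercomment}
The motivation for the proposed method is unclear.
\end{reviewercomment}
\response{We expanded the Introduction with a motivating example and clarified the problem statement (p.~2, highlighted in blue).}
\end{document}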
Best Practices Summary
✅ Research paper checklist:
- Clear, descriptive title
- Structured abstract with all components
- Well-defined contributions
- Comprehensive literature review
- Reproducible methodology
- Objective results presentation
- Thoughtful discussion
- Strong conclusions
- Complete, formatted references
- Professional figures and tables
- Proofread thoroughly
- Check journal requirements
- Prepare supplementary materials
Complete Example
\documentclass[10pt, conference]{IEEEtran}
\usepackage{cite}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{hyperref}
\usepackage{cleveref}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\title{Deep Graph Neural Networks for\\Large-Scale Network Analysis}
\author{\IEEEauthorblockN{Jane Doe\textsuperscript{1}, John Smith\textsuperscript{2}, Alice Johnson\textsuperscript{1}}
\IEEEauthorblockA{\textsuperscript{1}Department of Computer Science, University Name\\
\textsuperscript{2}AI Research Lab, Tech Company\\
\{jdoe, ajohnson\}@university.edu, jsmith@company.com}}
\maketitle
\begin{abstract}
Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data. However, scaling GNNs to large networks remains challenging due to computational and memory constraints. In this paper, we propose ScaleGNN, a novel architecture that efficiently processes graphs with millions of nodes. Our key contributions are: (1) a hierarchical sampling strategy that preserves graph structure while reducing computational cost, (2) an adaptive aggregation mechanism that dynamically adjusts to local graph topology, and (3) a distributed training framework that enables processing of web-scale graphs. Extensive experiments on five large-scale datasets demonstrate that ScaleGNN achieves state-of-the-art performance while reducing training time by 73\% compared to existing methods. Our code is available at \url{https://github.com/example/scalegnn}.
\end{abstract}
\begin{IEEEkeywords}
graph neural networks, large-scale graphs, distributed learning, network analysis
\end{IEEEkeywords}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{G}{raph}-structured data is ubiquitous in modern applications, from social networks and recommendation systems to biological networks and knowledge graphs. The ability to effectively analyze these complex structures has become crucial for many domains. Traditional machine learning approaches struggle with graph data due to its irregular structure and complex dependencies.
Graph Neural Networks (GNNs) \cite{kipf2017semi} have revolutionized graph analysis by providing a principled framework for learning on graph-structured data. However, real-world graphs often contain millions or billions of nodes, presenting significant scalability challenges for existing GNN architectures.
\subsection{Motivation}
Consider a social network with a billion users, where we want to predict user interests for personalized recommendations. Existing GNN methods either:
\begin{itemize}
\item Require the entire graph to fit in memory, which is infeasible
\item Use sampling techniques that lose important structural information
\item Sacrifice model expressiveness for computational efficiency
\end{itemize}
These limitations motivate our work on ScaleGNN, which addresses all three challenges simultaneously.
\subsection{Contributions}
Our main contributions are:
\begin{enumerate}
\item \textbf{Hierarchical Importance Sampling}: We propose a novel sampling strategy that maintains critical graph structures while reducing the computational graph size by orders of magnitude.
\item \textbf{Adaptive Aggregation}: Our dynamic aggregation mechanism adjusts to local graph topology, allocating more capacity to complex neighborhoods.
\item \textbf{Distributed Framework}: We design a distributed training system that efficiently partitions and processes web-scale graphs across multiple machines.
\item \textbf{Comprehensive Evaluation}: We conduct extensive experiments showing that ScaleGNN outperforms state-of-the-art methods on five large-scale benchmarks while significantly reducing computational requirements.
\end{enumerate}
\section{Related Work}
\label{sec:related}
\subsection{Graph Neural Networks}
The concept of neural networks on graphs was first introduced by \cite{gori2005new}. Modern GNNs can be broadly categorized into spectral approaches \cite{bruna2014spectral, defferrard2016convolutional} and spatial approaches \cite{hamilton2017inductive, velivckovic2018graph}.
\subsection{Scalable GNN Training}
Recent work has focused on scaling GNNs through various techniques:
\textbf{Sampling-based methods}: GraphSAINT \cite{zeng2019graphsaint} uses subgraph sampling, while FastGCN \cite{chen2018fastgcn} employs layer-wise sampling. However, these methods often suffer from variance issues.
\textbf{Simplified architectures}: SGC \cite{wu2019simplifying} removes nonlinearities between layers, achieving linear complexity but with reduced expressiveness.
Our work differs by maintaining model expressiveness while achieving superior scalability through hierarchical sampling and distributed processing.
\section{Methodology}
\label{sec:method}
\subsection{Problem Formulation}
Let $G = (V, E, X)$ denote a graph with nodes $V$, edges $E$, and node features $X \in \mathbb{R}^{|V| \times d}$. Our goal is to learn node representations $Z \in \mathbb{R}^{|V| \times d'}$ that capture both local structure and global context.
\subsection{ScaleGNN Architecture}
The core innovation of ScaleGNN lies in its three-component design:
\begin{equation}
Z = \text{Distributed}(\text{Adaptive}(\text{HierSample}(G, X)))
\label{eq:scalegnn}
\end{equation}
\subsubsection{Hierarchical Importance Sampling}
We construct a hierarchy of graph abstractions:
\begin{equation}
G_0 \rightarrow G_1 \rightarrow \ldots \rightarrow G_L
\label{eq:hierarchy}
\end{equation}
where $G_l = (V_l, E_l)$ and $|V_{l+1}| < |V_l|$.
\begin{algorithm}
\caption{Hierarchical Importance Sampling}
\label{alg:sampling}
\begin{algorithmic}[1]
\REQUIRE Graph $G = (V, E)$, importance scores $s$
\ENSURE Hierarchy $\{G_0, G_1, \ldots, G_L\}$
\STATE $G_0 \leftarrow G$
\FOR{$l = 0$ to $L-1$}
\STATE $s_l \leftarrow$ ComputeImportance($G_l$)
\STATE $V_{l+1} \leftarrow$ SelectTopK($V_l, s_l, k_l$)
\STATE $E_{l+1} \leftarrow$ InducedEdges($V_{l+1}, E_l$)
\STATE $G_{l+1} \leftarrow (V_{l+1}, E_{l+1})$
\ENDFOR
\RETURN $\{G_0, G_1, \ldots, G_L\}$
\end{algorithmic}
\end{algorithm}
\section{Experiments}
\label{sec:experiments}
\subsection{Datasets}
We evaluate on five large-scale datasets:
\begin{table}[t]
\centering
\caption{Dataset Statistics}
\label{tab:datasets}
\begin{tabular}{lrrr}
\toprule
Dataset & Nodes & Edges & Classes \\
\midrule
ogbn-products & 2.4M & 61.9M & 47 \\
ogbn-papers100M & 111.1M & 1.6B & 172 \\
Reddit & 232.9K & 11.6M & 41 \\
Yelp & 716.8K & 6.9M & 100 \\
Amazon & 1.6M & 132.2M & 107 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Results}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{scalability-plot}
\caption{Training time comparison on ogbn-papers100M dataset. ScaleGNN achieves 73\% reduction in training time while maintaining accuracy.}
\label{fig:scalability}
\end{figure}
\begin{table}[t]
\centering
\caption{Node Classification Accuracy (\%)}
\label{tab:accuracy}
\begin{tabular}{lccccc}
\toprule
Method & Products & Papers & Reddit & Yelp & Amazon \\
\midrule
GraphSAGE & 78.5 & OOM & 95.4 & 63.2 & 82.1 \\
FastGCN & 76.2 & OOM & 93.7 & 61.8 & 79.4 \\
GraphSAINT & 79.1 & 65.3 & 96.2 & 64.5 & 83.6 \\
ClusterGCN & 78.9 & 67.1 & 96.6 & 64.9 & 84.2 \\
\midrule
\textbf{ScaleGNN} & \textbf{81.4} & \textbf{71.2} & \textbf{97.1} & \textbf{66.3} & \textbf{85.8} \\
\bottomrule
\end{tabular}
\end{table}
As shown in \cref{tab:accuracy}, ScaleGNN consistently outperforms baselines across all datasets. Notably, it is the only method that successfully processes the ogbn-papers100M dataset without running out of memory (OOM).
\section{Discussion}
\label{sec:discussion}
\subsection{Ablation Study}
We analyze the contribution of each component:
\begin{table}[t]
\centering
\caption{Ablation Study on Reddit Dataset}
\label{tab:ablation}
\begin{tabular}{lcc}
\toprule
Configuration & Accuracy & Time (min) \\
\midrule
Full ScaleGNN & 97.1 & 12.3 \\
w/o Hierarchical Sampling & 95.8 & 28.7 \\
w/o Adaptive Aggregation & 96.2 & 14.1 \\
w/o Distributed Training & 97.0 & 45.6 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Limitations}
While ScaleGNN achieves impressive results, it has some limitations:
\begin{itemize}
\item The hierarchical sampling may lose fine-grained local patterns in extremely sparse graphs
\item The distributed framework requires careful hyperparameter tuning for optimal partitioning
\end{itemize}
\section{Conclusion}
\label{sec:conclusion}
We presented ScaleGNN, a novel architecture for processing large-scale graphs. Through hierarchical importance sampling, adaptive aggregation, and distributed training, ScaleGNN achieves state-of-the-art performance while significantly reducing computational requirements. Our extensive experiments demonstrate its effectiveness across diverse datasets.
Future work includes extending ScaleGNN to dynamic graphs and exploring its application to even larger networks with trillions of edges.
\section*{Acknowledgment}
We thank the anonymous reviewers for their valuable feedback. This work was supported by NSF Grant \#1234567.
\bibliographystyle{IEEEtran}
\bibliography{references}
\appendix
\section{Implementation Details}
\label{app:implementation}
Our implementation uses PyTorch Geometric and PyTorch Distributed. The complete training pipeline...
\end{document}
Next Steps
Continue with academic writing:
- Creating Posters: conference poster design
- Presentations: academic presentations
- Thesis Writing: dissertation and thesis
- Book Publishing: academic book creation
Pro tip: Start writing your paper early and iterate frequently. Use version control to track changes, and always keep your bibliography updated as you write. Consider using reference management software like Zotero or Mendeley that can export to BibTeX format.