
I've recently seen, in the second edition of Reinforcement Learning: An Introduction by Sutton and Barto, an appealing way to display pseudo-algorithms. Here is an example image:

[Image algo: a boxed pseudo-algorithm from the book]

I think that the environment is made with the tcolorbox package, and I think I should be able to produce something similar. However, I would like to put my pseudo-algorithm in an algorithm environment from the algorithm2e package. Is there a way to mix the two? The ideal case would be to have the same background color, box, and caption as in the image, with the internal structure of the algorithm environment of algorithm2e.

This is the code of what I've done so far:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[ruled,longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}

\begin{document}
\begin{algorithm}
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\caption{Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$}
\end{algorithm}
\end{document}  

This is the result:

[Image algo2: the current unboxed output]

Basically, it's only the algorithm part. I don't know where to start in changing its appearance, since the documentation provides very few commands to change the visual style and none of them seems helpful.

  • Obviously, it is done using tcolorbox. Commented Nov 23, 2018 at 19:14
  • @Bernard I don't have the source; that's why I only stated that I think it's done with tcolorbox. Commented Nov 23, 2018 at 19:17

2 Answers


I missed the part of the algorithm2e manual where it is stated that the H option makes the environment non-floating, so it can be put inside the tcolorbox environment.

This is the updated code:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}
\usepackage{tcolorbox}

\begin{document}
\begin{tcolorbox}[fonttitle=\bfseries, title=Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$]
\begin{algorithm}[H]
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\end{algorithm}
\end{tcolorbox}
\end{document}

This is the visual result:

[Image algo3: the algorithm inside a tcolorbox]
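
Since the ruled option was dropped, tcolorbox now draws the only frame. To get closer to the grey box in the book, a few more tcolorbox options can be passed. A minimal sketch follows; the colours and corner style are my guess at the book's look, not values taken from its source:

\begin{tcolorbox}[fonttitle=\bfseries,
                  colback=black!5!white, % light grey background (guessed shade)
                  colframe=black,        % thin black frame
                  boxrule=0.4pt,
                  sharp corners,         % square corners instead of the rounded default
                  title=Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$]
\begin{algorithm}[H]
Initialize $S$\; % full algorithm body as above
\end{algorithm}
\end{tcolorbox}

All of colback, colframe, boxrule, and sharp corners are standard tcolorbox keys, so no extra tcolorbox library is needed.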


Usually, I don't post self-answered questions. This was a genuine question, but I found the answer five minutes after posting it. Since it could be helpful to others, I will leave it here.

  • You may prefer to change the title to reflect the integration of tcolorbox into algorithm2e. Commented Nov 23, 2018 at 19:41
  • @Diaa Done. Feel free to change it again if you can think of a better title. Commented Nov 23, 2018 at 19:57

This is more of a long comment than an answer, which was already provided by gvgramazio.

I just propose integrating the tcolorbox and algorithm environments inside a new myalgorithm environment. Using the before upper and after upper options, the algorithm environment can easily be embedded inside a tcolorbox. The new environment takes the title as its second parameter; the first parameter is optional and can be used to customize the box:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}
\usepackage{tcolorbox}

\newtcolorbox{myalgorithm}[2][]{%
    fonttitle=\bfseries,
    title=#2,
    #1,
    before upper={\begin{algorithm}[H]},
    after upper={\end{algorithm}}
    }

\begin{document}
\begin{myalgorithm}{Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$}
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\end{myalgorithm}

\begin{myalgorithm}[sharp corners, colback=cyan!15]{Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$}
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\end{myalgorithm}
\end{document}

[Image: the two resulting boxes, default and customized]
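
As a side note, replacing \caption with the tcolorbox title loses the automatic "Algorithm 1:" numbering. If that numbering is wanted, tcolorbox can supply its own counter. A minimal sketch building on the definition above (the name mynumberedalgorithm is mine; the preamble is assumed to be the same as in the code above):

\newtcolorbox[auto counter]{mynumberedalgorithm}[2][]{%
    fonttitle=\bfseries,
    title={Algorithm~\thetcbcounter: #2}, % \thetcbcounter prints the box number
    #1,
    before upper={\begin{algorithm}[H]},
    after upper={\end{algorithm}}
    }

The auto counter init option and \thetcbcounter are documented tcolorbox features; adding number within=section to the init options resets the counter at each section.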
