Recent developments in perturbative QCD leading to the beta function in five-loop approximation are presented. In a first step, the two most important decay modes of the Higgs boson are discussed: decays into a pair of gluons and, alternatively, decays into a bottom–antibottom quark pair. Subsequently, the quark mass anomalous dimension is presented, which is important for predicting the value of the bottom-quark mass at high scales and, consequently, the Higgs boson decay rate into a pair of massive quarks, in particular into \(b\bar b\). In the next section, the \(\alpha _{\rm s}^4\) corrections to the vector and axial-vector correlators are discussed. These are the essential ingredients for the evaluation of the QCD corrections to the cross section for electron–positron annihilation into hadrons at low and at high energies, to the hadronic decay rate of the \(\tau \) lepton, and to the \(Z\)-boson decay rate into hadrons. Finally, we present the prediction for the QCD beta function in five-loop approximation, discuss the analytic structure of the result, and compare with experiment at low and at high energies.
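For orientation, the beta function referred to here governs the running of the strong coupling. In one common convention (an assumption on the normalization; the summarized work may use a different one), with \(a_{\rm s}=\alpha_{\rm s}/\pi\) it reads

```latex
\mu^2 \frac{{\rm d} a_{\rm s}}{{\rm d}\mu^2}
  = \beta(a_{\rm s})
  = -\sum_{n\ge 0} \beta_n\, a_{\rm s}^{\,n+2}\,,
\qquad
\beta_0 = \frac{1}{4}\left(11 - \frac{2}{3}\, n_{\rm f}\right),
```

where \(n_{\rm f}\) is the number of active quark flavours; the five-loop approximation corresponds to the term \(\beta_4\).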

The two-Higgs-doublet model is a simple and attractive extension of the Standard Model. It offers a possible explanation of the large deviation between theory and experiment in the muon \(g-2\) in an interesting parameter region: a light pseudoscalar Higgs \(A\), a large Yukawa coupling to \(\tau \) leptons, and general, non-type-II Yukawa couplings are preferred. This parameter region is explored, experimental limits on the relevant Yukawa couplings are obtained, and the maximum possible contributions to the muon \(g-2\) are discussed.

We present the results for the heavy-quark form factors at two-loop order in perturbative QCD for different currents, namely vector, axial-vector, scalar and pseudo-scalar currents, up to second order in the dimensional regularization parameter. We outline the necessary computational details, the ultraviolet renormalization, and the corresponding universal infrared structure.

Weak radiative \(B\)-meson decays are known to provide strong bounds on the charged Higgs boson mass in the Two-Higgs-Doublet Model. In the so-called Model II, the 95% C.L. lower bound on \(M_{H^\pm }\) is now in the \(570\)–\(800\) GeV range, depending quite sensitively on the method applied for its determination. Here, we present and discuss the updated bounds.

We give a brief overview of the recent progress in the computation of QED radiative corrections for various processes involving bound states. Precision measurements of bound state properties, such as the Lamb shift of the hydrogen atom and the \(g\)-factor of a bound electron, and searches for rare transitions, such as \(B_s\to \mu ^+ \mu ^-\) or coherent muon–electron conversion, allow for precise tests of the Standard Model. A comparison between theory and experiment cannot be done without knowledge of the higher-order effects, which sometimes receive unexpected enhancement factors.

The Belle II experiment, with an integrated luminosity of 50 ab\(^{-1}\), will give access to information never available before. In this paper, we concentrate on studies of the \(\chi _{c_i}\)–\(\gamma ^*\)–\(\gamma ^*\) form factors using the Monte Carlo event generator EKHARA. Precise experimental knowledge of the form factors can differentiate between various models giving predictions for the electronic widths \({\mit \Gamma }(\chi _{c_{1,2}} \to e^+e^-)\), even without direct measurements of these widths.

The implementation of the newly developed two-photon transition form factors for pseudoscalar mesons in the PHOKHARA and the EKHARA generators is discussed. The forthcoming developments in the PHOKHARA generator are briefly reviewed.

The last two decades have seen great progress in the understanding of neutrino physics. Phenomena such as oscillations have been shown to exist by solar and atmospheric neutrino experiments. Later, some of those results were confirmed by accelerator experiments using a muon-neutrino beam produced from the interaction of protons with a target. Recent years have brought exciting observations of electron-neutrino appearance, as well as the first measurements with an antineutrino beam. This publication presents an overview of current oscillation results and future plans for accelerator experiments as well as for solar-neutrino projects and reactor experiments.

A dilation procedure is presented for the interval neutrino mixing matrix in order to explore possible unitary extensions of the three-dimensional neutrino mixings. Limits on light–heavy neutrino mixings are considered.

This article presents a short review of the single pion production (SPP) in the neutrino–nucleon scattering. The attention is focused on the discussion of the main difficulties in modeling the SPP processes. New physical observables, which may constrain the theoretical models, are proposed.

Liquid Argon Time Projection Chambers (LAr-TPCs) are an exciting class of detectors designed for the registration of very rare events, such as neutrino interactions or nucleon decay. They offer good detection efficiency, excellent background rejection, bubble-chamber-quality images, very good particle identification, and calorimetric reconstruction of the energy deposited by particles. These capabilities make LAr-TPCs a very promising choice for neutrino physics experiments. In this paper, an overview of the ICARUS T600 LAr-TPC detector and its achievements is presented.

Adding a single gauge-singlet fermion and a second Higgs doublet to the Standard Model allows an explanation of the observed smallness of the neutrino masses via the seesaw mechanism. At tree level, this model predicts two neutral fermions with vanishing mass. The one-loop contribution to the neutral fermion masses due to the second Higgs doublet lifts this degeneracy and makes it possible to fit the model parameters to the observed neutrino mass differences. We determine the values of the additional Yukawa couplings by requiring the correct prediction of the mass differences and mixings in the neutrino sector. We also discuss the ambiguities of the model parameters.
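As a reminder of the mechanism invoked here (standard seesaw notation, not taken from the summarized work): integrating out a heavy singlet fermion of mass \(M\) coupled through a Dirac mass term \(m_D\) yields light neutrino masses

```latex
m_\nu \;\simeq\; -\, m_D\, M^{-1}\, m_D^{\rm T}\,,
\qquad m_D \ll M\,,
```

so that the light masses are suppressed by the ratio \(m_D/M\) relative to the Dirac scale.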

We present the complex mass renormalization scheme for mixed Majorana fermions using the Weyl spinor notation. Showing the expressions for the field and mass renormalization constants, we discuss the differences from the on-shell renormalization scheme. Working in a seesaw-extended two-Higgs-doublet model, we apply the complex mass scheme to neutrino masses and mixings.

Precision studies of the properties of the top quark represent a cornerstone of the LHC physics program. In this contribution, we focus on the production of \(t\bar {t}\) pairs in association with one hard jet and, in particular, on its connection with precision measurements of the top-quark mass at the LHC. We report a summary of a full calculation of the process \(pp \to e^+\nu _e\mu ^-\bar {\nu }_\mu b \bar {b}j\) at NLO QCD accuracy, which describes \(t\bar {t}j\) production with leptonic decays beyond the Narrow Width Approximation (NWA), and discuss the impact of the off-shell effects through comparisons with the NWA. Finally, we explore the sensitivity of \(t\bar {t}j\) in the context of top-quark mass extraction with the template method, considering two benchmark observables as case studies.

The current version of the multipurpose program carlomat offers the possibility of taking into account either the initial- or final-state radiation separately, or both at a time. It allows one to include the electromagnetic charged-pion form factor in processes with charged-pion pairs and to perform U(1) gauge-invariance tests in an easy way. In this paper, I illustrate how these new capabilities of the program can be utilized in the description of electron–positron annihilation to hadrons at low energies.

We report progress on a new approach to the calculation of top-pair production cross sections at NNLO, which combines the slicing method with soft-collinear effective theory. The necessary matrix elements already exist in the literature, except for the soft function at NNLO. We describe a strategy to evaluate this function numerically and validate it robustly against renormalisation-group constraints and our analytic results.

We show that by changing the upper phase-space limit in the calculation of an evolution kernel, one can change its functional form. This happens already at the NLO level, e.g. when the upper phase-space limit is defined in terms of the maximum of the transverse momenta. The upper phase-space limit of the evolution kernel corresponds to the evolution variable used in a parton shower, and this dependence means that different kernels need to be used depending on the ordering of the parton shower.
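For context, at leading order this ambiguity is absent and the kernels reduce to the standard splitting functions, e.g. (textbook form, not taken from the summarized work)

```latex
P_{qq}^{(0)}(z) \;=\; C_{\rm F}\left[\frac{1+z^2}{(1-z)_+} \;+\; \frac{3}{2}\,\delta(1-z)\right],
```

where \(z\) is the momentum fraction and \(C_{\rm F}=4/3\); the dependence on the phase-space boundary discussed above first appears in the NLO corrections to such kernels.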

At the future high luminosity electron–positron collider FCC-ee proposed for CERN, the precise measurement of the charge asymmetry in the process \(e^-e^+\to \mu ^-\mu ^+\) near the \(Z\) resonance is of special interest. In particular, such a measurement at \(M_Z\pm 3.5\) GeV may provide a very precise measurement of the electromagnetic coupling at the scale \(\sim M_Z\), a fundamental constant of the Standard Model. However, this charge asymmetry is plagued by a large trivial contribution from the interference of photon emission from the initial-state electrons and final-state muons. We address the question whether this interference can be reliably calculated and subtracted with the help of a resummed QED calculation.

We present a general form of the three-nucleon scattering amplitude. Our result is an operator form in which the scattering amplitude is written as a linear combination of scalar functions and operators acting on spin states. Using this form greatly reduces the numerical complexity of the so-called three-dimensional treatment of the Faddeev equations and can potentially lead to more accurate calculations of scattering observables at higher energies.

The NA61/SHINE experiment studies hadron production in hadron–hadron, hadron–nucleus and nucleus–nucleus collisions. The physics program includes strong interaction studies, measurements for neutrino physics experiments and measurements for cosmic ray experiments. Future plans are to extend the program with new measurements needed to understand the onset of deconfinement, such as open-charm production in nucleus–nucleus collisions, as well as with studies of fragmentation cross sections required to interpret new AMS-II data. This new program can be realized only by 2020 and requires upgrades of the present NA61/SHINE detector setup.

In this paper, we review the most recent developments of the four-dimensional unsubtraction (FDU) and loop-tree duality (LTD) methods. In particular, we emphasize the advantages of the LTD formalism regarding asymptotic expansions of loop integrands.

Representations are derived for the basic scalar one-loop vertex Feynman integrals as meromorphic functions of the space-time dimension \(d\) in terms of (generalized) hypergeometric functions \(_2F_1\) and \(F_1\). Values at asymptotic or exceptional kinematic points, as well as expansions around the singular points at \(d=4+2n\), with \(n\) a non-negative integer, may be derived easily from these representations. The Feynman integrals studied here may be used as building blocks for the calculation of one-loop and higher-loop scalar and tensor amplitudes. From the recursion relation presented, higher \(n\)-point functions may be obtained in a straightforward manner.
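For reference, the Gauss hypergeometric function appearing in these representations is defined by its standard series (textbook definition, not specific to this work):

```latex
{}_2F_1(a,b;c;z) \;=\; \sum_{n=0}^{\infty} \frac{(a)_n\,(b)_n}{(c)_n}\,\frac{z^n}{n!}\,,
\qquad (a)_n \equiv \frac{\Gamma(a+n)}{\Gamma(a)}\,, \quad |z|<1\,,
```

with analytic continuation outside the unit disk; \(F_1\) denotes the Appell function of the first kind, its two-variable generalization.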

Precision measurements of properties of the electroweak \(W\)- and \(Z\)-bosons provide strong constraints on the Standard Model (SM) and extensions thereof. This sensitivity crucially depends on the availability of accurate theoretical predictions for these quantities, including higher-order radiative corrections. This contribution gives a brief overview of available calculations and compares the estimate of theory uncertainties with the projected experimental precision of future \(e^+e^-\) collider proposals. Then it is shown how numerical Mellin–Barnes integrals can be used to evaluate higher-order loop integrals that depend on three or more independent mass and momentum scales. As a concrete physical application, the complete two-loop corrections to \(Z \to b\bar {b}\) are considered.

We present results for the gluon field anomalous dimension in perturbative QCD and derive the corresponding beta function at five-loop order. All results given are valid for a general gauge group.

The possibility of discriminating quark and gluon jets is important for searches for BSM physics, where signals of interest are often dominated by quarks, while the corresponding backgrounds are dominated by gluons. Working in the idealized context of electron–positron collisions, where one can unambiguously define quark and gluon jets, we find an interesting interplay between perturbative parton shower effects and nonperturbative colour reconnection effects. These results triggered new developments in the simulation of quark and gluon jets in the parton-shower generator Herwig 7, which are presented at the end of this note.

We report on the recent calculation of the next-to-leading-order super-QCD corrections to squark–(anti)squark pair production in the Minimal \(R\)-symmetric Supersymmetric Standard Model. The emphasis is put on highlighting differences compared to the Minimal Supersymmetric Standard Model. Phenomenological consequences for the LHC are also briefly discussed.

In this paper, we discuss some aspects of the analytical calculation of energy correlations in electron–positron annihilation at next-to-leading order in QCD. Our primary focus is on the most difficult task: the calculation of the master integrals for the real-emission contributions, which are functions of two dimensionless variables and the dimensional regulator. We use the method of differential equations and their so-called epsilon form, which is constructed with the help of the Fuchsia program based on Lee’s algorithm.

Asymmetric nuclear matter is studied within the relativistic mean field approach. Models with the \(\omega \)–\(\rho \) and \(\sigma \)–\(\rho \) cross-interactions, through their remarkable ability to modify the density dependence of the symmetry energy, have been used to analyse the saturation properties of asymmetric nuclear matter.

We further develop a recently proposed cosmological model based on exotic smoothness structures in dimension 4 and Boolean-valued models of Zermelo–Fraenkel set theory. The approach indicates quantum origins of large-scale smoothness and singles out dimension 4 as the unique dimension for a spacetime. Of particular importance is the hyperbolic geometry of exotic \(R^4\) submanifolds of codimensions 1 and 0. It is argued that the global 4-dimensional manifold representing the Universe beyond the present observational scope is the connected sum of the complex surfaces \(K3\,\#\,\overline {\mathrm {CP}(2)}\).

We discuss various properties of the generic two-Higgs-doublet extension of the Standard Model, focusing on the region of parameter space known as the alignment limit. We emphasize that, in order to retain the possibility of CP violation in the scalar potential in the alignment limit, one has to relax the traditional \(Z_2\) symmetry introduced to prevent tree-level flavour-changing neutral currents in the Yukawa couplings. We point out various correlations between the properties of the non-standard Higgs bosons \(H_2\) and \(H_3\) present in the model and suggest measurements at the LHC that can test the alignment scenario. Spontaneous CP violation in the 2HDM is also discussed in the alignment limit.

We consider the possibility that a single scalar extension of the Standard Model can be used to account for the presence of dark matter. We consider such an extension where the dark sector has a global U(1) symmetry, in which case dark matter can exhibit Bose–Einstein condensation, even when relativistic. We show that a condensate indeed forms at sufficiently early times for all masses, but that consistency and observational constraints imply that the condensate persists at present only for masses in the \(10^{-12}\) eV region. We also briefly discuss constraints derived from relic abundance and direct detection limits.

Inspired by the cosmological small-scale structure problems, we thoroughly study a self-interacting vector dark matter (VDM) model in which the VDM is generated by the freeze-in mechanism via the Higgs portal interaction. The strong VDM self-interactions naturally arise when the dark Higgs boson which induces the VDM mass is much lighter than the VDM. We also carefully consider the constraints from the VDM indirect searches, which restrict the dark Higgs mass to be at most of \({\cal O}({\rm keV})\).

We present a renormalizable vector-fermion dark matter model, where two or three components of the dark sector are stable and hence constitute the observed dark matter relic density. In particular, our model involves an extension of the Standard Model by a dark U\((1)_X\) gauge symmetry which includes a dark vector \(X_\mu \), and two Majorana fermions, \(\psi _+\) and \(\psi _-\). Moreover, we employ the Higgs mechanism in the dark sector to give masses to the dark particles; it also provides a second Higgs, \(h_2\). Depending on the masses of these three dark sector particles (\(X_\mu ,\psi _\pm \)), two or three of them contribute to the dark matter. We have numerically solved a set of coupled Boltzmann equations describing the evolution of the densities of the different DM components.

We revisit the scenario of dark matter (DM) annihilation through an \(s\)-channel resonance. The evolution of the DM density and temperature is studied by solving a set of coupled Boltzmann equations. We show that kinetic decoupling is a prolonged process that can happen while DM annihilation is still actively changing the DM density. We scan over the parameter space in the resonance region of a vector dark matter model and find that the effects of early kinetic decoupling can modify the DM relic density by up to a factor of two in the region where experimental constraints are satisfied.
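For orientation, the number-density Boltzmann equation underlying such relic-density analyses has the standard form (the coupled system referred to above additionally tracks the DM temperature):

```latex
\frac{{\rm d}n}{{\rm d}t} + 3 H n \;=\; -\,\langle \sigma v \rangle \left( n^2 - n_{\rm eq}^2 \right),
```

where \(H\) is the Hubble rate and \(n_{\rm eq}\) the equilibrium number density. Near an \(s\)-channel resonance, \(\langle \sigma v \rangle\) depends strongly on the DM velocity distribution, which is why kinetic decoupling can feed back into the relic density.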