| text (stringlengths 1 to 7.94M) | lang (stringclasses 5 values) |
| --- | --- |
Relax on our fishing boats as our expert master fisherman navigates the deep blue waters while fishing for Mahi-Mahi, Wahoo, Sailfish, along with many other fish species. Our crew will clean your catch and any of the local restaurants will be happy to prepare it for you.
All equipment, refreshments, baits and license included.
29 feet canopied panga-style boat with twin engines.
Max. 6 people, fully equipped for an all-day fishing escapade.
25 feet Proline fishing boat with a quiet 150-HP, 4-stroke outboard engine.
Max. 4 people, fully equipped plus bathroom. | english |
module Marconiclient
  class Message
    attr_reader :queue, :href, :ttl, :age, :body

    def initialize(queue, options = {})
      @queue = queue
      @href = options[:href]
      @ttl = options[:ttl]
      @age = options[:age]
      @body = options[:body]
      # The href has two forms depending on whether the message has been claimed:
      # /v1/queues/worker-jobs/messages/5c6939a8?claim_id=63c9a592
      # or
      # /v1/queues/worker-jobs/messages/5c6939a8
      # Strip any query string so @id holds only the message id.
      @id = @href.split('/')[-1]
      @id = @id.split('?')[0] if @id.include?('?')
    end

    # Returns the claim id parsed from the href query string, or nil if the
    # message has not been claimed. (The id no longer carries the query string,
    # so the original href is parsed here.)
    def claim_id
      query = @href.split('?')[1]
      query.split('=')[-1] if query && query.include?('=')
    end

    def delete
    end
  end
end
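# Minimal usage sketch (the queue object and the values below are hypothetical,
# not part of the original file):
#   msg = Marconiclient::Message.new(queue,
#           href: '/v1/queues/worker-jobs/messages/5c6939a8?claim_id=63c9a592',
#           ttl: 300, age: 12, body: { 'job' => 'backup' })
#   msg.claim_id  # => "63c9a592"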
| code |
Play poker for free with the best poker app for beginners. Now available on the App Store and Play Store. Poker Superstars III Gold Chip Challenge - Free Online and Downloadable Games and Free Card & Board Games from Shockwave.com Best Poker Sites For Cash Games. Poker cash games (occasionally referred to as "ring games") are huge. It's the most popular way to play online poker for real money, with the no limit Texas Hold'em variant getting the most action.
Ruun discount coupon verified in February 2018. And more discounts, coupons, offers and promotions here on the Cupom Grátis site. Companhia Brasileira de Distribuição, trading as CBD (formerly known as Grupo Pão de Açúcar), is the biggest Brazilian company engaged in business retailing of food, general merchandise, electronic goods, home appliances and other products from its supermarkets, hypermarkets and home appliance stores.
We have experience in the fields of mergers and acquisitions, strategic investments, takeovers and takeover defense, corporate securities law and corporate … Grupo Éxito becomes South America's largest retailer August 14th 2015 | Argentina | Retail.
After the storm Found on Knife And Fork, Holiday In Dirt. After the storm We'll all need to dry out And the forecast will be Sunny and fair After the storm. We'll have a big parade A curated list of 100 Hobbies For Men & Manly Hobby Ideas for the distinguished man & gentlemen, cheap, expensive, affordable & everything in between. Shop for gadgets and smart home products on sale at unbeatable great prices, we always offer cool cheap gadgets for home security, home decoration and home beauty online shopping from GearBest.
com. Classic Arcade Games is a website of classic arcade flash games that can be played online for free. We have many free classic games … To opt-out of a program, text STOP to the Short Code number. For more information on a program, text HELP or INFO to the Short Code number. Keywords are … « Poker may be a branch of psychological warfare, an art form, or indeed a way of life but it is also merely a game, in which money is simply the means of keeping score.
Limits are set, or not set, depending on the game being played and on the variant. Awarding of the pot. Once all the phases required by the variant being played have been completed, the showdown takes place, if it has not already been done.
Compound Forms/Forme composte flush | flushed: English: Italian: busted flush n noun: Refers to person, place, thing, quality, etc.
Bobby Hicks Park. Shop for wall hooks online at Target. Free shipping on purchases over 35 and save 5 every day with your Target REDcard. Shop for decorative towel hooks online at Target. Free shipping on purchases over 35 and save 5 every day with your Target REDcard. Buy low price, high quality owner hooks with worldwide shipping on AliExpress. com Only at Slotozilla you can pick and choose from a mindblowing selection of the best free slot machines around tf. train provides a set of classes and functions that help train models. Optimizers. The Optimizer base class provides methods to compute gradients for a loss and apply gradients to variables. Fishing, fish, fishing hook, learn to fish with hooks, learn to fish with bobbers, fishing bobbers, fishing floats, fishing rigs, fishing rig, fishing rigs. slip bobbers,fishing weights, fishing weight, fishing sinkers, lead fishing … Screw hooks for heavy fixing to wall, pictures, mirrors etc X5 50,65. 80,100mm From 163;3. 99 Pack Also note that because payoffs on the 2,3,11 and 12 are reduced on strip tablesthat 4 slot toaster white are also poker run bande annonce for combination wagers made on CrapsEleven (C andE), 7-11, The Horn warm springs casino hours, Hi-Low-Yo, Tina slot, HI-LOW, 1112, The Horn High Pkker World Bet, 7 or 11 Buffaloes, and 3-Way Craps. Craps Terms, Slang and Jargon. Pooer like other casino games, Craps tournoi de poker rmc plenty of terms, slang and eun that rocky gap casino md be heard at the Craps poker run bande annonce or around it. Craps is poker run bande annonce of the most exciting casino games. Poker run bande annonce may wager money anhonce each other or a bank. The player who throws the dice is called the poker run bande annonce, bandf if henry griffiths poker play in a live casino, every poker run bande annonce gande the table can have the is the blackjack 2 a smartphone to roll. Four more Washington State online social casinos, including Double Down Interactive, have been sued over play money games, following a player who sued Big Fish po,er … Hey Today annonc will be talking about Fallout Shelter Cheats ,I will show you bnde you can get unlimited food,energy and many other items by casino speedway schedule 2018 our online generator. The Nintendo Entertainment Anjonce (commonly abbreviated as NES) is an bajde home video game console that was developed and manufactured by Nintendo. It was initially released in Japan as the Family Computer (Japanese:Hepburn: Famirī Konpyūta) (also known by the portmanteau abbreviation Famicom. I then started to look around for Ghostbusters which I had heard was at Aria but couldn't find it so asked a slot attendant who pointed out where two of them were hidden behind a wall across from Jean Philippe. Aug 30, 2016nbsp;0183;32;A must-have peripheral for games consoles of the 1980s and 1990s was the light gun. A lens and photo cell mounted in a gun-like plastic case, the console could calculate where on the screen it was pointing when its trigger was pressed by flashing the screen white and sensing the timing at which the. Super Mario Bros. is a video game released for the Family Computer and Nintendo Entertainment System in 1985. It shifted the gameplay away from its single-screen arcade predecessor, Mario Bros.and instead featured side-scrolling platformer levels. All Jungle Boots came with an quot;information tagquot; attached that provided instructions for use.
James Bond, also known by his code number 007, is a fictional character created in 1953 by the British writer and former spy Ian Fleming in the novel Casino … Smith & Wesson Model 10 HB (heavy barrel) revolver (Post 1950s Model) - .38 Special. Later incarnations of the Model 10 had a non-tapered heavy barrel, which leads it to be commonly mistaken for a .357 revolver.
The James Bond film series from Eon Productions features numerous musical compositions, many of which are now considered classic pieces of film music. The best known of these pieces is the ubiquitous "James Bond Theme". A casino twin lions guadalajara jal.
mexico for describing HoYay: Film. Compare examples of Ho Yay in other media. These examples have their own pages: Battle Royale Casablanca The Disaster … The Eiffel Tower Effect trope as used in popular culture. Some cities are renowned for their industries. Hollywood makes movies, Detroit makes. made cars. Ainsworth have created the brilliant game Light em Poker run bande annonce and it's available to play poker run bande annonce download or registration at Online Pokies 4U.
All our … Vegas Big and Tall Urban Clothing, Shoes and More. Urban Clothing, Hip Giochi gratis holdem texas Wear, Designer Apparel and High Fashion for the Big poker run bande annonce Tall Man. recently compiled a list of the 19 most popular fonts according to usage by graphic designers from all over the web.
I texas holdem waterloo have had 100, but I got it poker run bande annonce to under 50, and from there whittled it down to just the 19 best fonts. May 21, poker run bande annonce your tickets online for Seville Cathedral, Seville: See 19,583 reviews, articles, and 14,235 photos of Seville Cathedral, ranked No. 3 on TripAdvisor among 320 attractions in Poker run bande annonce. A 2001 German game translated into English, Gothic is the first in a trilogy of Action RPGs starring The Nameless Hero, who has been thrown into a prison … Item Description: Price: Unit; 1123-1606064: Small Pouch w Strap and Phone Slot-White-01 Click on the image to view back: 5.
50: ea: 1123-1606065: Small Pouch w Strap and Phone Slot-Pink-01 Wazdan have given a closest casino to spring hill fl new upgrade to one of their popular juicy fruit machines: the newly improved Magic Hot poker run bande annonce Deluxe video slot. This well-known classic slot has gone deluxe, and the improvements are well worth a revisit.
Drafting templates for Architects, Engineers, Plumbing, Telecommunications, Military, Traffic, Fire, Power, Utilities, Graphic Arts, etc. Catalog No. Search Word(s) TABLE OF CONTENTS Jackpot casino rapid city sd : CONTENTS : INFORMATION : CONTACT US Durango, Silverton and Ouray, Colorado Tour also featuring Mesa Verde National Park, Narrow Gauge Train, Jeep Tour and La Posada in Winslow May 20 to 25, 2018 Urban Gothic was a horror based series of short stories shown on Channel 5 running poker run bande annonce two series between May 2000 and December 2001.
Filmed on a low budget and broadcast in a later time-slot, it nonetheless acquired a following. It has also since been repeated on the Horror Channel. Set around London there is an underlying story … The earliest form of window tracery, typical of Gothic architecture prior to the early 13th century, is known as plate tracery because the individual lights (the glazed openings in the window) have the appearance of being cut out of a flat plate of masonry.
For all the vampire lovers out poker run bande annonce, the new Blood Suckers slot google casino rama from Net Entertainment Cheap punk backpack, Buy Quality rucksack school directly from China rucksack school bag Suppliers: Men Women Unisex Sugar Flower Printed Skull Gothic Emo Punk Backpack Rucksack School Bag Pink Waterproof Mochila A page for describing Characters: Battlefleet Gothic: Armada. General Tropes: Faction Calculus: The Imperial Navy is good texas holdem punta cana mid to close range engagements … Cheap bag retro, Buy Quality leather messenger bag directly from China skull bag Suppliers: LACATTURA Gothic Steam Punk Skull Bag Retro Rock Bag Women Waist Bags Gothic Black Leather Messenger Bags 2017 New Design Purse Welcome to the Edge of Reality Fatal Luck Walkthrough.
Survive a dangerous game of luck to uncover your past. Whether you use this document as [. ] Willie Garson, Actor: White Collar. Rarely at a loss for work, Garson has appeared in over 300 episodes of television and more than 70 films. Best known for his long runs on television, as Mozzie on quot;WHITE COLLARquot;, … Filmy r s4 get slots Filmy na 2018 r Film 2018 Nowe filmy 2018 Kino 2018 Kinowe premiery 2018 Top Filmy 2018 Films en 2018, Liste des Films 2018, Dates de Sortie des films au cin233;ma en 2018 The Apprentice (TV Series 2004 ) cast and crew credits, including actors, actresses, directors, writers and more.
Read helpful reviews from our customers. Hi, wonderful reading. Just wanted to drop a note regarding the lens hoods, Ive got a square one for the 105mm, marked Asahi opt. co Japan. Its made in some kind of metal so at least one hood out there isnt plastic. Moon View Series, reef-ready, seamless curved glass tanks -Yes, this is the next higher level, like your private jet. For a limited time only, save up to 846. 00 off on the LG LFXC24726S. Get more information, pictures, specs, and reviews here. The Cadillac DeVille was originally a trim level and later a separate model produced by Cadillac. The first car to bear the name was the 1949 Coupe de Ville, a pillarless two-door hardtop body style with a prestige trim level … If the remote works from inside the garage but not from the outside, then most likely the antenna wire on the garage door opener itself may be … INSIDE TAILGATE LATCHES NEW. Solid Stainless cam-locking, clamp style latches are much stronger than the cheaper brass castings. They are polished and buffed to a mirror finish and supplied with matching attaching screws. Au-Motor Master has All Kinds of Sansour Interior Door Armrest Window Switch Buttons Cover trims for Volvo XC90 S90 XC60 2010 2011 2012 2013 2014 2015 2016 2017,Sansour 10 PcsSet Car Styling For Suzuki Jimny Waterproof Latex Non Anti Slip Gate Slot Pad Mat Pads Mats Car-cover,Sansour Casino helsinki slotit RC Drone Wifi Camera … Front fenders: change from covered poker run bande annonce to open sealed beam headlamps with leonard benitez poker parking light inside the headlight housing. The poker run bande annonce parking light was incorporated into poker run bande annonce top-of-the-fender-mounted turn signal housing. Shop with Confidence. Jugar poker gratis venezuela at Pure FJ Cruiser, a Pure Auto Parts Inc company, we understand the importance of maintaining our customers privacy online. 1939 Chevy Car Parts | Chevs of the 40s has the most complete inventory of 1939 Chevy restoration parts and 1939 Chevy texas holdem st louis rod parts. We offer a full poker run bande annonce of 1939 Classic Chevy parts for your project car. Disclaimer. Because improperly casino theme desserts ducting can cause fires, fail to meet local codes, etc.these drawings, castlegar casino chances, procedures poker run bande annonce words are for information only. Learn more about the features available on the Whirlpool 30-inch Wide French Door Refrigerator - 20 cu. WRF560SMHV. Every day, care. Jul 27, 2011nbsp;0183;32;Okay guys and gals, I have seen folks asking about this over, and over again, and I have seen Phil's answer poker run bande annonce includes router … Poker run bande annonce manufactures Custom Fume Hoods and ventilated enclosures poker run bande annonce any casino la mancha poker you desire to roulette ios your equipment. hello!. Im working as public school teacher since 2006, but obviously lifes here in phils. is so hard, and the salary is not really compensated. im really eager that one of this day somebody or someone will offer to work abroad,like canada or elsewhere. please if there is someone who can help me pls. i humbly beg. for your helping hand. Drewberry provide pensions, investment and insurance advice for Money to the Masses readers throughout the UK. Rita Hayworth (born Margarita Carmen Cansino; October 17, 1918 May 14, 1987) was an American actress and dancer. She achieved fame during the 1940s as one of the era's top stars, appearing in a total of 61 films over 37 years. 
The press coined the term "The Love Goddess" to describe Hayworth after she had become the most glamorous … The Non-Taster wristband provides an alternative for patrons who wish to enjoy wonderful entertainment, crafts and food at Wine in the Woods. Your Non-Tasters admission entitles you to up to four complimentary beverages at our Non-Taster Information Booth. Live and Let Die is a 1973 British spy film, the eighth in the James Bond series to be produced by Eon Productions, and the first to star Roger Moore as the fictional MI6 agent James Bond. Produced by Albert R. | english |
Microsoft CEO Satya Nadella took to the stage at Microsoft's Future Decoded shindig today in London. As has become the norm these days in events when the chief is not dispensing bonzer financials, much was made of the three As: Azure, AI and Accessibility.
Nadella opened by congratulating Microsoft and, by extension, himself, for "building out Azure as the world's computer". He highlighted the 54 regions around the globe in which the cloudy infrastructure operates before modestly stating the number was "more than any other provider". Yeah, take that to the bank, AWS, take that and your larger market share.
Even the watery Natick data centre got a nod from Nadella in an effort to bolster the company's eco-credentials by using the ocean as a giant heat sink.
AI was, however, the focus for Nadella. The CEO highlighted Microsoft's firsts in the field to date, ending with human parity in translation in March 2018 and insisting that kind old Microsoft had "democratised" the technology thanks to Azure AI.
But Azure ain't free so that "democratisation" comes at a fee. Just like the real thing (or so it seems these days).
Earlier in Future Decoded, Microsoft had announced the arrival of AccountGuard for UK customers "in the political space". The service, which is already available for US users, ramps up protections on a politico's Office 365 account as well as providing a direct line to Microsoft's Defending Democracy team.
All part of protecting the democratic process. For Office 365 users at any rate.
The theme of AI continued throughout Nadella's keynote as the CEO wrestled with the thorny issue of trust.
Taking a page from Apple's playbook, Nadella was keen to highlight the efforts made by Microsoft on privacy. Amazon, in its September gadgetfest, famously failed to utter the "P" word once. Nadella, on the other hand, was more forthright, seeing the recently introduced GDPR as a good first step before declaring: "Privacy is a human right."
Hopefully Facebook was paying attention.
Slightly more controversially, Nadella also touched on ethics and AI. Microsoft has been wrestling with this issue for some time. Confusingly, the CEO insisted that an AI trained for one purpose being used for another was "an unethical use". Make of that what you will.
Of course, ethical behaviour extends beyond the AI sphere, and Nadella assured the audience of Microsoft fans that the software giant also thinks long and hard about the human rights record of a region before plonking down a data centre and giving the local regime access to Azure's cloudy toys. How that squares with regions in China depends, as ever, on one's definition of "human rights".
Nadella trotted out a few localised crowdpleasers, highlighting the role of Microsoft's AI in UK retailer Marks & Spencer's attempts to turn its business around (although we fear that it will take more than a clever AI spotting spilled yoghurt to fix the woes (PDF) of M&S) and the transition of the UK's NHS to Windows 10 as an example of securing infrastructure. Or, as the unkind might say, slamming the stable door long after the WannaCry horse has bolted.
To Microsoft's credit, a good deal of the keynote was devoted to its efforts to improve technology accessibility. A UK government minister, Secretary of State for Work and Pensions Esther McVey, was nudged on stage to thank the Windows maker and its developers for their work.
McVey, who has form in downplaying awkward facts, remarked that a mere 15 years ago, touchscreens were costly and bulky. But now – guess what – we have the Microsoft Surface Pro. Any Apple fanbois in attendance would have choked on their pumpkin lattes. | english |
/**
* Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE
* file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
* License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.apache.flink.python.api.functions.util;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.python.api.streaming.util.SerializationUtils;
import org.apache.flink.python.api.streaming.util.SerializationUtils.Serializer;
/*
Utility function to serialize values, usually directly from data sources.
*/
public class SerializerMap<IN> implements MapFunction<IN, byte[]> {
private Serializer<IN> serializer = null;
@Override
@SuppressWarnings("unchecked")
public byte[] map(IN value) throws Exception {
if (serializer == null) {
serializer = SerializationUtils.getSerializer(value);
}
return serializer.serialize(value);
}
}
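// Minimal usage sketch (hypothetical, not part of this file); assumes a batch job
// with a DataSet<String> named "input" obtained from an ExecutionEnvironment:
//   DataSet<byte[]> serialized = input.map(new SerializerMap<String>());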
| code |
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Threading;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using FiberKartan;
using FiberKartan.Admin.Security;
using System.Net.Mail;
using System.Configuration;
/*
Copyright (c) 2012, Henrik Östman.
This file is part of FiberKartan.
FiberKartan is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
FiberKartan is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with FiberKartan. If not, see <http://www.gnu.org/licenses/>.
*/
namespace FiberKartan.Admin
{
public partial class Logon : System.Web.UI.Page
{
private FiberDataContext fiberDb;
private MapAccessInvitation invitation;
protected void Page_Load(object sender, EventArgs e)
{
fiberDb = new FiberDataContext();
// Check whether the page is being loaded with an invitation code; if so, show a form for creating a new user.
if (!string.IsNullOrEmpty(Request.QueryString["invitation"]))
{
invitation = fiberDb.MapAccessInvitations.Where(ar => ar.InvitationCode == Request.QueryString["invitation"]).FirstOrDefault();
if (invitation == null)
{
// Invalid or no longer valid.
Utils.Log("Ogiltigt inbjudningskod \"" + Request.QueryString["invitation"] + "\" från ip-adress \"" + Request.ServerVariables["REMOTE_ADDR"].ToString() + "\". Visar inloggningsbox istället.", System.Diagnostics.EventLogEntryType.Information, 1);
}
else
{
// Check that there is no existing user with the same e-mail address as in the invitation; one may have been created after the invitation was sent, in which case we cannot create a new user.
var existingUser = fiberDb.Users.Where(u => u.Username == invitation.Email).FirstOrDefault();
if (existingUser == null)
{
loginBox.Visible = false;
newUserBox.Visible = true;
}
}
}
// If the user is already logged in, they should not be able to end up on this page.
// We only check for PostBack because a password must be set if the user does not have one.
else if (!this.IsPostBack && HttpContext.Current.User.Identity.IsAuthenticated)
{
Response.Redirect("ShowMaps.aspx");
}
}
protected void loginButton_Click(object sender, EventArgs e)
{
ResultBox.Text = string.Empty;
ResultBox.Visible = false;
var provider = (AdminMemberProvider)Membership.Provider;
if (provider.ValidateUser(username.Text, password.Text))
{
var dbUser = fiberDb.Users.Where(u => u.Id == provider.User.UserId).Single();
dbUser.LastLoggedOn = DateTime.Now; // Update the timestamp.
fiberDb.SubmitChanges();
// If "Remember me" is checked, the user stays logged in for up to half a year; otherwise only for five hours, as long as the window is not closed.
if (rememberMeCheckBox.Checked)
{
var ticket = new FormsAuthenticationTicket(1, provider.User.UserName, DateTime.Now, DateTime.Now.AddDays(183), true, string.Empty);
var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
cookie.Expires = DateTime.Now.AddDays(183);
Response.Cookies.Add(cookie);
}
else
{
var ticket = new FormsAuthenticationTicket(1, provider.User.UserName, DateTime.Now, DateTime.Now.AddHours(5), false, string.Empty);
var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
cookie.Expires = DateTime.Now.AddHours(5);
Response.Cookies.Add(cookie);
}
Utils.Log("Användare id=" + dbUser.Id + ", username=\"" + dbUser.Username + "\" loggade in från ip-adress \"" + Request.ServerVariables["REMOTE_ADDR"].ToString() + "\".", System.Diagnostics.EventLogEntryType.SuccessAudit, 102);
// If no password is set, prompt the user to set one.
if (string.IsNullOrEmpty(dbUser.Password))
{
ResultBox.Text = string.Empty;
ResultBox.Visible = false;
loginButton.Visible = false;
rememberMeCheckBox.Visible = false;
usernameLabel.Visible = false;
username.Visible = false;
title.Text = "Skapa lösenord";
passwordLabel.InnerText = "Ange önskat lösenord";
repeatPasswordSection.Visible = true;
savePasswordButton.Visible = true;
}
else
{
if (string.IsNullOrEmpty(Request.QueryString["ReturnUrl"]))
{
Response.Redirect("ShowMaps.aspx");
}
else
{
Response.Redirect(Request.QueryString["ReturnUrl"]);
}
}
}
else
{
ResultBox.Text = "Felaktiga inloggningsuppgifter, var god försök igen.";
ResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
Utils.Log("Misslyckad inloggning för användare \"" + username.Text + "\" från ip-adress \"" + Request.ServerVariables["REMOTE_ADDR"].ToString() + "\".", System.Diagnostics.EventLogEntryType.FailureAudit, 102);
Thread.Sleep(100); // Delay so that a program cannot be built to guess passwords automatically.
}
}
protected void savePasswordButton_Click(object sender, EventArgs e)
{
password.Text = password.Text.Trim();
password2.Text = password2.Text.Trim();
if (string.IsNullOrEmpty(password.Text) || string.IsNullOrEmpty(password2.Text))
{
ResultBox.Text = "Du måste ange ett önskat lösenord och sedan återupprepa lösenordet.";
ResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (password.Text.Length < 5)
{
ResultBox.Text = "Angivet lösenord är för kort, var god ange ett längre och säkrare lösenord.";
ResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (password.Text.Length > 50)
{
ResultBox.Text = "Angivet lösenord är för långt, var god ange ett kortare lösenord.";
ResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (password.Text != password2.Text)
{
ResultBox.Text = "Lösenorden överensstämmer inte, var god återupprepa lösenordet korrekt.";
ResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else
{
var provider = (AdminMemberProvider)Membership.Provider;
var dbUser = fiberDb.Users.Where(u => u.Id == provider.User.UserId).Single();
dbUser.Password = AdminMemberProvider.GeneratePasswordHash(dbUser.Username, password.Text);
fiberDb.SubmitChanges();
if (string.IsNullOrEmpty(Request.QueryString["ReturnUrl"]))
{
Response.Redirect("ShowMaps.aspx");
}
else
{
Response.Redirect(Request.QueryString["ReturnUrl"]);
}
}
}
protected void createUserButton_Click(object sender, EventArgs e)
{
NewUserResultBox.Text = string.Empty;
NewUserResultBox.Visible = false;
name.Text = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo.ToTitleCase(name.Text.ToLower().Trim());
description.Text = description.Text.Trim();
newUserPassword.Text = newUserPassword.Text.Trim();
newUserPassword2.Text = newUserPassword2.Text.Trim();
if (!name.Text.Contains(' ') || name.Text.Length < 5) // Smallest conceivable name length? Two letters for the first name + a space + two letters for the last name.
{
NewUserResultBox.Text = "Du måste ange ditt för- och efternamn.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (name.Text.Contains("@")) // Some people seem to want to enter their e-mail address as their name.
{
NewUserResultBox.Text = "Ditt namn innehåller ett ogiltigt tecken.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (description.Text.Length < 5) // Smallest conceivable length for the name of a fibre association or company.
{
NewUserResultBox.Text = "Du måste ange namnet på din fiberförening eller företag.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (string.IsNullOrEmpty(newUserPassword.Text) || string.IsNullOrEmpty(newUserPassword2.Text))
{
NewUserResultBox.Text = "Du måste ange ett önskat lösenord och sedan återupprepa lösenordet.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (newUserPassword.Text.Length < 5)
{
NewUserResultBox.Text = "Angivet lösenord är för kort, var god ange ett längre och säkrare lösenord.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (newUserPassword.Text.Length > 50)
{
NewUserResultBox.Text = "Angivet lösenord är för långt, var god ange ett kortare lösenord.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else if (newUserPassword.Text != newUserPassword2.Text)
{
NewUserResultBox.Text = "Lösenorden överensstämmer inte, var god återupprepa lösenordet korrekt.";
NewUserResultBox.BoxTheme = ResultBox.ThemeChoices.ErrorTheme;
}
else
{
// We have now passed the validation: create the account, assign access rights and remove the invitation.
var newUser = new User()
{
Created = DateTime.Now,
Name = name.Text,
Description = description.Text,
Password = AdminMemberProvider.GeneratePasswordHash(invitation.Email.Trim().ToLower(), newUserPassword.Text),
Username = invitation.Email.Trim().ToLower(),
LastLoggedOn = DateTime.Now
};
fiberDb.Users.InsertOnSubmit(newUser);
var newMapAccessRight = new MapTypeAccessRight
{
User = newUser,
MapTypeId = invitation.MapTypeId,
AccessRight = invitation.AccessRight
};
fiberDb.MapTypeAccessRights.InsertOnSubmit(newMapAccessRight);
// The invitation has been used up.
fiberDb.MapAccessInvitations.DeleteOnSubmit(invitation);
fiberDb.SubmitChanges();
// Create the login cookie.
var ticket = new FormsAuthenticationTicket(1, newUser.Username, DateTime.Now, DateTime.Now.AddHours(5), false, string.Empty);
var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
cookie.Expires = DateTime.Now.AddHours(5);
Response.Cookies.Add(cookie);
Utils.Log("Ny användare skapad! Id=" + newUser.Id + ", namn=\"" + newUser.Name + "\", username=\"" + newUser.Username + "\" med rättighet=" + newMapAccessRight.AccessRight + " till karta=" + newMapAccessRight.MapTypeId + ". Skapad från ip-adress \"" + Request.ServerVariables["REMOTE_ADDR"].ToString() + "\".", System.Diagnostics.EventLogEntryType.SuccessAudit, 103);
#region SendMail
// Send an e-mail to the system administrators so they know that a new user has been created.
try
{
using (var mail = new MailMessage(
"noreply@fiberkartan.se",
ConfigurationManager.AppSettings.Get("adminMail"),
"Ny användare skapad i FiberKartan",
"Ny användare har skapats. Id=" + newUser.Id + ", namn=\"" + newUser.Name + "\", username=\"" + newUser.Username + "\" med rättighet=" + newMapAccessRight.AccessRight + " till karta=" + newMapAccessRight.MapType.Title + ". Skapad från ip-adress \"" + Request.ServerVariables["REMOTE_ADDR"].ToString() + "\"."
))
{
using (var SMTPServer = new SmtpClient())
{
SMTPServer.Send(mail);
}
}
}
catch (Exception exception)
{
Utils.Log("Misslyckades med att skicka mail angående skapandet av ny användare(id=" + newUser.Id+ "). Errormsg=" + exception.Message + ", Stacktrace=" + exception.StackTrace, System.Diagnostics.EventLogEntryType.Error, 152);
}
// Send a confirmation e-mail to the person who sent the invitation, so they know that the recipient has now registered an account and can work with the map.
try
{
var invitationSentBy = fiberDb.Users.Where(u => u.Id == invitation.InvitationSentBy).FirstOrDefault();
if (invitationSentBy != null)
{
using (var mail = new MailMessage(
"noreply@fiberkartan.se",
invitationSentBy.Username,
"Användare har accepterat din inbjudan till att samarbeta kring karta.",
"Användaren " + newUser.Name + " med e-postadress " + newUser.Username + " har accepterat din inbjudan till att samarbeta kring kartan \"" + newMapAccessRight.MapType.Title + "\"."
))
{
using (var SMTPServer = new SmtpClient())
{
SMTPServer.Send(mail);
}
}
}
}
catch (Exception exception)
{
Utils.Log("Misslyckades med att skicka bekräftelsemail till inbjudaren angående att användaren(id=" + newUser.Id + ") accepterat inbjudan. Errormsg=" + exception.Message + ", Stacktrace=" + exception.StackTrace, System.Diagnostics.EventLogEntryType.Error, 152);
}
#endregion SendMail
if (string.IsNullOrEmpty(Request.QueryString["ReturnUrl"]))
{
Response.Redirect("ShowMaps.aspx");
}
else
{
Response.Redirect(Request.QueryString["ReturnUrl"]);
}
}
}
}
} | code |
Stockists of: Plastic Models. Kitsets. Models. Diecast. Diecast Models. Model Aircraft. Model Cars. Model Motorbikes. Military Figures. Slot Cars. 1:25th. 1:48th. 1:144th. 1:35th. 1:72nd. Model Trucks. Trucks. Kitset Motorbikes. Motorcycles. Kitset Cars. Tanks. Model Tanks. Plastic Kitsets. Kitset Helicopters. Model Helicopters. Helicopters. SciFi. Science Fiction. Kitset Ships. Plastic Ships. Ships. Boats. Subs. Submarine. Army Figures. Vehicles. Scales. Railway, Plastic Kitset, Aircraft 1:32nd, 1:48th, 1:72nd, 1:144th, Motorcycles, Helicopters 1:48th, 1:72nd, Military 1:35th, 1:72nd, Ships 1:350th, 1:700th, Space, Trucks 1:24th-1:25th, Model Rockets, Diecast Models, Radio Control Cars, Military Figures, Slot Cars, War Gaming. For all Your Hobby Needs. | english |
पुन्हाना, (गुरुदत्त भारद्वाज): मई माह का राशन न मिलने से नाराज पुन्हाना के एक वार्ड के लोगों ने डीपो होल्डर, जिला प्रशासन व फूड एंड सप्लाई अधिकारी के खिलाफ विरोध प्रदर्शन कर जमकर भड़ास निकाली। वार्ड के लोगों का आरोप है कि डीपो होल्डर ने मई माह का राशन अधिकारियों की मिलीभगत से बाजार में बेच दिया। विरोध प्रदर्शन के दौरान सैकडों कार्डधारक ने डीपो होल्डर व खाद्य एंव आपूर्ति विभाग के अधिकारियों के खिलाफ जमकर नारेबाजी की।
विरोध प्रदर्शन में भाग लेने वाले धर्मपाल बख्शी,संजय बडगुजर, भोलू पूर्व सरपंच, जगदीश मेम्बर, जिले सिंह, राजेश, राकेश, नरेश, भरतलाल, गोरधन, कालीचरण, अशोक देवेन्द्र महेन्द्र सुरेश, भगवान आदि लोगों का कहना है की उन्होंने राशन ना मिलने की शिकायत उपायुक्त व खाद्य एंव आपूर्ति विभाग से की । लेकिन डीपो होल्डर पर अभी तक कोई कार्यवाही नहीं की गई। लोगों ने बताया कि जब भी कोई विभागीय कार्यवाही होती है तो उन्हें पूछा तक नहीं जाता। कार्यवाही गुपचुप तरीके से अधिकारियों की मिलीभगत से की जाती है।
लोगों ने बताया कि डीपो होल्डर ने आज तक कभी भी कार्डधारको को चीनी तक नहीं दी। इतना ही नहीं गेंहू से लेकर दाल तक भी साल भर में पांच छ महिने ही दी जाती है। कार्डधारकों ने बताया कि जब भी उन्होंने डीपो होल्डर से राशन के बारे में बात की तो उसने साफ कह दिया की साल भर में राशन तो छ महिने ही मिलेगा हमें उपर भी अधिकारियों को देना पड़ता है। लोगों ने बताया कि डीपो होल्डर की बातचीत की रिकार्डिग भी उनके पास है।
कार्डधारकों ने बताया कि सारा खेल स्थानिय खाद्य एंव आपूर्ति विभाग के अधिकारियों की मिलीभगत के कारण चल रहा है जो मोटे कमीशन के लालच में गरीब लोगों के हलक से निवाला छीन रहे है। आपकों बता दें की पुन्हाना शहर में ही नहीं बल्कि गावों में पिछलें दो महिने से राशन नहीं दिया गया है। अधिकारियों को सारे मामले की जानकारी है लेकिन मोटे कमीशन के लालच में कार्यवाही नहीं हो पाती। प्रदर्शन कर रहे लोगों ने जिला प्रशासन से मांग कि की उक्त डीपो होल्डर की सप्लाई बंद कर इसकी जांच कराई जाए। | hindi |
This information is relevant for existing Premium, Transitional or "one-for-one" Standard Feed-in Tariff customers.
If you applied for the Premium, Transitional or Standard schemes and your application was not accepted prior to the closure of these schemes, you are unlikely to be a customer under one of these schemes and the following information will not apply to you. In this case you are likely to be receiving the current feed-in tariff, that is presently set until 1 July 2019 as a single-rate minimum feed-in tariff of 9.9 cents per kilowatt hour or a time-varying feed-in tariff. This is the only scheme in Victoria open to new applicants.
Can I upgrade my system size?
No. Existing customers under the PFIT or TFIT scheme will forfeit their access to their respective schemes if they add generating capacity to their solar system through additional panels. The capacity of the panels must remain the same or less than what they were before the scheme closed to new applicants.
If customers do, however, still wish to add panels to their original generating capacity they will be able to move to the current feed-in tariff. From 1 July 2018, customers on the current minimum feed-in tariff have been receiving either a single-rate minimum feed-in tariff of 9.9 cents per kilowatt hour or a time-varying feed-in tariff.
This applies to the generating capacity of the panels rather than the inverter. While customers are allowed to install inverters that are larger than the inverter installed at the time of the scheme closure, it is the installation of extra panels that will make a customer ineligible for the PFIT or TFIT schemes. It should be noted that an oversized inverter will not increase the generating capacity of your solar system, and in some instances may actually decrease its generating capacity.
If you wish to add generating capacity to your renewable energy system, you will need to discuss this with your electricity retailer. Your existing agreement is likely to include terms and conditions relating to the generating capacity of your renewable energy system.
If you are considering installing a different type of renewable energy system to the type you already have installed, you should contact your retailer to discuss your options.
If I move out of my residence, can I take my system with me and claim the Premium, Transitional or Standard Feed-in Tariff?
No. The Premium, Transitional and Standard Feed-in Tariff schemes are linked to the property where the system was installed. You cannot move the system to a new address and continue to access the respective scheme.
However, if the system remains on the property, under the Premium and Transitional schemes the new owners or tenants would be eligible to access the same scheme as the previous owners. You will not be able to retain the "one-for-one" Standard Feed-in Tariff if you change address. See the next question for further information.
If I move into a house with solar panels or a renewable energy system which was previously signed up to the Premium, Transitional or Standard feed-in tariff, will I be able to access the scheme?
Yes, you will be able to arrange with a retailer to receive the PFIT or TFIT if you move into a property already accessing the scheme, as long as you do not increase the generating capacity of the existing system. If you add solar panels to the existing system you will forfeit your access to the scheme.
Both of these schemes are linked to the property where the system was installed when the previous owners or tenants applied for the scheme. This means that any house that is signed up for the PFIT or TFIT scheme can continue to access the respective rate until the end date of the scheme, provided other eligibility criteria are maintained. The PFIT ends on 1 November 2024 and the TFIT ends on 31 December 2016.
No, you will not be eligible to access the 'one-for-one' rate under the Standard Feed-in Tariff if you move into a property already accessing the scheme. The 'one-for-one' rate under the Standard Feed-in Tariff is a negotiated agreement between the customer and the retailer, which relates only to that particular customer and the property where the renewable energy system was installed when the agreement was entered into. So if you move into a house which was previously signed up to the Standard Feed-in Tariff, you will not be eligible for the scheme.
Can I change electricity retailers and still receive the Premium or Transitional Feed-in Tariff?
Yes, if you wish to change retailers, you are able to arrange access to the respective scheme with your new retailer. However, make sure you research the rates and other terms and conditions offered by other electricity retailers prior to changing. Some retailers will offer a "top up" over and above the regulated minimum feed-in tariff rate. Not all retailers offer a "top up" or the same "top up". It is recommended that you consider the whole package being offered by the retailer, including supply rates, rather than just whether a "top up" rate is available to ensure you make the best choice for your circumstances.
You should also note that only retailers with more than 5,000 customers are required to offer the Premium Feed-in Tariff (PFIT) and Transitional Feed-in Tariff (TFIT), although retailers with fewer than 5,000 customers may choose to offer these rates.
Under the PFIT, retailers must offer a minimum of 60 cents per kilowatt hour for excess electricity exported into the grid.
Under the TFIT, retailers must offer a minimum of 25 cents per kilowatt hour for excess electricity exported into the grid.
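As a purely illustrative calculation (the export figure below is hypothetical, not drawn from the schemes' documentation): a household exporting 2,000 kilowatt hours of excess electricity over a year would be credited at least 2,000 × $0.60 = $1,200 under the PFIT, at least 2,000 × $0.25 = $500 under the TFIT, and 2,000 × $0.099 = $198 under the current 9.9 cent single-rate minimum feed-in tariff.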
No. The 'one-for-one' rate under the Standard Feed-in Tariff is a negotiated agreement between the customer and the retailer. Now that the scheme is closed to new applicants, retailers will no longer be offering this rate to new customers.
Therefore, if you are receiving the SFIT and decide to change retailers the existing feed-in tariff contract will end and you will be ineligible to receive the 'one-for-one' Standard Feed-in Tariff. You may be eligible to receive the feed-in tariff for new applicants. Until 1 July 2019, customers on the current minimum FiT will receive either a single-rate minimum FiT of 9.9 cents per kilowatt hour or a time-varying FiT.
Can I put in a dedicated load if I am receiving the PFIT / TFIT / SFIT?
A dedicated load is metered separately to the household's general usage. So the electricity generated by your solar PV or other renewable energy facility is not offset against the dedicated usage, only your general household usage. Therefore, your energy consumption and generation would not be net metered and all Victorian feed-in tariff schemes require net metering. This means that electricity consumption in the household must be offset against all solar PV or renewable energy generation. This can be compared with gross metering where generation from the panels does not offset your own usage. Customers are usually better off financially under net metering arrangements as offsetting your own use has the greatest value.
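As a simple hypothetical illustration of why net metering generally pays better: if a household generates 1,000 kilowatt hours over a billing period and consumes 800 of them at the time they are generated, net metering means those 800 kilowatt hours directly avoid purchases at the (typically higher) retail usage rate and only the 200 kilowatt hours actually exported earn the feed-in rate; under gross metering all 1,000 kilowatt hours would earn the feed-in rate, but the full 800 kilowatt hours of consumption would have to be bought at the retail rate.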
Speak to your retailer if you are considering installing a dedicated load. There may be other options available to you.
What happens if my property with PFIT / TFIT / SFIT is no longer my primary place of residence?
You will no longer be eligible for the scheme. When these schemes were open to new customers, they were only available to residential consumers who applied for their primary place of residence. If the property is no longer your primary place of residence, you will forfeit your access to the scheme.
If you have moved house, the new owners of the property can access the respective scheme by signing a new contract with their retailer.
If you have tenants moving into this property, they will be able to access the respective scheme.
You should discuss your options with your electricity retailer.
You should discuss this with your retailer as your feed-in tariff and usage rates may be adjusted according to your existing agreement.
Can I replace different components of my system?
It is important that you do not increase the generating capacity of your existing system, otherwise you will forfeit your access to the scheme. This applies to the generating capacity of the panels rather than the inverter. While customers are allowed to install inverters that are larger than the inverter installed at the time of the scheme closure, it is the installation of extra panels that will render a customer ineligible for the PFIT or TFIT schemes. It should be noted that an oversized inverter will not increase the generating capacity of your solar system, and in some instances may actually decrease its generating capacity.
If your system generating capacity may increase with the replacement of any components, please discuss this with your retailer to ensure you do not lose your eligibility under the scheme.
If I am renovating my house/demolishing my house/building a new house on my property, can I continue to access the same scheme?
Generally, this will be allowed, but there are some restrictions and you will need to discuss your circumstances with your electricity retailer. Your retailer will be able to advise you on the best approach according to your situation.
It is recommended you place your panels into storage for the duration of the work being completed at your property to prevent any damage to your solar or other renewable energy system and to ensure that net metering arrangements continue at your property. If your system is attached to the house and continues to operate whilst it is not your primary place of residence (for instance, you are not staying in the house during the work) you may forfeit your access to the PFIT or TFIT scheme.
You also cannot move your system to a new property and reclaim the PFIT, TFIT or SFIT. This is because the feed-in tariffs are linked to the property rather than the system.
If you are replacing any equipment, you must ensure that the system's generating capacity after the building work is complete is not higher than when you registered for the Premium or Transitional scheme. If your system's generating capacity increases, you will forfeit your access to these schemes.
Your feed-in tariff relies on your electricity meter. If this meter is damaged during the building work and a new meter is required you will still retain access to the relevant scheme. Contact your electricity retailer to discuss your arrangements and to ensure you do not forfeit your access to the scheme.
Please ensure that the necessary reinstallation work is completed by a licensed electrical contractor. This will ensure that the required paperwork to maintain your eligibility under the scheme is completed and will also ensure the safety of your system.
Can I remain on my existing feed-in tariff scheme if my house burns down and I want to replace the solar panels?
Yes. This is assuming that you had originally qualified for the feed-in scheme before its relevant closure date, your new house at the same property is still your primary place of residence (for PFIT and TFIT), you maintain net metering arrangements, and you do not increase the capacity of your original solar panels.
I am receiving the Premium, Transitional or Standard Feed-in Tariff at my property. Can I also claim the current Feed-in Tariff, at the same property?
If you are installing a new solar or other type of renewable energy system to access the current Feed-in Tariff, you need to ensure that your existing system and the new system are separately metered (i.e. the system receiving PFIT, TFIT or "one-for-one" SFIT continues to be net metered). This includes if you are installing a new system on the same property. You should discuss your plans with your electricity retailer.
If you are accessing the PFIT, TFIT or SFIT you cannot move your existing system from your current property where you are receiving credits for the excess electricity you generate.
For the system you are receiving the rate for, you must ensure this continues to be your primary place of residence.
You cannot add generating capacity to your existing system if you are already accessing the PFIT or TFIT scheme.
If you wish to add generating capacity to your system, you will need to discuss this with your electricity retailer. The only size restriction for customers receiving the standard rate is that the system must be less than 100 kilowatts in generating capacity. Your existing agreement is likely to include terms and conditions relating to the generating capacity of your renewable energy system. You should also check with your electricity distributor if they have any pre-approval checks you will need to undertake before considering upgrading your system size. | english |
रणनीति के तहत अशोक गहलोत के रूप में चल सकते हैं राहुल गांधी नया दांव! - विंडो तो न्यूज
अशोक गहलोत बन सकते हैं कांग्रेस के राष्ट्रीय अध्यक्ष!
गांधी परिवार के वफ़ादार अशोक गहलोत को मिल सकती है कांग्रेस अध्यक्ष की कमान
जून २० (त्न) लोकसभा चुनाव में करारी हार के बाद कांग्रेस में 'मंथन' जारी है। २०14 और २०19 के लोकसभा चुनावों में जिस तरह से कांग्रेस को हार का सामना करना पड़ा है, उससे राहुल गांधी के नेतृत्व पर सवाल उठने लगे हैं। दस साल की सत्ता के बाद कांग्रेस को साल २०14 के लोकसभा चुनाव में सत्ता विरोधी लहर का सामना करना, और कांग्रेस सिर्फ़ ४४ सीटों पर ही जीत हासिल कर सकी थी। २०19 के लोकसभा में कांग्रेस को 'आशा' थी कि मोदी सरकार को भी सत्ता विरोधी लहर का सामना करना पड़ेगा और उसे हार नसीब होगी। लेकिन कांग्रेस ने जो सोचा वो नहीं हुआ, और २०19 के लोकसभा चुनाव में कांग्रेस सिर्फ़ ५२ सीटों पर ही सिमट गई।
इस लोकसभा चुनाव में करारी हार के बाद कांग्रेस अध्यक्ष राहुल गांधी ने 'नैतिक ज़िम्मेदारी' लेते हुए कांग्रेस अध्यक्ष पद से इस्तीफ़े की पेशकश की थी। लेकिन जैसा कि आप जानते हैं कि कांग्रेस पार्टी में गांधी परिवार ही पार्टी में 'एकता' की धुरी है। ऐसे में तमाम कांग्रेसी नेताओं ने 'एकसुर' से राहुल गांधी ने कांग्रेस अध्यक्ष बने रहने की गुजारिश की थी, लेकिन राजनीति में धीरे-धीरे 'परिपक्व' होते राहुल गांधी की 'रणनीति' कुछ और ही लगती है इसलिए उन्होंने अध्यक्ष बने रहने से 'इनकार' कर दिया।
सूत्रों से मिली जानकारी के मुताबिक़, राजस्थान के मुख्यमंत्री अशोक गहलोत जल्द ही कांग्रेस के नए राष्ट्रीय अध्यक्ष बन सकते हैं। यदि ऐसा होता है तो यह काफ़ी लम्बे समय बाद होगा जब कांग्रेस पार्टी का राष्ट्रीय गांधी परिवार से नहीं होगा। वैसे कांग्रेस में कई नेता हैं जो कि अशोक गहलोत से ज़्यादा वरिष्ठ और अनुभवी हैं, लेकिन अशोक गहलोत को कांग्रेस अध्यक्ष बनाने के पीछ कई 'कारण' हैं।
दरअसल, सबसे बड़ी बात है कि राहुल गांधी किसी ऐसे व्यक्ति को कांग्रेस पार्टी का राष्ट्रीय अध्यक्ष बनाना चाहेंगे जो कि गांधी परिवार के प्रति 'वफ़ादार' हो साथ ही बाक़ी सभी नेताओं को भी साथ लेकर चल सके। इस मामले में यदि अशोक गहलोत की बात की जाए तो साफ़ है कि वे गांधी परिवार के सबसे क़रीबी नेताओं में से एक हैं। गहलोत ने गांधी परिवार की तीन पीढ़ियों के साथ काम किया है। अशोक गहलोत इन्दिरा गांधी, राजीव गांधी और नरसिम्हा राव सरकार में केन्द्र में मंत्री रह चुके हैं। यानी कि साफ़ है कि वे राहुल गांधी के पिता और दादी के साथ काम कर चुके हैं तो ऐसे में राहुल गांधी को उन पर 'काफ़ी भरोसा' होगा।
वहीं कांग्रेस पार्टी में गहलोत को एक 'साफ़' छवि वाले नेता के रूप में जाना जाता है। विपक्ष के आरोपों को छोड़ दिया जाए, तो इतने सालों की राजनीति में अशोक गहलोत पर अभी तक कोई 'गम्भीर आरोप' नहीं लगा है और वे हमेशा से विवादों से दूर भी रहे हैं। एक नेता के रूप में कांग्रेस पार्टी में उनका 'रुतबा' काफ़ी बढ़ा है। गुजरात विधानसभा चुनाव में जिस तरह से 'रणनीति' बनाकर उन्होंने भाजपा को कड़ी टक्कर दी थी, उससे राहुल गांधी उनसे काफ़ी 'प्रभावित' हैं।
'अनुभव' के मामले में भी अशोक गहलोत कांग्रेस के काफ़ी नेताओं पर भारी हैं। अशोक गहलोत राजस्थान के तीन बार मुख्यमंत्री बन चुके हैं। साल २०१८ के विधानसभा चुनाव में अशोक गहलोत ने राजस्थान में कांग्रेस की जीत में अहम भूमिका निभाई थी। राष्ट्रीय स्तर पर भी कांग्रेस में गहलोत काफ़ी काम कर चुके हैं, जिसके कारण गांधी परिवार को उन पर काफ़ी 'भरोसा' है।
देखा जाए तो अशोक गहलोत का राजनीतिक सफ़र ४० साल से भी ज़्यादा का है। राहुल गांधी चाहेंगे कि 'संकट' से दौर से गुजर रही कांग्रेस की कमान ऐसे व्यक्ति को सौंपी जाए, जो वरिष्ठ, अनुभवी और सक्रिय हो। यानी कि राहुल गांधी के 'सम्भावित पैमाने' पर अशोक गहलोत खरे उतरते नज़र आ रहे हैं। कांग्रेस पार्टी में वैसे तो अध्यक्ष पद के कई दावेदार हैं, लेकिन राहुल गांधी, अशोक गहलोत को पार्टी का राष्ट्रीय अध्यक्ष बनाकर उनकी 'साफ़ छवि' का फ़ायदा उठाना चाहेंगे।
गांधी परिवार के 'क़रीबी' अशोक गहलोत यदि कांग्रेस अध्यक्ष बनते हैं तो यह राहुल गांधी के लिए काफ़ी 'फ़ायदेमंद' रहेगा। सबसे पहले तो कांग्रेस अध्यक्ष पद की ज़िम्मेदारी से 'मुक्त' होकर राहुल गांधी पूरे देश में घूमकर एक बार फ़िर से कांग्रेस को 'मज़बूत' करने का काम कर पाएंगे, जिसमें अनुभवी अशोक गहलोत उनका 'मार्गदर्शन' कर सकते हैं। वहीं अशोक गहलोत यदि कांग्रेस अध्यक्ष बनते हैं तो यह तय है कि वे हर 'फ़ैसले' गांधी परिवार को 'विश्वास' में लेकर ही लेंगे, ऐसे में राहुल गांधी का 'अप्रत्यक्ष' रूप से पार्टी पर 'नियंत्रण' भी रहेगा। यानी कि कहा जा सकता है कि अशोक गहलोत को कांग्रेस का राष्ट्रीय अध्यक्ष बनाना राहुल गांधी का एक 'सूझबूझ' भरा फ़ैसला होगा।
दो दिवसीय अंतर्राष...
यदि यूट्यूब की धीम...
यदि यूट्यूब की धीमी गति से हैं परेशान तो अपनाएं यह तरीक़े | hindi |
۱۳۸۲۱۴ پِن کوڈ تہٕ ریڈِیو تٔھرپی خدمت پیش کرن وٲلِس علاقس مَنٛز واقع ایی ایس آٮٔۍ سی سینٹر ژھار | kashmiri |
\begin{document}
\title{Quantum walk transport on carbon nanotube structures}
\author{J. Mare\v s, J. Novotn\'y, I. Jex}
\affiliation{Department of Physics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, B\v rehov\'a 7, 115 19 Praha 1 - Star\'e M\v esto, Czech Republic}
\date{\today}
\begin{abstract}
We study source-to-sink excitation transport on carbon nanotubes using the concept of quantum walks. In particular, we focus on transport properties of Grover coined quantum walks on ideal and percolation perturbed nanotubes with zig-zag and armchair chiralities. Using analytic and numerical methods we identify how geometric properties of nanotubes and different types of a sink altogether control the structure of trapped states and, as a result, the overall source-to-sink transport efficiency. It is shown that chirality of nanotubes splits behavior of the transport efficiency into a few typically well separated quantitative branches. Based on that we uncover interesting quantum transport phenomena, e.g. increasing the length of the tube can enhance the transport and the highest transport efficiency is achieved for the thinnest tube. We also demonstrate, that the transport efficiency of the quantum walk on ideal nanotubes may exhibit even oscillatory behavior dependent on length and chirality.
\end{abstract}
\keywords{Quantum transport; Quantum walk; Carbon nanotube; Localization phenomena; Percolation.}
\maketitle
Discrete coined quantum walks (CQWs) have become a standard tool in studying transport phenomena in the quantum domain \cite{review}. Over the last two decades, quantum walker's behavior has been analyzed in the context of recurrence phenomena \cite{Stefanak2008,Werner2013}, state transfer \cite{Tanner2009,Skoupy2016}, speed of wave packet propagation \cite{Aharonov2001}, hitting times \cite{Magniez}, or e.g. topological phenomena \cite{Asboth2012}. Investigations, aiming first at the quantum walker on the line, have gradually broadened the scope of their interest to different graph geometries like e.g. cycles \cite{Aharonov2001}, hypercubes \cite{Moore2002,Portugal2008}, trees \cite{Segawa2009}, honeycombs \cite{Lyu2015,Kendon2016}, spidernets \cite{Segava2013} or fractal structures \cite{Lara2013} (for more see review \cite{review}).
Analysis of complex graph structures has revealed that if vertices with degree at least three are present, the walker's dynamics may exhibit localisation originating from the presence of so-called trapped states \cite{Inui2004,inui:psa,inui:grover1,miyazaki,watabe,falkner,machida}. They appear in various quantum systems and are known, in a different context, also as localized invariant or dark states \cite{Fleischhauer2000,Poltl2009,Creatore2013,Mendoza2013}. These are eigenstates of the given dynamics, whose support does not spread over the whole position space. Due to that, the initial states having overlap with the trapped states can not fully propagate through the medium and the efficiency of quantum transport may be significantly reduced \cite{Rebentrost2009,Chin2010}.
On the other hand, trapped states were found to be fragile with respect to certain decoherence mechanisms arising in the presence of random external perturbations \cite{Caruso2009}. When the quantum walker moves on a changing graph whose edges are randomly and repeatedly closed and opened again, we arrive at so-called dynamically percolated coined quantum walks (PCQWs) \cite{asymptotic1,asymptotic2}, which are capable of destroying some trapped states of the original non-percolated CQW \cite{assisted_transport}. Recently, it was shown how the underlying geometry of a Grover PCQW controls the structure of the walker's trapped states \citep{theory}. The detailed analytical recipe for constructing the basis has contributed essentially to identifying interesting counterintuitive quantum transport phenomena \citep{transport_effects}. In particular, studying Grover PCQWs on the ladder graph and Cayley trees demonstrated that by increasing the transport distance or adding redundant blind branches one can eventually enhance excitation source-to-sink quantum transport. Since all trapped states of PCQWs are trapped states of their non-percolated versions, these counterintuitive transport phenomena apply to CQWs as well.
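For the reader's convenience we recall the explicit form of the Grover coin used throughout (a standard definition, supplied here rather than taken from the surrounding text): at a vertex of degree $d$ it acts as the diffusion operator
\begin{equation*}
\left(C_d\right)_{ij}=\frac{2}{d}-\delta_{ij},\qquad i,j=1,\dots,d,
\end{equation*}
i.e. $C_d=\tfrac{2}{d}J_d-I_d$ with $J_d$ the all-ones matrix; for $d=2$ it reduces to the Pauli matrix $\sigma_x$.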
\begin{figure}[th]
\centering
\includegraphics[width=160 pt]{img/chirality.png}
\caption{(color online) Schematic explanation of the chirality of nanotubes: the basis vectors $(1,0)$ and $(0,1)$, chirality vectors of two example tubes: $(6,0)$ chirality vector as a solid green arrow and $(3,3)$ vector as a dotted red arrow with the corresponding orthogonal vectors determining two basal length segments of tubes. Vertices contained in tubes are shown (identified by circles). Note that in the case of $(n,0)$ tubes we subsequently cut out the vertices of degree 1 (white) before using the structure for a quantum walk. Tube length scale is shown using the distance of neighboring nodes as the unit of length.}
\label{fig:chirality}
\end{figure}
In this paper we explore transport properties of quantum walks on realistic carbon nanotube structures. Related to the above, we focus primarily on excitation source-to-sink quantum transport for Grover PCQWs, and the analytically obtained findings are subsequently checked numerically for Grover CQWs. Recently, the speed of excitation transport for the Grover CQW on carbon nanotubes was studied numerically for one particular initial state, which is the only one that exhibits full source-to-sink transport \cite{Kendon2016}. Unfortunately, numerical simulations do not constitute an efficient tool for unravelling the delicate skein of different regimes of the walker's behavior when no analytical insight is available. Here we show that carbon nanotubes share a similar structure of trapped states with the ladder graph. Consequently, these nanotubes, despite their significantly higher complexity compared to the ladder, also exhibit the enhancement of the source-to-sink transport by extending the transport distance. Moreover, the shape of important trapped states attached to open nanotube ends is controlled by a characteristic structure parameter of carbon nanotubes, the chirality. This geometrical aspect of carbon nanotubes splits the transport efficiency into different quantitative branches whose mutual gaps can be estimated even from these non-orthogonal trapped states. In PCQWs, the diameter of the tube further generates a subtle quantitative separation within these branches, following the rule that thinner tubes exhibit better transport. In CQWs this behavior is more complex, but also here the thinnest nanotube shows the best transport. Moreover, due to a richer family of trapped states we can even identify oscillatory behavior of the overall transport efficiency for a certain type of sink.
Let us introduce a general model of the walker's dynamics refined for the case of carbon nanotubes. We adopt the definition of a coined quantum walk presented in \citep{theory}, as it is able to capture simultaneously the complexity of carbon nanotube structures and random disturbances of the underlying medium (percolation).
A quantum walk is defined on a pair of finite graphs. The first, undirected structure graph $G(V,E)$ describes the geometry of the underlying walker's medium. The set of discrete vertices (nodes) $V$ represents all possible walker's positions on the nanotube. The walker can travel among vertices in both directions along undirected edges from $E$. In our case the structure graph is a coiled strip of the honeycomb lattice. The way we cut out the strip of the honeycomb lattice imprints the basic shape of the carbon nanotube and is given by the chirality vector $(m,n)$, where $m$ and $n$ are non-negative integers. The numbers $m$ and $n$ are coefficients in a non-orthogonal basis defined on the honeycomb lattice as shown in figure \ref{fig:chirality}. The circumference of the tube is given by identifying the beginning of the chirality vector with its end-point. The orthogonal vector represents the axial direction of the tube. For any chirality there is a basic length segment, which is repeated in the axial direction.
In this work we study transport properties for two distinct classes of nanotubes, namely $(n,n)$ "armchair" tubes (Fig. \ref{fig:tubes_sink_init} (a)) and $(n,0)$ "zig-zag" tubes (Fig. \ref{fig:tubes_sink_init} (b)). Nevertheless, the general results are mostly valid for tubes with arbitrary chirality. The structures are generated using TubeGen 3.4 \citep{tubegen}.
\begin{figure}
\centering
\includegraphics[width=180 pt]{img/tubes_sink_init.png}
\caption{Examples of the initial subspace (green dotted arrows) and sink subspace (red dashed arrows) variants: loops type shown on a $(3,3)$ tube (a) and one-vertex type shown on a $(6,0)$ tube (b).}
\label{fig:tubes_sink_init}
\end{figure}
For the description of the walker's state space and evolution we also introduce a directed state graph $G^{(d)}(V,E^{(d)})$. The set of vertices is the same as in the structure graph and we assign to each undirected edge $e = \{v_1,v_2\}\in E$ in the structure graph $G$ two directed arcs $e_1=(v_1,v_2), e_2=(v_2,v_1) \in E^{(d)}$ connecting the vertices in both directions. Therefore, each inner vertex of the state graph associated with the nanotube has three outgoing and three incoming directed edges. In order to make the whole state graph 3-regular, we add self-loops $e_l \in E_l^{(d)}$ beginning and ending in the same vertex, which belongs to one of the open ends of the tube. An example of both graphs is depicted in Fig. \ref{fig:structure_and_state}. Each directed edge (including self-loops) $(v_1, v_2) \in E^{(d)} \cup E_l^{(d)}$ is then associated with a basis state of an orthonormal basis and represents the walker standing in the vertex $v_1$ and facing towards $v_2$. Further, for every vertex $v\in V$ we denote the vertex subspace of states corresponding to edges beginning in $v$ as $\mathscr{H}_v$. Again, see the example of the notation introduced above in Fig. \ref{fig:structure_and_state}. The whole walker's Hilbert space $\mathscr{H}$ can be written as
\begin{align}
\mathscr{H} = \mathrm{span}\left(\ket{e^{(d)}} | e^{(d)} \in E^{(d)} \cup E_l^{(d)} \right) = \bigoplus_{v \in V} \mathscr{H}_v.
\end{align}
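Since the state graph is 3-regular, each vertex subspace $\mathscr{H}_v$ is three-dimensional and the dimension of the walker's Hilbert space follows immediately (a remark added here for orientation),
\begin{align*}
\dim\mathscr{H} \;=\; 3\,\#V .
\end{align*}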
\begin{figure}
\centering
\includegraphics[width=140 pt]{img/structure_and_state.png}
\caption{Example of both the structure and the state graph, demonstrated on the end of the $(3,3)$ armchair nanotube. Four vertices $v_1$ to $v_4$ are connected by three undirected edges $A$, $B$ and $C$ from the structure graph. For all of them we have three pairs of directed arcs $a_1$, $a_2$, $b_1$, $b_2$, $c_1$ and $c_2$ and additionally two self-loops $\alpha$ and $\beta$ from the state graph. The Hilbert space of the corresponding walk is $\mathscr{H}=\mathrm{span}(\ket{a_1},\ket{a_2},\ket{b_1},\ket{b_2},\ket{c_1},\ket{c_2},\ket{\alpha},\ket{\beta},\ldots)$ and e.g. the vertex subspace $\mathscr{H}_{v_2}=\mathrm{span}(\ket{a_2},\ket{b_1},\ket{\alpha})$.}
\label{fig:structure_and_state}
\end{figure}
Each step of the discrete CQW evolution is realized by the successive application of three operators. First, the reflecting shift operator $R$ (also called flip-flop) displaces the walker by swapping pairs of amplitudes corresponding to paired arcs, $R\ket{(v_1,v_2)}= \ket{(v_2,v_1)}$, and it leaves the amplitudes corresponding to self-loops unchanged. Then the Grover coin operator $C$ acts locally in each vertex subspace as the Grover matrix
\begin{align*}
G_3 &=
\frac{1}{3}\left[
\begin{array}{rrr}
-1 & 2 & 2 \\
2 & -1 & 2 \\
2 & 2 & -1 \\
\end{array}
\right].
\end{align*}
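The matrix $G_3$ is the three-dimensional instance of the general Grover diffusion coin; for reference (this remark is not part of the original text), in dimension $d$ it reads
\begin{align*}
G_d \;=\; \frac{2}{d}\,J_d - I_d ,
\end{align*}
where $J_d$ is the $d\times d$ all-ones matrix and $I_d$ the identity; setting $d=3$ reproduces the matrix above.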
The simultaneous choice of the Grover coin and the reflecting shift operator results in a rich family of trapped states of CQWs and PCQWs. Their general structure as well as the effect of the order of $C$ and $R$ is discussed in \citep{theory}. Finally, at the end of each step the part of the walker's wave function which has reached the sink is taken away. Our aim is to study excitation transport from an initial region to a target region represented by the sink. For a given walk the sink is defined as a set of directed edges in the state graph and the span of the associated states constitutes the sink subspace $\mathscr{H}_s$. Note that this definition includes the setting with no edge in the sink set, which is referred to as a quantum walk without sink. We introduce the projector onto the sink subspace $\Pi_{\mathscr{H}_s}$ and its complement $\Pi = I-\Pi_{\mathscr{H}_s}$. Thus, each step is finished by a projective measurement of the walker's wave function on the complement of the sink subspace. Overall, one step of the CQW maps the walker from a state $\rho(t)$ to the state
\begin{equation}
\label{step_percolated_sinked}
\rho(t+1)=\Pi C R \rho(t) R C \Pi.
\end{equation}
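Note that for a walk without sink, i.e. $\Pi = I$, this step reduces to a unitary evolution (an explicit remark added here),
\begin{align*}
\rho(t+1) \;=\; U\rho(t)\,U^{\dagger}, \qquad U = C R ,
\end{align*}
so the trace, and hence the survival probability, is preserved; only the presence of the sink makes the map trace decreasing.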
The PCQW incorporates into its dynamics the so-called dynamical percolation \citep{asymptotic1}. In each step a subset of open edges $K \subset E$ is chosen randomly; only these can be traversed by the walker. The remaining edges are closed in both directions. This results in a different reflecting shift operator $R_K$ for every configuration of open edges $K$. The arcs on closed edges are treated as loops, i.e. their amplitudes are not swapped. As the actual configuration of open edges is not under control, the resulting step of the PCQW evolution averages over all of them and reads
\begin{equation}
\label{step_percolated}
\rho(t+1)=\sum_{K\subset E}\pi_K \Pi C R_K \rho(t) R_K C \Pi,
\end{equation}
where $\pi_K$ denotes the probability distribution of these configurations.
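As an illustration (an assumption not made explicit in the text above), if every edge is taken to open independently with the same probability $p$ in each step, the configuration distribution has the product form
\begin{align*}
\pi_K \;=\; p^{\,|K|}\,(1-p)^{\,|E|-|K|}, \qquad K\subset E ,
\end{align*}
where $|K|$ denotes the number of open edges in the configuration $K$.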
The evolution of CQWs and PCQWs is not, in general, trace preserving, and $p(t) = \operatorname{Tr}\left( \rho(t) \right)$ expresses the probability of the walker still being away from the sink. In contrast to the classical random walk, the quantum walker is able to avoid the sink indefinitely, and we can quantify the overall ability of the quantum system to transport an excitation initiated in the state $\rho(0)$ into the sink by the asymptotic transport probability (ATP)
\begin{equation}
\label{def_efficiency}
q(\rho(0)) = 1-\operatorname{Tr}\left(\lim_{t \rightarrow +\infty} \rho(t) \right).
\end{equation}
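For readers who prefer to check the numbers, a minimal numerical sketch (our illustration, not part of the original analysis) of estimating the ATP by direct iteration of the one-step map $\rho\mapsto\Pi C R\,\rho\,R C \Pi$ could look as follows; the matrices $C$, $R$, $\Pi$ and the initial density matrix are assumed to be given in the arc basis of the state graph, and the function name is of course hypothetical.
\begin{verbatim}
import numpy as np

def estimate_atp(C, R, Pi, rho0, steps=10000):
    # One-step map: rho -> Pi C R rho R C Pi.
    # Since Pi, C and R are Hermitian here, R C Pi is the adjoint of Pi C R.
    A = Pi @ C @ R
    rho = rho0.copy()
    for _ in range(steps):
        rho = A @ rho @ A.conj().T
    # ATP: q = 1 - lim Tr(rho(t)).
    return 1.0 - float(np.real(np.trace(rho)))
\end{verbatim}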
The complement $p=1-q$ is the trapping probability. It is due to the presence of trapped states, whose overlap with the walker's initial state determines the ATP \citep{theory,assisted_transport}. The structure of trapped states is determined not only by the chosen nanotube parameters (its length and chirality) but also by the choice of the sink. Indeed, trapped states of CQWs or PCQWs with a sink are exactly those trapped states of the corresponding quantum walk without sink which additionally have zero overlap with the sink subspace. They are called sr-trapped states ("sink-resistant") to distinguish them from trapped states of the corresponding quantum walk without sink. A different choice of the sink results in a different selection of sr-trapped states from the set of trapped states.
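Restating the above compactly (following \citep{theory,assisted_transport}), if $\{|\phi_j\rangle\}_j$ denotes an orthonormal basis of the subspace spanned by the sr-trapped states of the given setting, the trapping probability and the ATP of an initial state $\rho(0)$ can be written as
\begin{align*}
p\bigl(\rho(0)\bigr)\;=\;\sum_j \langle\phi_j|\,\rho(0)\,|\phi_j\rangle ,\qquad
q\bigl(\rho(0)\bigr)\;=\;1-p\bigl(\rho(0)\bigr),
\end{align*}
which makes explicit that only the overlap of the initial state with the sr-trapped states matters.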
Our intention is to study the ATP for different choices of the sink and the initial state $\rho(0)$. Similarly to the sink subspace, we choose a subset of directed edges to define the initial subspace $\mathscr{H}_i$. Once the initial subspace $\mathscr{H}_i$ is chosen, each walker's initial state has non-zero amplitudes only in the initial subspace. We primarily study the ATP averaged over all possible initial states from a given initial subspace, which due to linearity can be expressed as $\overline{q}= q(\overline{\rho})$ with $\overline{\rho}$ being the maximally mixed state on the initial subspace. For convenience of the argument we place the tube vertically, with the walker initiated at the bottom and the sink at the top. In order to obtain a representative picture of quantum transport on nanotubes we use two variants of the sink and initial subspaces. In the "one-vertex"~variant (Fig. \ref{fig:tubes_sink_init} (b)) the subspace coincides with the vertex subspace of one chosen end vertex, e.g. $\mathscr{H}_i=\mathscr{H}_{v_2}=\mathrm{span}(\ket{a_2},\ket{b_1},\ket{\alpha})$ in Fig. \ref{fig:structure_and_state}. The second is the "loops"~variant (Fig. \ref{fig:tubes_sink_init} (a)), where the subspace is formed by all loop states at one end of the tube, e.g. $\mathscr{H}_i=\mathrm{span}(\ket{\alpha},\ket{\beta},\ldots)$ in Fig. \ref{fig:structure_and_state}. Thus we consider that the walker can enter (leave) the tube either at one chosen bottom (top) vertex or via the bottom (top) loops. These two variants define four different regimes of transport, which we refer to as vertex-to-vertex, vertex-to-loops, loops-to-vertex, and loops-to-loops transport.
Let us analyze how the geometric parameters of nanotubes and the different transport regimes affect the ATP behavior. This requires constructing a basis of sr-trapped states for each investigated setting.
To proceed, we exploit a general recipe given in \citep{theory}, which allows one to construct a basis of trapped states for the reflecting Grover percolated coined quantum walk on an arbitrary simple planar 3-regular state graph.
All trapped states correspond to the eigenvalue $-1$ and they form a subspace of dimension $N = 2\#V-\#E$. A basis of this subspace can be constructed from the four types of trapped states shown in Fig. \ref{fig:trapped_construction}. For a detailed description of these states see \citep{theory}.
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/trapped_construction.png}
\caption{The four types of trapped eigenstates: (a) A-type state on an even-edged face, (b) B-type state connecting two odd-edged faces by a connecting path, (c) C-type state connecting two loops by a path and (d) D-type state connecting an odd-edged face and a loop by a path. Dashed lines represent continuation of the graph where all vector elements of the given trapped state are zeros.}
\label{fig:trapped_construction}
\end{figure}
The construction of the basis relies on faces, which are defined for planar graphs. A planar embedding of our tube can be obtained by first transforming it from a cylinder to a truncated cone by sufficiently extending the top end and then projecting it in the axial direction into the plane. The former top end now forms the so-called outer face - the rest of the plane outside of the graph. Further, there are all the hexagonal faces and the last face originating from the bottom end of the tube. The "bottom"~and "top"~faces have $4n$ and $2n$ edges in $(n,n)$ and $(n,0)$ tubes respectively and the number is even also for all other chiralities \footnote{ The base vectors shown in Fig. \ref{fig:chirality} always connect vertices separated by two edges. Since the lattice is hexagonal, also any other path connecting the origin and the site given by the chirality vector consists of an even number of edges. (Always, $k$ edges can be replaced by $6-k$ others in the path.) As the bottom and top faces originate from such paths, they have an even number of edges regardless of the particular chirality vector.}. We find that all faces of the planar embedding are even-edged. Therefore, to construct a basis of trapped states we need only A-type and C-type states here.
According to the recipe we should first include in the basis one A-type state for every face of our graph (except the outer face) and add C-type states connecting one chosen and fixed loop to all the other loops by arbitrary connecting paths. Nevertheless, in our particular case we can create a more convenient basis using different linear combinations of C-type states. We use C1-type states as shown in Fig. \ref{fig:trapped_states_both} (b), connecting all pairs of closest loops on the ends of the tube except one on every end. Finally we add one C2-type "connecting state". One can see that for a tube with $n$ loops on every end we just replace $2n-1$ trapped states with the same number ($(n-1)+(n-1)+1$) of others, which are linear combinations of the original ones. Note that the two left-out C1-states and the loops in the connecting state can be chosen arbitrarily. When we choose trapped states we try to avoid their overlap with the sink as much as possible (for the one-vertex sink). This simplifies the subsequent selection of sr-trapped states and their basis, which must have zero overlap with the sink subspace as detailed in \citep{theory}: we can simply remove the trapped states overlapping with the sink subspace.
The obtained basis is typically non-orthogonal and has to be orthonormalized (possibly numerically).
\begin{figure}
\centering
\includegraphics[width=190 pt]{img/trapped_states_both.png}
\caption{The three types of trapped states needed for PCQWs on nanotube structures: (a) A-type states on even-edged faces (blue), (b) C1-type "short path"~state (red) and C2-type "connecting path"~state (green), and an additional A'-type eigenstate for non-percolated CQWs (purple) also in (b) all shown on a $(6,0)$ nanotube structure with two length segments.}
\label{fig:trapped_states_both}
\end{figure}
The trapping effect of A-type states is particularly well illustrated in the vertex-to-loops and loops-to-loops transport regimes, in which the C2-type state is not sr-trapped. The necessary orthonormalization of the A-type sr-trapped states can make all of them overlap with the initial state. Thus, if we extend the tube, the ATP is pushed down slightly by new A-type trapped states. Nevertheless, the magnitude of this effect drops exponentially and the ATP becomes constant, as seen in Fig. \ref{fig:perc_avg_sloops_ione} and Fig. \ref{fig:transport_decrease}.
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/perc_avg_sloops_ione.png}
\caption{The average ATP for the PCQW in the vertex-to-loops transport regime for different chiralities and lengths of the tube.}
\label{fig:perc_avg_sloops_ione}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/perc_avg_sloops_iloops.png}
\caption{The average ATP for the PCQW in the loops-to-loops transport regime for different chiralities and lengths of the tube.}
\label{fig:transport_decrease}
\end{figure}
On the other hand, in the vertex-to-vertex and loops-to-vertex transport regimes the C2-type state is sr-trapped and we may observe that the ATP actually increases with extension of the tube, see Fig. \ref{fig:perc_avg_sone_ione} and Fig. \ref{fig:transport_increase}. Surprisingly, the walker is more likely to traverse a longer tube than a shorter one. The explanation is surprisingly simple too. An extension of the tube stretches the C2-type state, increases its number of vector elements and, due to normalization, its overlap with the initial subspace decreases. The effect was already reported for CQWs and PCQWs on the much simpler geometry of the ladder graph \cite{transport_effects}. Here we show that the effect can be observed for the physically relevant, though more complex, structures of carbon nanotubes. Please note that in Fig. \ref{fig:perc_avg_sone_ione} and Fig. \ref{fig:transport_increase} we depict the averaged ATP. In Fig. \ref{fig:strong_trapping} we present the same effect, here with a significantly higher increase of the ATP, for numerically obtained initial states maximizing the ATP for each depicted setting. Moreover, Fig. \ref{fig:strong_trapping} also demonstrates another interesting effect similar to the so-called strong trapping effect \cite{Kollar2015}. Indeed, the presence of the C2-type trapped state in the loops-to-vertex transport regime excludes the existence of an initial state exhibiting complete transport. Yet, as the C2-type trapped state recedes, the ATP approaches one for the maximal-transport states.
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/perc_avg_sone_ione.png}
\caption{The average ATP for PCQW in the vertex-to-vertex transport regime for different chiralities and lengths of the tube.}
\label{fig:perc_avg_sone_ione}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/perc_avg_sone_iloops.png}
\caption{The average ATP for PCQW in the loops-to-vertex transport regime for different chiralities and lengths of the tube.}
\label{fig:transport_increase}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/strong_trapping.png}
\caption{The ATP for PCQW in the loops-to-vertex transport regimes numerically maximized for each setting of length and chirality.}
\label{fig:strong_trapping}
\end{figure}
In the following we focus on geometric parameters of tubes, the chirality and the diameter, whose impact on transport properties of tubes has not been discussed yet. From all figures \ref{fig:perc_avg_sloops_ione}, \ref{fig:transport_decrease}, \ref{fig:perc_avg_sone_ione} and \ref{fig:transport_increase} one can see a clear quantitative splitting of the average ATP for tubes with chiralities $(n,0)$ and $(n,n)$.
Its explanation arises from the manner in which the chirality entirely controls the structure of C1-type trapped states and weakly also the structure of A-type trapped states. In particular, all the C1-type states in $(n,0)$ tubes have six alternating elements $1,-1$ (see Fig. \ref{fig:trapped_states_both}) and therefore are normalized by $\frac{1}{\sqrt{6}}$, whereas half of the C1-type states in $(n,n)$ tubes have four and half of them have eight alternating elements $1,-1$, properly normalized. Based on that we can roughly estimate the difference between the averaged ATP of both chiralities. As an example we provide a reliable estimate for the loops-to-loops transport regime, for which the loops initial subspace has no overlap with A-type trapped states. Indeed, while for chirality $(2n,0)$ the averaged ATP can be estimated as $q_1 \approx 1-2n \cdot \frac{1}{2n}\left(\frac{1}{6} + \frac{1}{6}\right) = \frac{2}{3}$, for chirality $(n,n)$ we obtain the estimate $q_2\approx 1-2n \cdot \frac{1}{2n}\left(\frac{1}{4} + \frac{1}{8}\right) = \frac{5}{8}$. In words, all $2n$ loops contribute with the same overlap between the corresponding component of the maximally mixed state and the two normalized C1-type trapped states attached to one loop.
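To spell out the arithmetic behind this estimate: a C1-type state with $2k$ alternating elements $\pm 1$ carries amplitude $1/\sqrt{2k}$ on each of its loops, so a single loop state has squared overlap $1/(2k)$ with it, while the maximally mixed state on the loops subspace weights every loop state by $1/(2n)$. Writing $p_1$, $p_2$ for the corresponding trapping probabilities, for the $(2n,0)$ tube each loop supports two six-element C1-type states, hence
\begin{align*}
p_1 \;\approx\; 2n\cdot\frac{1}{2n}\left(\frac{1}{6}+\frac{1}{6}\right)=\frac{1}{3},
\qquad q_1 = 1-p_1\approx\frac{2}{3},
\end{align*}
and analogously $p_2\approx\frac{1}{4}+\frac{1}{8}=\frac{3}{8}$, $q_2\approx\frac{5}{8}$, for the $(n,n)$ tube.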
From figures \ref{fig:perc_avg_sloops_ione}, \ref{fig:transport_decrease}, \ref{fig:perc_avg_sone_ione}, and \ref{fig:transport_increase}, it is also apparent that the average ATP slightly differs for different diameters of the tube, and better transport is surprisingly achieved for thinner tubes. The effect is present in all studied transport regimes. It is due to the dominant feature that more trapped states are added with increasing diameter, which results in a higher trapping probability. However, this effect diminishes for large diameters, as these new trapped states have progressively very low overlaps with the loops and vertex initial subspaces.
Let us now turn our attention to (non-percolated) CQWs. We again stress that any trapped state of a PCQW is a trapped state of the corresponding CQW. However, CQWs can have, in general, some additional trapped states. We briefly discuss their structure. First, for every A-type state in Fig. \ref{fig:trapped_states_both} (a) there is one more A'-type trapped state of the non-percolated walk, corresponding to the eigenvalue +1, whose two vector elements on the same undirected edge take the values +1 and -1, as shown in Fig. \ref{fig:trapped_states_both} (b). Second, for chiralities $(2n,0)$ there are also four additional trapped states for each of the eigenvalues $\lambda = (1-i\sqrt{8})/3$ and $\overline{\lambda}$. Due to their degeneracy, we can choose their orthogonal basis in such a way that only the trapped state depicted in Fig. \ref{fig:trapped_iloops_sloops} and its conjugated state have nonzero overlap with the loops initial subspace. We denote both of them as bottom trapped states. Third, there are also additional trapped states which appear only for particular combinations of length and chirality of the tube. Their properties are not well understood yet. However, based on an extensive numerical analysis we conjecture that these trapped states always have non-zero overlap with the loops sink. It appears that in the vertex-to-loops and loops-to-loops regimes these states are not sr-trapped and do not modify the ATP.
\begin{figure}
\centering
\includegraphics[width=190 pt]{img/trapped_iloops_sloops.png}
\caption{An additional bottom trapped state in non-percolated CQW on $(2n,0)$ tubes corresponding to the eigenvalue $\lambda = (1-i\sqrt{8})/3$, where $x=-2+i\sqrt{8}$ and $y=1+i\sqrt{8}$ illustrated on an unwrapped tube. The state is localised on the bottom ring of vertices.}
\label{fig:trapped_iloops_sloops}
\end{figure}
The presence of these additional trapped states further modifies the transport properties we have found for PCQWs. First of all, the ATP of the CQW is never higher than the ATP of the corresponding PCQW, which is known as environment-assisted quantum transport \cite{Rebentrost2009,assisted_transport}. Moreover, in \citep{Kendon2016} the authors numerically investigated how long it takes, in a CQW on a similar nanotube structure, for the walker initiated in the equal superposition of basis states from the vertex subspace to be fully transported to the sink. Our knowledge of trapped states for PCQWs shows that this is the only initial state orthogonal to all these trapped states. Thus, all other choices of the initial state result in a nonzero trapping probability in PCQWs and thus also in CQWs.
In our case, all these additional trapped states are orthogonal to the trapped states of PCQWs. Thus their contributions to the trapping probability are simply additive. Hence, the loops-to-loops transport regime is the least affected. The only trapped states contributing to the averaged ATP are the two bottom states for chiralities $(2n,0)$. Comparison of Fig. \ref{fig:unpercolated_loops_loops} with Fig. \ref{fig:transport_decrease} demonstrates clearly that the averaged ATP of the CQW and the PCQW is the same except for $(2n,0)$ chiralities. This difference between the averaged ATPs can be evaluated exactly here, analogously to the previous case, using the explicit forms of the bottom states. Their overlap with the maximally mixed state on the loops initial subspace is the same, and together they produce a difference in the averaged ATPs of $1/(2\cdot (2n))$, which is in perfect agreement with the data in both figures \ref{fig:unpercolated_loops_loops} and \ref{fig:transport_decrease}.
\begin{figure}
\centering
\includegraphics[width=240 pt]{img/unperc_avg_sloops_iloops.png}
\caption{The average ATP for the CQW in the loops-to-loops transport regime for different chiralities and lengths of the tube.}
\label{fig:unpercolated_loops_loops}
\end{figure}
A very similar pattern can be observed in the averaged ATP for the CQW in the vertex-to-loops regime. However, as the vertex initial subspace also has overlap with A'-type trapped states, all the averaged ATPs are slightly shifted. On the other hand, we obtain a quite different picture for the CQW in the loops-to-vertex transport regime, see Fig. \ref{fig:unperc_sone_iloops}. Since the loops initial subspace is orthogonal to the A'-type states, the ATP for the CQW and the PCQW coincides for some lengths of the tube with chiralities $(2n+1,0)$. Sudden drops of the averaged ATP are caused by additional trapped states which appear only for some lengths and chiralities. As expected, for longer tubes the influence of these states on the ATP decreases. In contrast, the averaged ATP for the CQW on the tube with chirality $(4,0)$ does not exhibit oscillations. Due to the overlap of the loops initial subspace with the bottom states, the ATP is reduced, compared to the PCQW, regardless of the length of the tube.
\begin{figure}
\centering
\includegraphics[width=260 pt]{img/unperc_sone_iloops.png}
\caption{(color online) The average ATP for both PCQW (circles) and CQW (squares) with one-vertex type sink and loops type initial state for $(3,0)$ (blue, solid line) and $(4,0)$ (orange, dashed line) tubes. Since we only present $(n,0)$ tubes, the length is measured in the number of basic length segments for clarity.}
\label{fig:unperc_sone_iloops}
\end{figure}
As can be seen from figures \ref{fig:unpercolated_loops_loops} and \ref{fig:unperc_sone_iloops}, for non-percolated CQWs the systematic effects of increasing $n$ and length are usually severely disrupted by the presence of additional trapped states. These mostly have a significant impact on the ATP when they are present for the particular choice of structure parameters. However, we can still observe some common features of the ATP behavior related to the chirality. First, in all transport regimes the highest averaged ATP is obtained for the thinnest tube $(3,0)$. Second, due to the absence of any additional trapped states, CQWs and PCQWs on tubes with the armchair chirality $(n,n)$ have the same ATP in the loops-to-loops and vertex-to-loops transport regimes.
In conclusion, we have explored transport properties of Grover coined quantum walks (CQWs) and Grover percolated coined quantum walks (PCQWs) on carbon nanotubes with the armchair $(n,n)$ and the zig-zag $(n,0)$ chirality. Using a general theory for trapped states of PCQWs on planar graphs, we have constructed a convenient basis allowing us to uncover how individual geometric characteristics of nanotubes and different types of sink affect the set of trapped states. Based on this we analyse the asymptotic transport probability (ATP) as a function of the tube length and chirality for different types of source-to-sink transport regimes. With the analytical insight into the relation between geometric properties of nanotubes and their trapped states, we have found several interesting transport effects. In particular, it is shown for all the studied chiralities that longer tubes can surprisingly be more efficient for excitation transport than shorter ones. In addition, the chirality of the tube is responsible for a quantitative splitting of the average ATP into two main branches, where tubes with the chirality $(n,0)$ show better transport than tubes with the chirality $(n,n)$. We have shown that without significant effort it is possible to provide a reliable estimate for the averaged ATP on tubes with different chiralities, based solely on a few trapped states having the highest overlap with the initial state. The diameter of the tubes further generates a gentle separation in the quantitative behavior of the averaged ATP. Quite generally, thinner tubes exhibit better transport.
A numerical analysis performed for (non-percolated) CQWs has revealed how the partly known structure of additional trapped states of CQWs further modifies the behavior of the ATP in comparison with PCQWs. It remains true that one can achieve better transport by increasing the length of the tube, but for some chiralities this behavior is accompanied by gradually diminishing oscillations. Due to additional trapped states, which appear in CQWs only for some chiralities, the behavior of the averaged ATP splits into more quantitative branches; however, the thinnest tube still exhibits the best transport. For a special transport regime we analytically calculated the difference between the averaged ATPs of PCQWs and CQWs.
Finally, let us point out that we have investigated all phenomena mostly in terms of the averaged ATP. This allows us to make statements about the dominant behavior of the ATP irrespective of the walker's initial state. On the other hand, it also means that for special choices of the initial state these effects can be significantly stronger.
{\it Acknowledgements} JM, JN and IJ acknowledge the financial support
from the Czech Science foundation (GA\v CR) project number 16-09824S, M\v{S}MT No. 8J18DE006, RVO14000, Grant Agency of the Czech Technical University in Prague grant No. SGS19/186/OHK4/3T/14, ``Centre for Advanced Applied Sciences'', Registry No. CZ.02.1.01/0.0/0.0/16\_019/0000778, supported by the Operational Programme Research, Development and Education, co-financed by the European Structural and Investment Funds and the state budget of the Czech Republic. IJ is partially supported from GA\v{C}R 17-00844S.
\begin{thebibliography}{99}
\bibitem{review} S. E. Venegas-Andraca, Quantum Inf. Process. {\bf 11}, 1015 (2012).
\bibitem{Stefanak2008}
M. \v Stefa\v n\'ak, I. Jex, and T. Kiss, Phys. Rev. Lett. {\bf 100}(2), 020501 (2008).
\bibitem{Werner2013}
F. A. Grunbaum, L. Vel\'azquez, A. H. Werner, and R. F. Werner, Commun. Math. Phys. {\bf 320}, 543 (2013).
\bibitem{Tanner2009}
B. Hein and G. Tanner, Phys. Rev. Lett. {\bf 103}, 260501 (2009).
\bibitem{Skoupy2016}
M. \v Stefa\v n\'ak and S. Skoup\'y, Phys. Rev. A {\bf 94}, 022301 (2016).
\bibitem{Aharonov2001}
D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani, in {\em Proc. of the 33rd ACM Symposium on The Theory of Computation, 2001}, (ACM New York, NY, USA, 2001) p. 50.
\bibitem{Magniez}
F. Magniez, A. Nayak, P. Richter, and M. Santha, Algorithmica {\bf 63}(1), 91 (2012).
\bibitem{Asboth2012}
J. K. Asb\'oth, Phys. Rev. B {\bf 86}, 195414 (2012).
\bibitem{Moore2002}
Ch. Moore and A. Russell, in {\em Proc. of the 6th Intl. Workshop on Randomization and Approximation Techniques in
Computer Science, 2002}, edited by J. D. P. Rolim and S. Vadhan, (Cambridge, MA, USA, 2002) p. 164.
\bibitem{Portugal2008}
F. L. Marquezino, R. Portugal, G. Abal, and R. Donangelo, Phys. Rev. A \textbf{77}, 042312 (2008).
\bibitem{Segawa2009}
K. Chisaki, M. Hamada, N. Konno, and E. Segawa, Interdisciplinary Information Sciences {\bf 15}, 423 (2009).
\bibitem{Lyu2015}
Ch. Lyu, L. Yu, and S. Wu, Phys. Rev. A {\bf 92}, 052305 (2015).
\bibitem{Kendon2016}
H. Bougroura, H. Aissaoui, N. Chancellor, and V. Kendon, Phys. Rev. A {\bf 94}, 062331 (2016).
\bibitem{Segava2013}
N. Konno, N. Obata, and E. Segawa, Commun. Math. Phys. {\bf 322}, 667 (2013).
\bibitem{Lara2013}
P. C. S. Lara, R. Portugal, and S. Boettcher, International Journal of Quantum Information {\bf 11}, 1350069 (2013).
\bibitem{Inui2004}
N. Inui, Y. Konishi, and N. Konno, Phys. Rev. A {\bf 69}, 052323 (2004).
\bibitem{inui:psa}
N. Inui and N. Konno, Physica A \textbf{353} 133 (2005).
\bibitem{inui:grover1}
N. Inui, N. Konno and E. Segawa, Phys. Rev. E \textbf{72} 056112 (2005).
\bibitem{miyazaki}
T. Miyazaki, M. Katori, and N. Konno, Phys. Rev. A {\bf 76} 012332 (2007).
\bibitem{watabe}
K. Watabe, N. Kobayashi, M. Katori and N. Konno, Phys. Rev. A {\bf 77} 062331 (2008).
\bibitem{falkner}
S. Falkner and S. Boettcher, Phys. Rev. A {\bf 90} 012307 (2014).
\bibitem{machida}
T. Machida, Quantum Inf. Comput. {\bf 15} 406 (2015).
\bibitem{Fleischhauer2000}
M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. {\bf 84}, 5094 (2000).
\bibitem{Poltl2009}
Ch. P\"{o}ltl, C. Emary, and T. Brandes, Phys. Rev. B {\bf 80}, 115313, (2009).
\bibitem{Creatore2013}
C. Creatore, M. A. Parker, S. Emmott, and A. W. Chin, Phys. Rev. Lett. {\bf 111}, 253601 (2013).
\bibitem{Mendoza2013}
J. J. Mendoza-Arenas, T. Grujic, D. Jaksch, and S. R. Clark, Phys. Rev. B {\bf 87}, 235130 (2013).
\bibitem{Rebentrost2009} P. Rebentrost, M. Mohseni, I. Kassal, S. Lloyd and A. Aspuru-Guzik, New J. Phys. \textbf{11}, 033003 (2009).
\bibitem{Chin2010} A.W. Chin, A. Datta, F. Caruso, S.F. Huelga, and M.B.
Plenio, New J. Phys. {\bf 12}, 065002 (2010).
\bibitem{Caruso2009}
F. Caruso, A. W. Chin, A. Datta, S. F. Huelga, and M. B. Plenio, J. Chem. Phys. {\bf 131}, 105106 (2009).
\bibitem{asymptotic1} B. Koll\'ar, J. Novotn\'y, and I. Jex, Phys. Rev. Lett. {\bf 108}, 230505 (2012).
\bibitem{asymptotic2} B. Koll\'ar, J. Novotn\'y, T. Kiss, and I. Jex, New J. Phys. {\bf 16}, 023002 (2014).
\bibitem{assisted_transport} M. \v Stefa\v n\'ak, J. Novotn\'y, and I. Jex, New J. Phys. \textbf{18}, 023040 (2016).
\bibitem{theory} J. Mare\v s, J. Novotn\'y, and I. Jex, Phys. Rev. A {\bf 99}, 042129 (2019).
\bibitem{transport_effects}
J. Mare\v s, J. Novotn\'y, M. \v Stefa\v n\'ak, and I. Jex, submitted to Phys. Rev. A (2019).
\bibitem{tubegen} TubeGen 3.4 (web-interface, http://turin.nss.udel.edu/research/tubegenonline.html), J. T. Frey and D. J. Doren, University of Delaware, Newark DE, 2011.
\bibitem{Kollar2015}
B. Kollár, T. Kiss, and I. Jex, Phys. Rev. A {\bf 91}, 022308 (2015).
\end{thebibliography}
\end{document}
\begin{document}
\title{\bf Converse Theorems, Functoriality, \vskip -2mm
and Applications to Number Theory }
\thispagestyle{first} \setcounter{page}{119}
\begin{abstract}
\vskip 3mm
There has been a recent coming together of the Converse Theorem
for $\mbox{\upshape GL}_n$ and the Langlands-Shahidi method of controlling the
analytic properties of automorphic $L$-functions which has allowed
us to establish a number of new cases of functoriality, or the
lifting of automorphic forms. In this article we would like to
present the current state of the Converse Theorem and outline the
method one uses to apply the Converse Theorem to obtain liftings.
We will then turn to an exposition of the new liftings and some of
their applications.
\vskip 4.5mm
\noindent {\bf 2000 Mathematics Subject Classification:} 11F70,
22E55.
\noindent {\bf Keywords and Phrases:} Automorphic forms,
$L$-functions, Converse theorems, Functoriality.
\end{abstract}
\vskip 12mm
\section{Introduction} \label{section 1}\setzero
\vskip-5mm \hspace{5mm}
Converse Theorems traditionally have provided a way to characterize
Dirichlet series associated to modular forms in terms
of their analytic properties. Most familiar are the Converse Theorems of
Hecke and Weil. Hecke first proved that $L$-functions associated to
modular forms enjoyed ``nice'' analytic properties and then proved
``Conversely'' that these analytic properties in fact
characterized modular $L$-functions. Weil extended this Converse
Theorem to $L$-functions of modular forms with level.
In their modern formulation, Converse Theorems are stated in terms of
automorphic
representations of $\mbox{\upshape GL}_n({\mathbb A})$ instead of modular
forms. Jacquet, Piatetski-Shapiro, and Shalika have proved that the
$L$-functions associated to automorphic representations of
$\mbox{\upshape GL}_n({\mathbb A})$ have nice analytic properties via integral
representations similar to those of Hecke. The relevant ``nice''
properties are: analytic continuation,
boundedness in vertical strips, and functional equation.
Converse Theorems in this context
invert these integral representations. They give a criterion for an
irreducible admissible representation $\Pi$ of $\mbox{\upshape GL}_n({\mathbb A})$ to be
automorphic and cuspidal in terms of the analytic properties
of Rankin-Selberg convolution
$L$-functions $L(s,\Pi\times\pi')$ of $\Pi$ twisted by cuspidal
representations $\pi'$ of $\mbox{\upshape GL}_m({\mathbb A})$ of smaller rank
groups.
To use Converse Theorems for applications, proving that certain
objects are automorphic, one must be able to show that certain
$L$-functions are ``nice''. However, essentially the only way to show
that an $L$-function is nice is to have it associated to an
automorphic form. Hence the most natural applications of
Converse Theorems are to functoriality, or the lifting of automorphic
forms, to $\mbox{\upshape GL}_n$. More
explicit number theoretic applications then come as consequences of
these liftings.
Recently there have been several applications of Converse Theorems to
establishing functorialities. These have been possible thanks to the
recent advances in the Langlands-Shahidi method of analysing the
analytic properties of general automorphic $L$-functions, due to
Shahidi and his collaborators \cite{S3}.
By combining our Converse Theorems with their control of the
analytic properties of $L$-functions many new examples of functorial
liftings to $\mbox{\upshape GL}_n$ have been established. These are described in
Section 4 below. As one number theoretic consequence of these liftings
Kim and Shahidi have been able to establish the best general estimates over a
number field towards the Ramanujan-Selberg conjectures for $\mbox{\upshape GL}_2$,
which in turn have already had other applications.
\section{Converse Theorems for \boldmath$\mbox{\upshape GL}_n$} \label{section 2} \setzero
\vskip-5mm \hspace{5mm}
Let $k$ be a global field, ${\mathbb A}$ its adele ring, and $\psi$ a fixed
non-trivial (continuous) additive character of ${\mathbb A}$ which is trivial
on $k$. We will take $n\geq 3$ to be an integer.
To state these Converse Theorems, we begin with an irreducible admissible
representation $\Pi$ of $\mbox{\upshape GL}_n({\mathbb A})$.
It has a decomposition
$\Pi=\otimes'\Pi_v$, where $\Pi_v$ is an irreducible admissible
representation of $\mbox{\upshape GL}_n(k_v)$. By the local theory of Jacquet,
Piatetski-Shapiro, and Shalika \cite{JPSS,JS} to each $\Pi_v$ is associated a
local $L$-function $L(s,\Pi_v)$ and a local $\varepsilon$-factor
$\varepsilon(s,\Pi_v,\psi_v)$. Hence formally we can form
$$
L(s,\Pi)=\prod L(s,\Pi_v) \quad\quad\text{ and }\quad\quad
\varepsilon(s,\Pi,\psi)=\prod \varepsilon(s,\Pi_v,\psi_v).
$$
We will always assume the following two things about $\Pi$:
\begin{enumerate}
\item[(1)] $L(s,\Pi)$ converges in some half plane $Re(s)>>0$,
\item[(2)] the central character $\omega_\Pi$ of $\Pi$ is automorphic, that
is, invariant under $k^\times$.
\end{enumerate}
Under these assumptions, $\varepsilon(s,\Pi,\psi)=\varepsilon(s,\Pi)$
is independent of our choice of $\psi$ \cite{CPS1}.
As in Weil's case, our Converse Theorems will involve twists but now
by cuspidal automorphic representations of $\mbox{\upshape GL}_m({\mathbb A})$ for certain
$m$. For convenience, let us set $\mathcal A(m)$ to be the set of automorphic
representations of $\mbox{\upshape GL}_m({\mathbb A})$, $\mathcal A_0(m)$ the set of (irreducible)
cuspidal automorphic representations of $\mbox{\upshape GL}_m({\mathbb A})$, and
$ \mathcal
T(m)=\bigcup_{d=1}^m \mathcal A_0(d)$. If $S$ is a finite set of places,
we will let $\mathcal T^S(m)$ denote the subset of representations
$\pi\in \mathcal T(m)$ with local components $\pi_v$ unramified at all places
$v\in S$ and let $\mathcal T_S(m)$ denote those $\pi$ which are
unramified for all $v\notin S$.
Let $\pi'=\otimes'\pi'_v$ be a cuspidal
representation of $\mbox{\upshape GL}_m({\mathbb A})$ with $m<n$. Then again we can formally
define
$$
L(s,\Pi\times \pi')=\prod L(s,\Pi_v\times \pi'_v) \quad\quad\text{ and
}\quad\quad
\varepsilon(s,\Pi\times \pi')=\prod
\varepsilon(s,\Pi_v\times \pi'_v,\psi_v)
$$
since the local factors make sense whether $\Pi$ is automorphic
or not. A consequence of (1) and (2) above and the cuspidality of ${\pi'}$ is
that both $L(s,\Pi\times{\pi'})$ and
$L(s,\widetilde\Pi\times\widetilde{\pi'})$
converge absolutely for $Re(s)>>0$, where $\widetilde\Pi$ and
$\widetilde{\pi'}$ are the contragredient representations, and
that $\varepsilon(s,\Pi\times{\pi'})$ is independent of
the choice of $\psi$.
We say that $L(s,\Pi\times{\pi'})$ is {\it nice} if it satisfies the
same analytic properties it would if $\Pi$ were cuspidal, i.e.,
\begin{enumerate}
\item $L(s,\Pi\times{\pi'})$ and
$L(s,\widetilde\Pi\times\widetilde{\pi'})$ have
continuations to {\it entire} functions of $s$,
\item these entire continuations are {\it bounded in vertical strips} of
finite width,
\item they satisfy the standard {\it functional equation}
\[
L(s,\Pi\times{\pi'})=\varepsilon(s,\Pi\times{\pi'})
L(1-s,\widetilde\Pi\times\widetilde{\pi'}).
\]
\end{enumerate}
The basic converse theorem for $\mbox{\upshape GL}_n$ is the following.
{\bf Theorem 1.} \cite{CPS2} \it Let $\Pi$ be an irreducible admissible
representation of $\mbox{\upshape GL}_n({\mathbb A})$ as above. Let $S$ be
a finite set of finite places. Suppose that
$L(s,\Pi\times{\pi'})$ is nice for all ${\pi'}\in\mathcal T^S(n-2)$.
Then $\Pi$
is quasi-automorphic in the sense that there is an automorphic
representation $\Pi'$ such that $\Pi_v\simeq\Pi'_v$ for all $v\notin S$.
If $S$ is empty, then in fact $\Pi$ is a cuspidal automorphic
representation of $\mbox{\upshape GL}_n({\mathbb A})$.\rm
It is this version of the Converse Theorem that has been used in
conjunction with the Langlands-Shahidi method of controlling
analytic properties of $L$-functions in the new examples of
functoriality explained below.
{\bf Theorem 2.} \cite{CPS1} \it Let $\Pi$ be an irreducible admissible
representation of $\mbox{\upshape GL}_n({\mathbb A})$ as above. Let $S$ be
a non-empty finite set of places, containing $S_\infty$, such that the
class number of the ring $\mathfrak o_S$ of $S$-integers is one. Suppose that
$L(s,\Pi\times{\pi'})$ is nice for all ${\pi'}\in\mathcal T_S(n-1)$. Then $\Pi$
is quasi-automorphic in the sense that there is an automorphic
representation $\Pi'$ such that $\Pi_v\simeq\Pi'_v$ for all $v\in S$
and all $v\notin S$ such that both $\Pi_v$ and $\Pi'_v$ are unramified.\rm
This version of the Converse
Theorem was specifically designed to investigate functoriality in the
cases where one controls the $L$-functions by means of integral
representations where it is expected to be more difficult to control twists.
The proof of Theorem 1 with $S$ empty and $n-2$ replaced by $n-1$
essentially follows the lead of Hecke, Weil, and
Jacquet-Langlands. It is based on the integral representations of
$L$-functions, Fourier expansions, Mellin inversion, and finally a
use of the weak form of Langlands spectral theory. For Theorems 1
and 2 where we have restricted our twists either by ramification
or rank we must impose certain local conditions to compensate for
our limited twists. For Theorem 1 are a finite number of local
conditions and for Theorem 2 an infinite number of local
conditions. We must then work around these by using results on
generation of congruence subgroups and either weak approximation
(Theorem 1) or strong approximation (Theorem 2).
As for our expectations of what form the Converse
Theorem may take in the future, we refer the reader to the last
section of \cite{CPS2}.
\section{Functoriality via the Converse Theorem} \label{section 3}
\setzero\vskip-5mm \hspace{5mm }
In order to apply these theorems, one must be able to control the
analytic properties of the $L$-function. However the only way we
have of controlling global $L$-functions is to associate them to
automorphic forms or representations. A minute's thought will then
convince one that the primary application of these results will be
to the lifting of automorphic representations from some group $\mbox{\upshape H}$
to $\mbox{\upshape GL}_n$.
Suppose that $\mbox{\upshape H}$ is a reductive group over $k$. For simplicity of
exposition we
will assume throughout that $\mbox{\upshape H}$ is split and deal only with the
connected component of its $L$-group, which we will (by abuse of
notation) denote by ${^L\!\mbox{\upshape H}}$ \cite{B}.
Let $\pi=\otimes'\pi_v$ be a cuspidal automorphic
representation of $\mbox{\upshape H}$ and $\rho$ a complex representation of
${^L\!\mbox{\upshape H}}$. To this situation Langlands has associated an $L$-function
$L(s,\pi,\rho)$ \cite{B}. Let us assume that
$\rho$ maps ${^L\!\mbox{\upshape H}}$ to $\mbox{\upshape GL}_n({\mathbb C})$. Then by Langlands' general
Principle of Functoriality to $\pi$ should be
associated an automorphic representation $\Pi$ of $\mbox{\upshape GL}_n({\mathbb A})$
satisfying $L(s,\Pi)=L(s,\pi,\rho)$,
$\varepsilon(s,\Pi)=\varepsilon(s,\pi,\rho)$, with similar equalities locally
and for the twisted versions \cite{B}.
Using the Converse Theorem to
establish such liftings involves three steps:
construction of a candidate lift, verification that the twisted
$L$-functions are ``nice'', and application of the appropriate
Converse Theorem.
1. {\it Construction of a candidate lift}: We construct a
candidate lift $\Pi=\otimes'\Pi_v$ on $\mbox{\upshape GL}_n({\mathbb A})$ place by place. We
can see what $\Pi_v$ should be at almost all places. Since we have
the arithmetic Langlands (or Hecke-Frobenius) parameterization of
representations $\pi_v$ of $\mbox{\upshape H}(k_v)$ for all archimedean places
and those non-archimedean places where the representations are
unramified \cite{B}, we can use these to associate to $\pi_v$ and
the map $\rho_v: {}^L\!\mbox{\upshape H}_v\rightarrow {^L\!\mbox{\upshape H}}\rightarrow \mbox{\upshape GL}_n({\mathbb C})$ a
representation $\Pi_v$ of $\mbox{\upshape GL}_n(k_v)$. This correspondence preserves
local $L$- and $\varepsilon$-factors
\[
L(s,\Pi_v)=L(s,\pi_v,\rho_v) \quad\quad
\text{and}\quad\quad \varepsilon(s,\Pi_v,\psi_v)=\varepsilon(s,\pi_v,\rho_v,\psi_v)
\]
along with the twisted versions.
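As a concrete instance of this correspondence at the unramified places (a standard worked example added here for orientation): if $\pi_v$ is unramified with Satake (Hecke-Frobenius) class $t_v\in{}^L\mbox{\upshape H}$, then $\Pi_v$ is the unramified representation of $\mbox{\upshape GL}_n(k_v)$ whose Satake class is $\rho(t_v)$, so that
$$
L(s,\Pi_v)=\det\bigl(I_n-\rho(t_v)\,q_v^{-s}\bigr)^{-1}=L(s,\pi_v,\rho_v),
$$
where $q_v$ denotes the cardinality of the residue field of $k_v$.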
If $\mbox{\upshape H}$ happens to be $\mbox{\upshape GL}_m$ or a related group then
we in principle know how to associate the representation $\Pi_v$ at
all places now that the local Langlands conjecture has been solved for
$\mbox{\upshape GL}_m$. For other
situations, we may not know what $\Pi_v$ should be at the ramified
places. We will return to this difficulty momentarily and show how one
can work around this with the use of a highly ramified
twist. But for now,
let us assume we can finesse this local problem and arrive at a
global representation $\Pi=\otimes'\Pi_v$ such that
\[
L(s,\Pi)=\prod L(s,\Pi_v)=\prod L(s,\pi_v,\rho_v)=L(s,\pi,\rho)
\]
and similarly $\varepsilon(s,\Pi)=\varepsilon(s,\pi,\rho)$
with similar equalities for the twisted versions.
$\Pi$ should then be the Langlands lifting
of $\pi$ to $\mbox{\upshape GL}_n({\mathbb A})$ associated to $\rho$.
2. {\it Analytic properties of global $L$-functions}: For
simplicity of exposition, let us now assume that $\rho$ is simply
a standard embedding of ${}^L\!\mbox{\upshape H}$ into $\mbox{\upshape GL}_n({\mathbb C})$, such as will be
the case if we consider $\mbox{\upshape H}$ to be a split classical group, so
that $L(s,\pi,\rho)=L(s,\pi)$ is the standard $L$-function of
$\pi$. We have our candidate $\Pi$ for the lift of $\pi$ to $\mbox{\upshape GL}_n$
from above. To be able to assert that the $\Pi$ which we
constructed place by place is automorphic, we will apply a
Converse Theorem. To do so we must control the twisted
$L$-functions $L(s,\Pi\times\pi')=L(s,\pi\times\pi')$ for
$\pi'\in\mathcal T$ with an appropriate twisting set $\mathcal T$
from Theorem 1 or 2. In the examples presented below, we have used
Theorem 1 above and the analytic control of $L(s,\pi\times\pi')$
achieved by the so-called Langlands-Shahidi method of analyzing
the $L$-functions through the Fourier coefficients of Eisenstein
series \cite{S3}. Currently this requires us to take $k$ to be a
number field. The {\it functional equation}
$L(s,\pi\times\pi')=\varepsilon(s,\pi\times\pi')L(1-s,\tilde\pi\times\tilde\pi')$
has been proved in wide generality by Shahidi \cite{S1}. The {\it
boundedness in vertical strips} has been proved in close to the
same generality by Gelbart and Shahidi \cite{GS}. As for the
entire continuation of $L(s,\pi\times\pi')$, a moment's thought
will tell you that one should not always expect a cuspidal
representation of $\mbox{\upshape H}({\mathbb A})$ to necessarily lift to a cuspidal
representation of $\mbox{\upshape GL}_n({\mathbb A})$. Hence it is unreasonable to expect all
$L(s,\pi\times\pi')$ to be entire. We had previously understood
how to work around this difficulty from the point of view of
integral representations by again using a highly ramified twist.
Kim realized that one could also control the entirety of these
twisted $L$-functions in the context of the Langlands-Shahidi
method by using a highly ramified twist. We will return to this
below. Thus in a fairly general context one has that
$L(s,\pi\times\pi')$ is {\it entire} for $\pi'$ in a suitably
modified twisting set $\mathcal T'$.
3. {\it Application of the Converse Theorem}: Once we have that
$L(s,\pi\times\pi')$ is nice for a suitable twisting set $\mathcal
T'$ then from the equalities
\[
L(s,\Pi\times\pi')=L(s,\pi\times\pi') \quad\quad\text{and}\quad\quad
\varepsilon(s,\Pi\times\pi')=\varepsilon(s,\pi\times\pi')
\]
we see that the $L(s,\Pi\times\pi')$ are nice and then we can apply
our Converse Theorems to conclude that $\Pi$ is either cuspidal
automorphic or at least that there is an automorphic $\Pi'$ such that
$\Pi_v=\Pi'_v$ at almost all places. This then effects the (possibly
weak) automorphic lift of $\pi$ to $\Pi$ or $\Pi'$.
4. {\it Highly ramified twists}: As we have indicated above, there are
both local and global problems that can be finessed by an appropriate
use of a highly ramified twist.
This is based on the following simple observation.
{\bf Observation.} \it Let $\Pi$ be as in Theorem 1 or 2. Suppose
that $\eta$ is a fixed character of
$k^\times\backslash{\mathbb A}^\times$. Suppose that
$L(s,\Pi\times{\pi'})$ is nice for all ${\pi'}\in \mathcal T'=
\mathcal T\otimes\eta$,
where $\mathcal T$ is either of the twisting sets of Theorem 1 or
2. Then $\Pi$ is quasi-automorphic as in those theorems. \rm
The only thing to observe
is that if ${\pi'}\in \mathcal T$ then
$L(s,\Pi\times({\pi'}\otimes\eta))=L(s,(\Pi\otimes\eta)\times{\pi'})$ so
that applying the Converse Theorem for $\Pi$ with twisting set $\mathcal
T\otimes\eta $ is equivalent to applying the Converse Theorem for
$\Pi\otimes\eta$ with the twisting set $\mathcal T$. So, by either Theorem
1 or 2, whichever is appropriate, $\Pi\otimes\eta$ is
quasi-automorphic and hence $\Pi$ is as well.
If we now begin with $\pi$ automorphic on $\mbox{\upshape H}({\mathbb A})$, we will take $T$
to be the set of finite places where $\pi_v$ is ramified. For applying
Theorem 1 we want $S=T$ and for Theorem 2 we would want $S\cap
T=\emptyset$. We will now take $\eta$ to be highly ramified at all
places $v\in T$, so that at $v\in T$ our twisting representations are all
locally of the form (unramified principal series)$\otimes$(highly
ramified character).
In order to finesse the lack of knowledge of an appropriate local lift,
we need to know the following two local facts about the local
theory of $L$-functions for $\mbox{\upshape H}$.
{\bf Multiplicativity of \boldmath$\gamma$-factors.} \it If $\pi'_v=\mathop{Ind}(\pi'_{1,v}\otimes\pi'_{2,v})$, with
$\pi'_{i,v}$ an irreducible admissible representation of $\mbox{\upshape GL}_{r_i}(k_v)$, then we have
$\gamma(s,\pi_v\times\pi'_v,\psi_v)=\gamma(s,\pi_v\times\pi'_{1,v},\psi_v) \gamma(s,\pi_v\times\pi'_{2,v},\psi_v).
$ \rm
{\bf Stability of \boldmath$\gamma$-factors.} \it
If $\pi_{1,v}$ and $\pi_{2,v}$ are two irreducible
admissible representations of $\mbox{\upshape H}(k_v)$ with the same central
character, then for every sufficiently
highly ramified character $\eta_v$ of $\mbox{\upshape GL}_1(k_v)$ we have
$\gamma(s,\pi_{1,v}\times\eta_v,\psi_v)=\gamma(s,\pi_{2,v}\times\eta_v,\psi_v).
$\rm
Both of these facts are known for
$\mbox{\upshape GL}_n$, the multiplicativity being found in \cite{JPSS} and the stability
in \cite{JS'}. Multiplicativity in a fairly wide generality useful for
applications has been established by Shahidi \cite{S4}. Stability is
in a more primitive state at the moment, but Shahidi has begun to
establish the necessary results in a general context in \cite{S2}.
To utilize these local results, what one now does is the
following. At the places where $\pi_v$ is ramified, choose $\Pi_v$
to be arbitrary, except that it should have the same central
character as $\pi_v$. This is both to guarantee that the central
character of $\Pi$ is the same as that of $\pi$ and hence
automorphic and to guarantee that the stable forms of the
$\gamma$-factors for $\pi_v$ and $\Pi_v$ agree. Now form
$\Pi=\otimes'\Pi_v$. Choose our character $\eta$ so that at the
places $v\in T$ we have that the $L$- and $\gamma$-factors for
both $\pi_v\otimes\eta_v$ and $\Pi_v\otimes\eta_v$ are in their
stable form and agree. We then twist by $\mathcal T'=\mathcal
T\otimes \eta$ for this {\it fixed} character $\eta$. If
$\pi'\in\mathcal T'$, then for $v\in T$, $\pi'_v$ is of the form
$\pi'_v=\mathop{Ind}(|\ |^{s_1}\otimes\cdots\otimes |\
|^{s_m})\otimes\eta_v$. So at the places $v\in T$, applying both
multiplicativity and stability, we have
\begin{align*}
\gamma(s,\pi_v\times\pi'_v,\psi_v)
&=\prod \gamma(s+s_i,\pi_v\otimes\eta_v,\psi_v)\\
&=\prod \gamma(s+s_i,\Pi_v\otimes\eta_v,\psi_v)
=\gamma(s,\Pi_v\times\pi'_v,\psi_v)
\end{align*}
from which one deduces a similar equality for the $L$- and
$\varepsilon$-factors. From this it will then follow that globally
we will have $L(s,\pi\times\pi')=L(s,\Pi\times\pi')$ for all
$\pi'\in\mathcal T'$ with similar equalities for the
$\varepsilon$-factors. This then completes Step 1.
To complete our use of the highly ramified twist, we must return
to the question of whether $L(s,\pi\times\pi')$ can be made
entire. In analysing $L$-functions via the Langlands-Shahidi
method, the poles of the $L$-function are controlled by those of
an Eisenstein series. In general, the inducing data for the
Eisenstein series must satisfy a type of self-contragredience for
there to be poles. The important observation of Kim is that one
can use a highly ramified twist to destroy this
self-contragredience at one place, which suffices, and hence
eliminate poles. The precise condition will depend on the
individual construction. A more detailed explanation of this can
be found in Shahidi's article \cite{S3}. This completes Step 2
above.
\section{New examples of functoriality} \label{section 4}
\setzero\vskip-5mm \hspace{5mm }
Now take $k$ to be a number field. There has been
much progress recently in utilizing the method described above to
establish global liftings from split groups $\mbox{\upshape H}$ over $k$ to an
appropriate $\mbox{\upshape GL}_N$. Among them are the following.
1. {\it Classical groups}. Take $\mbox{\upshape H}$ to be a split classical group over
$k$, more specifically, the split form of either $\mbox{\upshape SO}_{2n+1}$,
$\mbox{\upshape Sp}_{2n}$, or $\mbox{\upshape SO}_{2n}$. The $L$-groups ${^L\!\mbox{\upshape H}}$
are then $\mbox{\upshape Sp}_{2n}({\mathbb C})$, $\mbox{\upshape SO}_{2n+1}({\mathbb C})$, or $\mbox{\upshape SO}_{2n}({\mathbb C})$ and
there are natural embeddings into the general linear
group $\mbox{\upshape GL}_{2n}({\mathbb C})$, $\mbox{\upshape GL}_{2n+1}({\mathbb C})$, or $\mbox{\upshape GL}_{2n}({\mathbb C})$
respectively.
Associated to each there should be a
lifting of admissible or automorphic representations from
$\mbox{\upshape H}({\mathbb A})$ to the appropriate $\mbox{\upshape GL}_N({\mathbb A})$. The first lifting that
resulted from the combination of the Converse Theorem and the
Langlands-Shahidi method of controlling automorphic $L$-functions was
the weak lift for generic cuspidal representations
from $\mbox{\upshape SO}_{2n+1}$ to $\mbox{\upshape GL}_{2n}$ over a number field $k$ obtained with
Kim and Shahidi \cite{CKPSS}.
We can now extend this to the following result.
{\bf Theorem.} \cite{CKPSS, CKPSS2} \it Let $\mbox{\upshape H}$ be a
split classical group over $k$ as above and $\pi$ a globally generic
cuspidal representation of $\mbox{\upshape H}({\mathbb A})$. Then there exists an automorphic
representation $\Pi$ of $\mbox{\upshape GL}_N({\mathbb A})$ for the appropriate $N$ such that
$\Pi_v$ is the local Langlands lift of $\pi_v$ for all archimedean
places $v$ and almost all non-archimedean places $v$ where $\pi_v$ is
unramified. \rm
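At the places where $\pi_v$ is unramified the lift is completely explicit. For
instance, for $\mbox{\upshape H}=\mbox{\upshape SO}_{2n+1}$ the Satake parameter of $\pi_v$ may be taken of the form
\[
t_v=\mathop{diag}(t_{1,v},\dots,t_{n,v},t_{n,v}^{-1},\dots,t_{1,v}^{-1})\in\mbox{\upshape Sp}_{2n}({\mathbb C}),
\]
and $\Pi_v$ is then the unramified representation of $\mbox{\upshape GL}_{2n}(k_v)$ whose Satake
parameter is the image of $t_v$ under the embedding $\mbox{\upshape Sp}_{2n}({\mathbb C})\hookrightarrow
\mbox{\upshape GL}_{2n}({\mathbb C})$, so that
$L(s,\Pi_v)=\prod_{i=1}^{n}(1-t_{i,v}q_v^{-s})^{-1}(1-t_{i,v}^{-1}q_v^{-s})^{-1}$.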
In these examples the local Langlands correspondence is not
understood at the places $v$ where $\pi_v$ is ramified and so we
must use the technique of multiplicativity and stability of the
local $\gamma$-factors as outlined in Section 3. Multiplicativity
has been established in generality by Shahidi \cite{S4} and in our
first paper \cite{CKPSS} we relied on the stability of
$\gamma$-factors for $\mbox{\upshape SO}_{2n+1}$ from \cite{CPS3}. Recently
Shahidi has established an expression for his local coefficients
as Mellin transforms of Bessel functions in some generality, and
in particular in the cases at hand one can combine this with the
results of \cite{CPS3} to obtain the necessary stability in the
other cases, leading to the extension of the lifting to the other
split classical groups \cite{CKPSS2}.
2. {\it Tensor products}. Let $H=\mbox{\upshape GL}_m\times \mbox{\upshape GL}_n$.
Then ${^L\!\mbox{\upshape H}}=\mbox{\upshape GL}_m({\mathbb C})\times\mbox{\upshape GL}_n({\mathbb C})$ and there is a natural
simple tensor product map from $\mbox{\upshape GL}_m({\mathbb C})\times
\mbox{\upshape GL}_n({\mathbb C})$ to $\mbox{\upshape GL}_{mn}({\mathbb C})$. The associated functoriality from
$\mbox{\upshape GL}_n\times\mbox{\upshape GL}_m$ to $\mbox{\upshape GL}_{mn}$ is the {\it tensor product
lifting}. Now the associated local lifting
is understood in principle since the local Langlands
conjecture for $\mbox{\upshape GL}_n$ has been solved.
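At an unramified place this local lift is transparent on Satake parameters: if
$\pi_{1,v}$ and $\pi_{2,v}$ are unramified with Satake parameters
$\mathop{diag}(\alpha_1,\dots,\alpha_m)$ and $\mathop{diag}(\beta_1,\dots,\beta_n)$, then their local
tensor product lift is the unramified representation of $\mbox{\upshape GL}_{mn}(k_v)$ with
Satake parameter $\mathop{diag}(\alpha_i\beta_j)$, and accordingly
$L(s,\pi_{1,v}\times\pi_{2,v})=\prod_{i,j}(1-\alpha_i\beta_jq_v^{-s})^{-1}$.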
The question of global functoriality has been
recently solved in the cases of $\mbox{\upshape GL}_2\times \mbox{\upshape GL}_2$ to $ \mbox{\upshape GL}_4$
by Ramakrishnan \cite{R} and $\mbox{\upshape GL}_2 \times \mbox{\upshape GL}_3$ to $\mbox{\upshape GL}_6$ by Kim and
Shahidi \cite{KS1, KS2}.
{\bf Theorem.} \cite{R, KS1} \it Let $\pi_1$ be a cuspidal
representation of $\mbox{\upshape GL}_2({\mathbb A})$
and $\pi_2$ a cuspidal representation of $\mbox{\upshape GL}_2({\mathbb A})$ (respectively
$\mbox{\upshape GL}_3({\mathbb A})$). Then there is an automorphic representation $\Pi$ of
$\mbox{\upshape GL}_4({\mathbb A})$ (respectively $\mbox{\upshape GL}_6({\mathbb A})$) such that $\Pi_v$ is the
local tensor product lift of $\pi_{1,v}\times\pi_{2,v}$ at all places
$v$.\rm
In both cases the authors are able to characterize when the lift is cuspidal.
In the case of Ramakrishnan \cite{R}
$\pi=\pi_1\times\pi_2$ with each $\pi_i$ a cuspidal representation
of $\mbox{\upshape GL}_2({\mathbb A})$ and $\Pi$ is to be an automorphic representation of
$\mbox{\upshape GL}_4({\mathbb A})$. To apply the Converse Theorem
Ramakrishnan needs to control the analytic properties of
$L(s,\Pi\times\pi')$ for $\pi'$ cuspidal representations of
$\mbox{\upshape GL}_1({\mathbb A})$ and $\mbox{\upshape GL}_2({\mathbb A})$, that is, the Rankin triple product
$L$-functions $L(s,\Pi\times\pi')=L(s,\pi_1\times\pi_2\times\pi')$.
This he was able to do using a combination of results on the integral
representation for this $L$-function due to
Garrett, Rallis and Piatetski-Shapiro, and Ikeda
and the work of Shahidi on the
Langlands-Shahidi method.
In the case of Kim and Shahidi \cite{KS1, KS2}
$\pi_2$ is a cuspidal representation of $\mbox{\upshape GL}_3({\mathbb A})$.
Since the lifted representation $\Pi$ is to be an
automorphic representation of $\mbox{\upshape GL}_6({\mathbb A})$, to apply the Converse
Theorem they must control the
analytic properties of $L(s,\Pi\times\pi')=L(s,\pi_1\times\pi_2\times\pi')$
where now $\pi'$ must run over appropriate cuspidal representations of
$\mbox{\upshape GL}_m({\mathbb A})$ with $m=1,2,3,4$. The control of these triple products is
an application of the Langlands-Shahidi method of analysing
$L$-functions and involves coefficients of Eisenstein series on $\mbox{\upshape GL}_5$,
$\mbox{\upshape Spin}_{10}$, and simply connected $\mbox{\upshape E}_6$ and $\mbox{\upshape E}_7$ \cite{KS1,S3}. We should
note that even though the complete local lifting theory is understood,
they still use a highly ramified twist to control the global
properties of the $L$-functions involved. They then show that their
lifting is correct at all local places by using a base
change argument.
3. {\it Symmetric powers}. Now take $H=\mbox{\upshape GL}_2$, so ${^L\!\mbox{\upshape H}}=\mbox{\upshape GL}_2({\mathbb C})$.
For each $n\geq 1$ there is the
natural symmetric $n$-th power map $sym^n: \mbox{\upshape GL}_2({\mathbb C})\rightarrow
\mbox{\upshape GL}_{n+1}({\mathbb C})$. The associated functoriality is the
{\it symmetric power lifting} from representations of $\mbox{\upshape GL}_2$ to
representations of $\mbox{\upshape GL}_{n+1}$. Once again
the local symmetric powers liftings are understood in principle
thanks to the solution of the local Langlands conjecture for
$\mbox{\upshape GL}_n$. The global symmetric square lifting,
so $\mbox{\upshape GL}_2$ to $\mbox{\upshape GL}_3$, is an old
theorem of Gelbart and Jacquet. Recently, Kim and Shahidi have
shown the existence of the global symmetric cube lifting from $\mbox{\upshape GL}_2$ to
$\mbox{\upshape GL}_4$ \cite{KS1} and then Kim followed with the global symmetric
fourth power lifting from $\mbox{\upshape GL}_2$ to $\mbox{\upshape GL}_5$ \cite{K}.
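To fix ideas, at an unramified place the local symmetric power lift is given
simply on Satake parameters: if $\pi_v$ has Satake parameter
$\mathop{diag}(\alpha_v,\beta_v)$, then $sym^n\pi_v$ is the unramified representation of
$\mbox{\upshape GL}_{n+1}(k_v)$ with Satake parameter
\[
\mathop{diag}(\alpha_v^{n},\alpha_v^{n-1}\beta_v,\dots,\alpha_v\beta_v^{n-1},\beta_v^{n}).
\]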
{\bf Theorem.} \cite{KS1,K} \it Let $\pi$ be a cuspidal
automorphic representation
of $\mbox{\upshape GL}_2({\mathbb A})$. Then there exists an automorphic representation $\Pi$
of $\mbox{\upshape GL}_4({\mathbb A})$ (resp. $\mbox{\upshape GL}_5({\mathbb A})$) such that $\Pi_v$ is the
local symmetric cube (resp. symmetric fourth power) lifting of
$\pi_v$. \rm
In either case, Kim and Shahidi have been able to give a very
interesting characterization of when the image is in fact cuspidal
\cite{KS1,KS2}.
The original symmetric square lifting of
Gelbart and Jacquet indeed used the converse theorem for $\mbox{\upshape GL}_3$.
For Kim and Shahidi, the symmetric cube
was deduced from the functorial $\mbox{\upshape GL}_2\times \mbox{\upshape GL}_3$ tensor product lift above
\cite{KS1, KS2} and did not require a new use of the Converse Theorem.
For the symmetric fourth power lift, Kim first used the Converse
Theorem to establish the {\it exterior square} lift from $\mbox{\upshape GL}_4$ to $\mbox{\upshape GL}_6$
by the method outlined above and then combined this with the symmetric
cube lift to deduce the symmetric fourth power lift \cite{K}.
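At the level of the dual group, the passage from the exterior square to the
symmetric fourth power rests on the elementary plethysm, valid for any
two-dimensional representation $\rho$,
\[
\Lambda^2(sym^3\rho)\simeq (sym^4\rho)\otimes\det\rho\ \oplus\ (\det\rho)^{3},
\]
so that, writing $\omega_\pi$ for the central character of $\pi$,
$\Lambda^2(sym^3\pi)$ is the isobaric sum of $sym^4\pi\otimes\omega_\pi$ and the character
$\omega_\pi^{3}$, from which $sym^4\pi$ is extracted.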
\section{Applications} \label{section 5}
\setzero\vskip-5mm \hspace{5mm }
These new examples of functoriality have already had many applications.
We will discuss the primary applications in parallel with
our presentation of the examples. $k$ remains a number field.
{\it 1. Classical groups}: The applications so far of the lifting
from classical groups to $\mbox{\upshape GL}_N$ have been ``internal'' to the
theory of automorphic forms. In the case of the lifting from
$\mbox{\upshape SO}_{2n+1}$ to $\mbox{\upshape GL}_{2n}$, once the weak lift is established,
then the theory of Ginzburg, Rallis, and Soudry \cite{GRS} allows
one to show that this weak lift is indeed a strong lift in the
sense that the local components $\Pi_v$ at those $v\in S$ are
completely determined and to completely characterize the image
locally and globally. This will be true for the liftings from the
other classical groups as well. Once one knows that these lifts
are rigid, then one can begin to define and analyse the local
lift for ramified representations by setting the lift of $\pi_v$
to be the $\Pi_v$ determined by the global lift. This is the
content of the papers of Jiang and Soudry \cite{JS1,JS2} for the
case of $\mbox{\upshape H}=\mbox{\upshape SO}_{2n+1}$. In essence they show that this local lift
satisfies the relations on $L$-functions that one expects from
functoriality and then deduce the {\it local Langlands conjecture
for $\mbox{\upshape SO}_{2n+1}$} from that for $\mbox{\upshape GL}_{2n}$. We refer to their
papers for more detail and precise statements.
{\it 2. Tensor product lifts}: Ramakrishnan's original motivation for
establishing the tensor product lifting from $\mbox{\upshape GL}_2\times\mbox{\upshape GL}_2$
to $\mbox{\upshape GL}_4$ was to prove the multiplicity one conjecture for $\mbox{\upshape
SL}_2$ of Langlands and Labesse.
{\bf Theorem.} \cite{R} \it In the spectral decomposition
\[
L^2_{cusp}(\mbox{\upshape SL}_2(k)\backslash \mbox{\upshape
SL}_2(\mathbb A))=\bigoplus \ m_\pi \pi
\]
into irreducible cuspidal representations, the multiplicities $m_\pi$
are at most one.\rm
This was previously known to be true for $\mbox{\upshape GL}_n$ and false for
$\mbox{\upshape SL}_n$ for $n\geq 3$. For further applications, for
example to the Tate conjecture, see \cite{R}.
The primary application of the tensor product lifting from
$\mbox{\upshape GL}_2\times\mbox{\upshape GL}_3$ to $\mbox{\upshape GL}_6$ of Kim and Shahidi was in the
establishment of the symmetric cube lifting and through this the
symmetric fourth power lifting, so
the applications of the symmetric power
liftings outlined below are applications of
this lifting as well.
{\it 3. Symmetric powers}: It was early observed that the existence of
the symmetric power liftings of $\mbox{\upshape GL}_2$ to $\mbox{\upshape GL}_{n+1}$
for all $n$ would imply the
Ramanujan-Petersson and Selberg conjectures for modular forms. Every
time a symmetric power lift is obtained we obtain better bounds
towards Ramanujan. The result which follows from the symmetric third
and fourth power lifts of Kim and Shahidi is the following.
{\bf Theorem.} \cite{KS2} \it Let $\pi$ be a cuspidal
representation of $\mbox{\upshape GL}_2({\mathbb A})$ such that the symmetric cube lift
of $\pi$ is again cuspidal. Let $\mathop{diag}(\alpha_v,\beta_v)$ be the
Satake parameter for an unramified local component. Then
$|\alpha_v|, |\beta_v|<q_v^{1/9}$. If in addition the fourth
symmetric power lift is not cuspidal, the full Ramanujan
conjecture is valid. \rm
The corresponding statement at infinite places, i.e., the analogue of the
Selberg conjecture on the eigenvalues of Maass forms, is also valid \cite{K}.
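The mechanism behind such bounds can be sketched at an unramified place $v$: if
$sym^n\pi$ is cuspidal automorphic on $\mbox{\upshape GL}_{n+1}({\mathbb A})$, then its Satake parameters
at $v$ are the $\alpha_v^{n-i}\beta_v^{i}$, and the bounds of Jacquet and Shalika for
cuspidal representations of $\mbox{\upshape GL}_{n+1}$ give $|\alpha_v|^{n}<q_v^{1/2}$, that is,
$|\alpha_v|<q_v^{1/(2n)}$, and similarly for $\beta_v$. Each new symmetric power lift thus
improves the exponent, and the existence of all of them would force
$|\alpha_v|=|\beta_v|=1$. The sharper exponent $1/9$ in the theorem above requires a
finer analysis than this naive argument.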
Estimates towards Ramanujan are a staple of improving any analytic
number theoretic estimates obtained through spectral methods. Both
the $1/9$ non-archimedean and $1/9$ archimedean estimates towards
Ramanujan above were applied in obtaining the precise form of the
exponent in our recent result with Sarnak breaking the convexity
bound for twisted
Hilbert modular $L$-series in the conductor aspect, which in turn
was the key ingredient in our work on Hilbert's eleventh
problem for ternary quadratic forms. Similar in spirit
are the applications by Kim and Shahidi to the
hyperbolic circle problem and to estimates on sums of
shifted Fourier coefficients \cite{KS1}.
In addition Kim and Shahidi were able to obtain results towards the
Sato-Tate conjecture.
{\bf Theorem.}~\cite{KS2} \it Let $\pi$ be a cuspidal representation of $\mbox{\upshape GL}_2({\mathbb A})$ with trivial central
character. Let $\mathop{diag}(\alpha_v,\beta_v)$ be the Satake parameter for an unramified local component and let
$a_v=\alpha_v+\beta_v$. Assuming $\pi$ satisfies the Ramanujan conjecture, there are sets $T^\pm$ of positive
lower density for which $a_v>2\cos(2\pi/11)-\epsilon$ for all $v\in T^+$ and $a_v<-2\cos(2\pi/11)+\epsilon$ for
all $v\in T^-$. [Note: $2\cos(2\pi/11)=1.68...$]\rm
Kim and Shahidi have other conditional applications of their
liftings such as the conditional existence of Siegel
modular cusp forms of weight
$3$ (assuming Arthur's multiplicity formula for $\mbox{\upshape Sp}_4$). We refer the
reader to \cite{KS1} for details on these applications and others.
\end{document} | math |
Ghulam Nabi Johar was born on 26 June 1934. He wrote about sixty books in Urdu, Kashmiri and English, on a wide range of subjects. In 1971 he wrote a novel, Mujrim, which was the first novel in Kashmiri. He received many awards for his services to the Kashmiri language. He also carried out extensive research on many topics concerning the history of Kashmir and its Sufi saints.
Cause of death
Johar's health had been failing, and he died on 19 June 2018 at Karalpora.
Personal life
He was born in Charar-i-Sharief. He received his early education in his own village and then went to Uttar Pradesh, where he obtained a degree in law from Aligarh University.
Lived for thirty-two years in New Zealand. It is very photogenic, but as a result of lifestyle – initially – and a serious lack of supplies for the serious amateur photographer – only one dingy little shop in downtown Auckland – not much happened on that front. It was only in the latter part of that period that I got back into the swing of things again.
On this page are some pictures, chosen at random, from that period.
package edu.cmu.cs.glacier.tests;
public class Arrays {
int [] intArray;
public Arrays() {
	// Give the backing array one element (size chosen arbitrarily) so that
	// setData() below does not dereference a null field.
	intArray = new int[1];
}
public int[] getData() {
return intArray;
}
public byte[] getByteData() {
return new byte[0];
}
public void setData() {
intArray[0] = 42;
}
}
| code |
/*
* Deadline Scheduling Class (SCHED_DEADLINE)
*
* Earliest Deadline First (EDF) + Constant Bandwidth Server (CBS).
*
 * Tasks that periodically execute their instances for less than their
 * runtime won't miss any of their deadlines.
 * Tasks that are not periodic or sporadic or that try to execute more
 * than their reserved bandwidth will be slowed down (and may potentially
* miss some of their deadlines), and won't affect any other task.
*
* Copyright (C) 2012 Dario Faggioli <raistlin@linux.it>,
* Juri Lelli <juri.lelli@gmail.com>,
* Michael Trimarchi <michael@amarulasolutions.com>,
* Fabio Checconi <fchecconi@gmail.com>
*/
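/*
 * Illustrative example (parameters chosen arbitrarily, purely for
 * orientation): a task that requests, via sched_setattr(),
 * sched_runtime = 10ms, sched_deadline = 30ms and sched_period = 30ms
 * reserves one third of a CPU. The CBS logic below lets it run for at
 * most 10ms of execution time in every 30ms period; if it tries to run
 * longer, it is throttled until its next replenishment instant.
 */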
#include "sched.h"
#include <linux/slab.h>
struct dl_bandwidth def_dl_bandwidth;
static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
{
return container_of(dl_se, struct task_struct, dl);
}
static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq)
{
return container_of(dl_rq, struct rq, dl);
}
static inline struct dl_rq *dl_rq_of_se(struct sched_dl_entity *dl_se)
{
struct task_struct *p = dl_task_of(dl_se);
struct rq *rq = task_rq(p);
return &rq->dl;
}
static inline int on_dl_rq(struct sched_dl_entity *dl_se)
{
return !RB_EMPTY_NODE(&dl_se->rb_node);
}
static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
{
struct sched_dl_entity *dl_se = &p->dl;
return dl_rq->rb_leftmost == &dl_se->rb_node;
}
void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
{
raw_spin_lock_init(&dl_b->dl_runtime_lock);
dl_b->dl_period = period;
dl_b->dl_runtime = runtime;
}
void init_dl_bw(struct dl_bw *dl_b)
{
raw_spin_lock_init(&dl_b->lock);
raw_spin_lock(&def_dl_bandwidth.dl_runtime_lock);
if (global_rt_runtime() == RUNTIME_INF)
dl_b->bw = -1;
else
dl_b->bw = to_ratio(global_rt_period(), global_rt_runtime());
raw_spin_unlock(&def_dl_bandwidth.dl_runtime_lock);
dl_b->total_bw = 0;
}
void init_dl_rq(struct dl_rq *dl_rq, struct rq *rq)
{
dl_rq->rb_root = RB_ROOT;
#ifdef CONFIG_SMP
/* zero means no -deadline tasks */
dl_rq->earliest_dl.curr = dl_rq->earliest_dl.next = 0;
dl_rq->dl_nr_migratory = 0;
dl_rq->overloaded = 0;
dl_rq->pushable_dl_tasks_root = RB_ROOT;
#else
init_dl_bw(&dl_rq->dl_bw);
#endif
}
#ifdef CONFIG_SMP
static inline int dl_overloaded(struct rq *rq)
{
return atomic_read(&rq->rd->dlo_count);
}
static inline void dl_set_overload(struct rq *rq)
{
if (!rq->online)
return;
cpumask_set_cpu(rq->cpu, rq->rd->dlo_mask);
/*
* Must be visible before the overload count is
* set (as in sched_rt.c).
*
* Matched by the barrier in pull_dl_task().
*/
smp_wmb();
atomic_inc(&rq->rd->dlo_count);
}
static inline void dl_clear_overload(struct rq *rq)
{
if (!rq->online)
return;
atomic_dec(&rq->rd->dlo_count);
cpumask_clear_cpu(rq->cpu, rq->rd->dlo_mask);
}
static void update_dl_migration(struct dl_rq *dl_rq)
{
if (dl_rq->dl_nr_migratory && dl_rq->dl_nr_running > 1) {
if (!dl_rq->overloaded) {
dl_set_overload(rq_of_dl_rq(dl_rq));
dl_rq->overloaded = 1;
}
} else if (dl_rq->overloaded) {
dl_clear_overload(rq_of_dl_rq(dl_rq));
dl_rq->overloaded = 0;
}
}
static void inc_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
struct task_struct *p = dl_task_of(dl_se);
if (p->nr_cpus_allowed > 1)
dl_rq->dl_nr_migratory++;
update_dl_migration(dl_rq);
}
static void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
struct task_struct *p = dl_task_of(dl_se);
if (p->nr_cpus_allowed > 1)
dl_rq->dl_nr_migratory--;
update_dl_migration(dl_rq);
}
/*
* The list of pushable -deadline task is not a plist, like in
* sched_rt.c, it is an rb-tree with tasks ordered by deadline.
*/
static void enqueue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
struct dl_rq *dl_rq = &rq->dl;
struct rb_node **link = &dl_rq->pushable_dl_tasks_root.rb_node;
struct rb_node *parent = NULL;
struct task_struct *entry;
int leftmost = 1;
BUG_ON(!RB_EMPTY_NODE(&p->pushable_dl_tasks));
while (*link) {
parent = *link;
entry = rb_entry(parent, struct task_struct,
pushable_dl_tasks);
if (dl_entity_preempt(&p->dl, &entry->dl))
link = &parent->rb_left;
else {
link = &parent->rb_right;
leftmost = 0;
}
}
if (leftmost)
dl_rq->pushable_dl_tasks_leftmost = &p->pushable_dl_tasks;
rb_link_node(&p->pushable_dl_tasks, parent, link);
rb_insert_color(&p->pushable_dl_tasks, &dl_rq->pushable_dl_tasks_root);
}
static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
struct dl_rq *dl_rq = &rq->dl;
if (RB_EMPTY_NODE(&p->pushable_dl_tasks))
return;
if (dl_rq->pushable_dl_tasks_leftmost == &p->pushable_dl_tasks) {
struct rb_node *next_node;
next_node = rb_next(&p->pushable_dl_tasks);
dl_rq->pushable_dl_tasks_leftmost = next_node;
}
rb_erase(&p->pushable_dl_tasks, &dl_rq->pushable_dl_tasks_root);
RB_CLEAR_NODE(&p->pushable_dl_tasks);
}
static inline int has_pushable_dl_tasks(struct rq *rq)
{
return !RB_EMPTY_ROOT(&rq->dl.pushable_dl_tasks_root);
}
static int push_dl_task(struct rq *rq);
static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
{
return dl_task(prev);
}
static inline void set_post_schedule(struct rq *rq)
{
rq->post_schedule = has_pushable_dl_tasks(rq);
}
#else
static inline
void enqueue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
}
static inline
void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
}
static inline
void inc_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
}
static inline
void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
}
static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
{
return false;
}
static inline int pull_dl_task(struct rq *rq)
{
return 0;
}
static inline void set_post_schedule(struct rq *rq)
{
}
#endif /* CONFIG_SMP */
static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags);
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags);
/*
* We are being explicitly informed that a new instance is starting,
* and this means that:
* - the absolute deadline of the entity has to be placed at
* current time + relative deadline;
* - the runtime of the entity has to be set to the maximum value.
*
* The capability of specifying such event is useful whenever a -deadline
* entity wants to (try to!) synchronize its behaviour with the scheduler's
* one, and to (try to!) reconcile itself with its own scheduling
* parameters.
*/
static inline void setup_new_dl_entity(struct sched_dl_entity *dl_se,
struct sched_dl_entity *pi_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);
WARN_ON(!dl_se->dl_new || dl_se->dl_throttled);
/*
* We use the regular wall clock time to set deadlines in the
* future; in fact, we must consider execution overheads (time
* spent on hardirq context, etc.).
*/
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
dl_se->dl_new = 0;
}
/*
* Pure Earliest Deadline First (EDF) scheduling does not deal with the
 * possibility of an entity lasting more than what it declared, and thus
* exhausting its runtime.
*
* Here we are interested in making runtime overrun possible, but we do
 * not want an entity which is misbehaving to affect the scheduling of all
* other entities.
* Therefore, a budgeting strategy called Constant Bandwidth Server (CBS)
* is used, in order to confine each entity within its own bandwidth.
*
* This function deals exactly with that, and ensures that when the runtime
 * of an entity is replenished, its deadline is also postponed. That ensures
* the overrunning entity can't interfere with other entity in the system and
* can't make them miss their deadlines. Reasons why this kind of overruns
 * could happen are, typically, an entity voluntarily trying to overcome its
* runtime, or it just underestimated it during sched_setattr().
*/
static void replenish_dl_entity(struct sched_dl_entity *dl_se,
struct sched_dl_entity *pi_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);
BUG_ON(pi_se->dl_runtime <= 0);
/*
* This could be the case for a !-dl task that is boosted.
* Just go with full inherited parameters.
*/
if (dl_se->dl_deadline == 0) {
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
}
/*
* We keep moving the deadline away until we get some
* available runtime for the entity. This ensures correct
* handling of situations where the runtime overrun is
* arbitrary large.
*/
while (dl_se->runtime <= 0) {
dl_se->deadline += pi_se->dl_period;
dl_se->runtime += pi_se->dl_runtime;
}
/*
* At this point, the deadline really should be "in
* the future" with respect to rq->clock. If it's
* not, we are, for some reason, lagging too much!
	 * Anyway, after having warned userspace about that,
* we still try to keep the things running by
* resetting the deadline and the budget of the
* entity.
*/
if (dl_time_before(dl_se->deadline, rq_clock(rq))) {
		printk_deferred_once("sched: DL replenish lagged too much\n");
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
}
}
/*
* Here we check if --at time t-- an entity (which is probably being
* [re]activated or, in general, enqueued) can use its remaining runtime
* and its current deadline _without_ exceeding the bandwidth it is
* assigned (function returns true if it can't). We are in fact applying
* one of the CBS rules: when a task wakes up, if the residual runtime
* over residual deadline fits within the allocated bandwidth, then we
* can keep the current (absolute) deadline and residual budget without
* disrupting the schedulability of the system. Otherwise, we should
* refill the runtime and set the deadline a period in the future,
* because keeping the current (absolute) deadline of the task would
* result in breaking guarantees promised to other tasks (refer to
 * Documentation/scheduler/sched-deadline.txt for more information).
*
* This function returns true if:
*
* runtime / (deadline - t) > dl_runtime / dl_period ,
*
* IOW we can't recycle current parameters.
*
* Notice that the bandwidth check is done against the period. For
* task with deadline equal to period this is the same of using
* dl_deadline instead of dl_period in the equation above.
*/
static bool dl_entity_overflow(struct sched_dl_entity *dl_se,
struct sched_dl_entity *pi_se, u64 t)
{
u64 left, right;
/*
* left and right are the two sides of the equation above,
* after a bit of shuffling to use multiplications instead
* of divisions.
*
* Note that none of the time values involved in the two
* multiplications are absolute: dl_deadline and dl_runtime
* are the relative deadline and the maximum runtime of each
* instance, runtime is the runtime left for the last instance
* and (deadline - t), since t is rq->clock, is the time left
* to the (absolute) deadline. Even if overflowing the u64 type
* is very unlikely to occur in both cases, here we scale down
* as we want to avoid that risk at all. Scaling down by 10
* means that we reduce granularity to 1us. We are fine with it,
* since this is only a true/false check and, anyway, thinking
* of anything below microseconds resolution is actually fiction
* (but still we want to give the user that illusion >;).
*/
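	/*
	 * Illustrative example (numbers are arbitrary): with runtime = 4ms
	 * left, deadline - t = 10ms, dl_runtime = 10ms and dl_period = 30ms,
	 * we compare 4/10 (= 0.4) against 10/30 (~ 0.33). The residual
	 * demand exceeds the reserved bandwidth, so we return true and the
	 * caller refills the runtime and pushes the deadline into the future.
	 */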
left = (pi_se->dl_period >> DL_SCALE) * (dl_se->runtime >> DL_SCALE);
right = ((dl_se->deadline - t) >> DL_SCALE) *
(pi_se->dl_runtime >> DL_SCALE);
return dl_time_before(right, left);
}
/*
* When a -deadline entity is queued back on the runqueue, its runtime and
* deadline might need updating.
*
* The policy here is that we update the deadline of the entity only if:
* - the current deadline is in the past,
* - using the remaining runtime with the current deadline would make
* the entity exceed its bandwidth.
*/
static void update_dl_entity(struct sched_dl_entity *dl_se,
struct sched_dl_entity *pi_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);
/*
* The arrival of a new instance needs special treatment, i.e.,
* the actual scheduling parameters have to be "renewed".
*/
if (dl_se->dl_new) {
setup_new_dl_entity(dl_se, pi_se);
return;
}
if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
}
}
/*
* If the entity depleted all its runtime, and if we want it to sleep
* while waiting for some new execution time to become available, we
* set the bandwidth enforcement timer to the replenishment instant
* and try to activate it.
*
* Notice that it is important for the caller to know if the timer
* actually started or not (i.e., the replenishment instant is in
* the future or in the past).
*/
static int start_dl_timer(struct sched_dl_entity *dl_se, bool boosted)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);
ktime_t now, act;
ktime_t soft, hard;
unsigned long range;
s64 delta;
if (boosted)
return 0;
/*
* We want the timer to fire at the deadline, but considering
* that it is actually coming from rq->clock and not from
* hrtimer's time base reading.
*/
act = ns_to_ktime(dl_se->deadline);
now = hrtimer_cb_get_time(&dl_se->dl_timer);
delta = ktime_to_ns(now) - rq_clock(rq);
act = ktime_add_ns(act, delta);
/*
* If the expiry time already passed, e.g., because the value
* chosen as the deadline is too small, don't even try to
* start the timer in the past!
*/
if (ktime_us_delta(act, now) < 0)
return 0;
hrtimer_set_expires(&dl_se->dl_timer, act);
soft = hrtimer_get_softexpires(&dl_se->dl_timer);
hard = hrtimer_get_expires(&dl_se->dl_timer);
range = ktime_to_ns(ktime_sub(hard, soft));
__hrtimer_start_range_ns(&dl_se->dl_timer, soft,
range, HRTIMER_MODE_ABS, 0);
return hrtimer_active(&dl_se->dl_timer);
}
/*
* This is the bandwidth enforcement timer callback. If here, we know
* a task is not on its dl_rq, since the fact that the timer was running
* means the task is throttled and needs a runtime replenishment.
*
* However, what we actually do depends on the fact the task is active,
* (it is on its rq) or has been removed from there by a call to
* dequeue_task_dl(). In the former case we must issue the runtime
* replenishment and add the task back to the dl_rq; in the latter, we just
* do nothing but clearing dl_throttled, so that runtime and deadline
* updating (and the queueing back to dl_rq) will be done by the
* next call to enqueue_task_dl().
*/
static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
{
struct sched_dl_entity *dl_se = container_of(timer,
struct sched_dl_entity,
dl_timer);
struct task_struct *p = dl_task_of(dl_se);
struct rq *rq;
again:
rq = task_rq(p);
raw_spin_lock(&rq->lock);
if (rq != task_rq(p)) {
/* Task was moved, retrying. */
raw_spin_unlock(&rq->lock);
goto again;
}
/*
* We need to take care of several possible races here:
*
* - the task might have changed its scheduling policy
* to something different than SCHED_DEADLINE
* - the task might have changed its reservation parameters
* (through sched_setattr())
* - the task might have been boosted by someone else and
* might be in the boosting/deboosting path
*
	 * In all these cases we bail out, as the task is already
* in the runqueue or is going to be enqueued back anyway.
*/
if (!dl_task(p) || dl_se->dl_new ||
dl_se->dl_boosted || !dl_se->dl_throttled)
goto unlock;
sched_clock_tick();
update_rq_clock(rq);
dl_se->dl_throttled = 0;
dl_se->dl_yielded = 0;
if (task_on_rq_queued(p)) {
enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
#ifdef CONFIG_SMP
/*
* Queueing this task back might have overloaded rq,
* check if we need to kick someone away.
*/
if (has_pushable_dl_tasks(rq))
push_dl_task(rq);
#endif
}
unlock:
raw_spin_unlock(&rq->lock);
return HRTIMER_NORESTART;
}
void init_dl_task_timer(struct sched_dl_entity *dl_se)
{
struct hrtimer *timer = &dl_se->dl_timer;
if (hrtimer_active(timer)) {
hrtimer_try_to_cancel(timer);
return;
}
hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
timer->function = dl_task_timer;
}
static
int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
{
return (dl_se->runtime <= 0);
}
extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
/*
* Update the current task's runtime statistics (provided it is still
* a -deadline task and has not been removed from the dl_rq).
*/
static void update_curr_dl(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct sched_dl_entity *dl_se = &curr->dl;
u64 delta_exec;
if (!dl_task(curr) || !on_dl_rq(dl_se))
return;
/*
* Consumed budget is computed considering the time as
* observed by schedulable tasks (excluding time spent
* in hardirq context, etc.). Deadlines are instead
* computed using hard walltime. This seems to be the more
* natural solution, but the full ramifications of this
* approach need further study.
*/
delta_exec = rq_clock_task(rq) - curr->se.exec_start;
if (unlikely((s64)delta_exec <= 0))
return;
schedstat_set(curr->se.statistics.exec_max,
max(curr->se.statistics.exec_max, delta_exec));
curr->se.sum_exec_runtime += delta_exec;
account_group_exec_runtime(curr, delta_exec);
curr->se.exec_start = rq_clock_task(rq);
cpuacct_charge(curr, delta_exec);
sched_rt_avg_update(rq, delta_exec);
dl_se->runtime -= delta_exec;
if (dl_runtime_exceeded(rq, dl_se)) {
__dequeue_task_dl(rq, curr, 0);
if (likely(start_dl_timer(dl_se, curr->dl.dl_boosted)))
dl_se->dl_throttled = 1;
else
enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH);
if (!is_leftmost(curr, &rq->dl))
resched_curr(rq);
}
/*
* Because -- for now -- we share the rt bandwidth, we need to
* account our runtime there too, otherwise actual rt tasks
* would be able to exceed the shared quota.
*
* Account to the root rt group for now.
*
* The solution we're working towards is having the RT groups scheduled
* using deadline servers -- however there's a few nasties to figure
* out before that can happen.
*/
if (rt_bandwidth_enabled()) {
struct rt_rq *rt_rq = &rq->rt;
raw_spin_lock(&rt_rq->rt_runtime_lock);
/*
* We'll let actual RT tasks worry about the overflow here, we
* have our own CBS to keep us inline; only account when RT
* bandwidth is relevant.
*/
if (sched_rt_bandwidth_account(rt_rq))
rt_rq->rt_time += delta_exec;
raw_spin_unlock(&rt_rq->rt_runtime_lock);
}
}
#ifdef CONFIG_SMP
static struct task_struct *pick_next_earliest_dl_task(struct rq *rq, int cpu);
static inline u64 next_deadline(struct rq *rq)
{
struct task_struct *next = pick_next_earliest_dl_task(rq, rq->cpu);
if (next && dl_prio(next->prio))
return next->dl.deadline;
else
return 0;
}
static void inc_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
{
struct rq *rq = rq_of_dl_rq(dl_rq);
if (dl_rq->earliest_dl.curr == 0 ||
dl_time_before(deadline, dl_rq->earliest_dl.curr)) {
/*
* If the dl_rq had no -deadline tasks, or if the new task
* has shorter deadline than the current one on dl_rq, we
* know that the previous earliest becomes our next earliest,
* as the new task becomes the earliest itself.
*/
dl_rq->earliest_dl.next = dl_rq->earliest_dl.curr;
dl_rq->earliest_dl.curr = deadline;
cpudl_set(&rq->rd->cpudl, rq->cpu, deadline, 1);
} else if (dl_rq->earliest_dl.next == 0 ||
dl_time_before(deadline, dl_rq->earliest_dl.next)) {
/*
		 * On the other hand, if the new -deadline task has
		 * a later deadline than the earliest one on dl_rq, but
* it is earlier than the next (if any), we must
* recompute the next-earliest.
*/
dl_rq->earliest_dl.next = next_deadline(rq);
}
}
static void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
{
struct rq *rq = rq_of_dl_rq(dl_rq);
/*
* Since we may have removed our earliest (and/or next earliest)
* task we must recompute them.
*/
if (!dl_rq->dl_nr_running) {
dl_rq->earliest_dl.curr = 0;
dl_rq->earliest_dl.next = 0;
cpudl_set(&rq->rd->cpudl, rq->cpu, 0, 0);
} else {
struct rb_node *leftmost = dl_rq->rb_leftmost;
struct sched_dl_entity *entry;
entry = rb_entry(leftmost, struct sched_dl_entity, rb_node);
dl_rq->earliest_dl.curr = entry->deadline;
dl_rq->earliest_dl.next = next_deadline(rq);
cpudl_set(&rq->rd->cpudl, rq->cpu, entry->deadline, 1);
}
}
#else
static inline void inc_dl_deadline(struct dl_rq *dl_rq, u64 deadline) {}
static inline void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline) {}
#endif /* CONFIG_SMP */
#ifdef CONFIG_SCHED_HMP
static void
inc_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p)
{
inc_cumulative_runnable_avg(&rq->hmp_stats, p);
}
static void
dec_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p)
{
dec_cumulative_runnable_avg(&rq->hmp_stats, p);
}
#ifdef CONFIG_SCHED_QHMP
static void
fixup_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p,
u32 new_task_load)
{
fixup_cumulative_runnable_avg(&rq->hmp_stats, p, new_task_load);
}
#else
static void
fixup_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p,
u32 new_task_load)
{
s64 task_load_delta = (s64)new_task_load - task_load(p);
fixup_cumulative_runnable_avg(&rq->hmp_stats, p, task_load_delta);
}
#endif
#else /* CONFIG_SCHED_HMP */
static inline void
inc_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p) { }
static inline void
dec_hmp_sched_stats_dl(struct rq *rq, struct task_struct *p) { }
#endif /* CONFIG_SCHED_HMP */
static inline
void inc_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
int prio = dl_task_of(dl_se)->prio;
u64 deadline = dl_se->deadline;
WARN_ON(!dl_prio(prio));
dl_rq->dl_nr_running++;
add_nr_running(rq_of_dl_rq(dl_rq), 1);
inc_hmp_sched_stats_dl(rq_of_dl_rq(dl_rq), dl_task_of(dl_se));
inc_dl_deadline(dl_rq, deadline);
inc_dl_migration(dl_se, dl_rq);
}
static inline
void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
int prio = dl_task_of(dl_se)->prio;
WARN_ON(!dl_prio(prio));
WARN_ON(!dl_rq->dl_nr_running);
dl_rq->dl_nr_running--;
sub_nr_running(rq_of_dl_rq(dl_rq), 1);
dec_hmp_sched_stats_dl(rq_of_dl_rq(dl_rq), dl_task_of(dl_se));
dec_dl_deadline(dl_rq, dl_se->deadline);
dec_dl_migration(dl_se, dl_rq);
}
static void __enqueue_dl_entity(struct sched_dl_entity *dl_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rb_node **link = &dl_rq->rb_root.rb_node;
struct rb_node *parent = NULL;
struct sched_dl_entity *entry;
int leftmost = 1;
BUG_ON(!RB_EMPTY_NODE(&dl_se->rb_node));
while (*link) {
parent = *link;
entry = rb_entry(parent, struct sched_dl_entity, rb_node);
if (dl_time_before(dl_se->deadline, entry->deadline))
link = &parent->rb_left;
else {
link = &parent->rb_right;
leftmost = 0;
}
}
if (leftmost)
dl_rq->rb_leftmost = &dl_se->rb_node;
rb_link_node(&dl_se->rb_node, parent, link);
rb_insert_color(&dl_se->rb_node, &dl_rq->rb_root);
inc_dl_tasks(dl_se, dl_rq);
}
static void __dequeue_dl_entity(struct sched_dl_entity *dl_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
if (RB_EMPTY_NODE(&dl_se->rb_node))
return;
if (dl_rq->rb_leftmost == &dl_se->rb_node) {
struct rb_node *next_node;
next_node = rb_next(&dl_se->rb_node);
dl_rq->rb_leftmost = next_node;
}
rb_erase(&dl_se->rb_node, &dl_rq->rb_root);
RB_CLEAR_NODE(&dl_se->rb_node);
dec_dl_tasks(dl_se, dl_rq);
}
static void
enqueue_dl_entity(struct sched_dl_entity *dl_se,
struct sched_dl_entity *pi_se, int flags)
{
BUG_ON(on_dl_rq(dl_se));
/*
* If this is a wakeup or a new instance, the scheduling
* parameters of the task might need updating. Otherwise,
* we want a replenishment of its runtime.
*/
if (dl_se->dl_new || flags & ENQUEUE_WAKEUP)
update_dl_entity(dl_se, pi_se);
else if (flags & ENQUEUE_REPLENISH)
replenish_dl_entity(dl_se, pi_se);
__enqueue_dl_entity(dl_se);
}
static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
{
__dequeue_dl_entity(dl_se);
}
static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
{
struct task_struct *pi_task = rt_mutex_get_top_task(p);
struct sched_dl_entity *pi_se = &p->dl;
/*
* Use the scheduling parameters of the top pi-waiter
* task if we have one and its (relative) deadline is
* smaller than our one... OTW we keep our runtime and
* deadline.
*/
if (pi_task && p->dl.dl_boosted && dl_prio(pi_task->normal_prio)) {
pi_se = &pi_task->dl;
} else if (!dl_prio(p->normal_prio)) {
/*
* Special case in which we have a !SCHED_DEADLINE task
		 * that is going to be deboosted, but exceeds its
* runtime while doing so. No point in replenishing
* it, as it's going to return back to its original
* scheduling class after this.
*/
BUG_ON(!p->dl.dl_boosted || flags != ENQUEUE_REPLENISH);
return;
}
/*
* If p is throttled, we do nothing. In fact, if it exhausted
* its budget it needs a replenishment and, since it now is on
* its rq, the bandwidth timer callback (which clearly has not
* run yet) will take care of this.
*/
if (p->dl.dl_throttled)
return;
enqueue_dl_entity(&p->dl, pi_se, flags);
if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
enqueue_pushable_dl_task(rq, p);
}
static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
{
dequeue_dl_entity(&p->dl);
dequeue_pushable_dl_task(rq, p);
}
static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
{
update_curr_dl(rq);
__dequeue_task_dl(rq, p, flags);
}
/*
* Yield task semantic for -deadline tasks is:
*
* get off from the CPU until our next instance, with
* a new runtime. This is of little use now, since we
* don't have a bandwidth reclaiming mechanism. Anyway,
* bandwidth reclaiming is planned for the future, and
* yield_task_dl will indicate that some spare budget
* is available for other task instances to use it.
*/
static void yield_task_dl(struct rq *rq)
{
struct task_struct *p = rq->curr;
/*
* We make the task go to sleep until its current deadline by
* forcing its runtime to zero. This way, update_curr_dl() stops
* it and the bandwidth timer will wake it up and will give it
* new scheduling parameters (thanks to dl_yielded=1).
*/
if (p->dl.runtime > 0) {
rq->curr->dl.dl_yielded = 1;
p->dl.runtime = 0;
}
update_curr_dl(rq);
}
#ifdef CONFIG_SMP
static int find_later_rq(struct task_struct *task);
static int
select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
{
struct task_struct *curr;
struct rq *rq;
if (sd_flag != SD_BALANCE_WAKE && sd_flag != SD_BALANCE_FORK)
goto out;
rq = cpu_rq(cpu);
rcu_read_lock();
curr = ACCESS_ONCE(rq->curr); /* unlocked access */
/*
* If we are dealing with a -deadline task, we must
* decide where to wake it up.
* If it has a later deadline and the current task
* on this rq can't move (provided the waking task
* can!) we prefer to send it somewhere else. On the
* other hand, if it has a shorter deadline, we
* try to make it stay here, it might be important.
*/
if (unlikely(dl_task(curr)) &&
(curr->nr_cpus_allowed < 2 ||
!dl_entity_preempt(&p->dl, &curr->dl)) &&
(p->nr_cpus_allowed > 1)) {
int target = find_later_rq(p);
if (target != -1)
cpu = target;
}
rcu_read_unlock();
out:
return cpu;
}
static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
{
/*
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
*/
if (rq->curr->nr_cpus_allowed == 1 ||
cpudl_find(&rq->rd->cpudl, rq->curr, NULL) == -1)
return;
/*
* p is migratable, so let's not schedule it and
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
cpudl_find(&rq->rd->cpudl, p, NULL) != -1)
return;
resched_curr(rq);
}
static int pull_dl_task(struct rq *this_rq);
#endif /* CONFIG_SMP */
/*
* Only called when both the current and waking task are -deadline
* tasks.
*/
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags)
{
if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
resched_curr(rq);
return;
}
#ifdef CONFIG_SMP
/*
* In the unlikely case current and p have the same deadline
* let us try to decide what's the best thing to do...
*/
if ((p->dl.deadline == rq->curr->dl.deadline) &&
!test_tsk_need_resched(rq->curr))
check_preempt_equal_dl(rq, p);
#endif /* CONFIG_SMP */
}
#ifdef CONFIG_SCHED_HRTICK
static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
{
hrtick_start(rq, p->dl.runtime);
}
#endif
static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
struct dl_rq *dl_rq)
{
struct rb_node *left = dl_rq->rb_leftmost;
if (!left)
return NULL;
return rb_entry(left, struct sched_dl_entity, rb_node);
}
struct task_struct *pick_next_task_dl(struct rq *rq, struct task_struct *prev)
{
struct sched_dl_entity *dl_se;
struct task_struct *p;
struct dl_rq *dl_rq;
dl_rq = &rq->dl;
if (need_pull_dl_task(rq, prev)) {
pull_dl_task(rq);
/*
* pull_rt_task() can drop (and re-acquire) rq->lock; this
* means a stop task can slip in, in which case we need to
* re-start task selection.
*/
if (rq->stop && task_on_rq_queued(rq->stop))
return RETRY_TASK;
}
/*
* When prev is DL, we may throttle it in put_prev_task().
* So, we update time before we check for dl_nr_running.
*/
if (prev->sched_class == &dl_sched_class)
update_curr_dl(rq);
if (unlikely(!dl_rq->dl_nr_running))
return NULL;
put_prev_task(rq, prev);
dl_se = pick_next_dl_entity(rq, dl_rq);
BUG_ON(!dl_se);
p = dl_task_of(dl_se);
p->se.exec_start = rq_clock_task(rq);
/* Running task will never be pushed. */
dequeue_pushable_dl_task(rq, p);
#ifdef CONFIG_SCHED_HRTICK
if (hrtick_enabled(rq))
start_hrtick_dl(rq, p);
#endif
set_post_schedule(rq);
return p;
}
static void put_prev_task_dl(struct rq *rq, struct task_struct *p)
{
update_curr_dl(rq);
if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
enqueue_pushable_dl_task(rq, p);
}
static void task_tick_dl(struct rq *rq, struct task_struct *p, int queued)
{
update_curr_dl(rq);
#ifdef CONFIG_SCHED_HRTICK
if (hrtick_enabled(rq) && queued && p->dl.runtime > 0)
start_hrtick_dl(rq, p);
#endif
}
static void task_fork_dl(struct task_struct *p)
{
/*
* SCHED_DEADLINE tasks cannot fork and this is achieved through
* sched_fork()
*/
}
static void task_dead_dl(struct task_struct *p)
{
struct hrtimer *timer = &p->dl.dl_timer;
struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
/*
* Since we are TASK_DEAD we won't slip out of the domain!
*/
raw_spin_lock_irq(&dl_b->lock);
dl_b->total_bw -= p->dl.dl_bw;
raw_spin_unlock_irq(&dl_b->lock);
hrtimer_cancel(timer);
}
static void set_curr_task_dl(struct rq *rq)
{
struct task_struct *p = rq->curr;
p->se.exec_start = rq_clock_task(rq);
/* You can't push away the running task */
dequeue_pushable_dl_task(rq, p);
}
#ifdef CONFIG_SMP
/* Only try algorithms three times */
#define DL_MAX_TRIES 3
static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
{
if (!task_running(rq, p) &&
cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
return 1;
return 0;
}
/* Returns the second earliest -deadline task, NULL otherwise */
static struct task_struct *pick_next_earliest_dl_task(struct rq *rq, int cpu)
{
struct rb_node *next_node = rq->dl.rb_leftmost;
struct sched_dl_entity *dl_se;
struct task_struct *p = NULL;
next_node:
next_node = rb_next(next_node);
if (next_node) {
dl_se = rb_entry(next_node, struct sched_dl_entity, rb_node);
p = dl_task_of(dl_se);
if (pick_dl_task(rq, p, cpu))
return p;
goto next_node;
}
return NULL;
}
static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
static int find_later_rq(struct task_struct *task)
{
struct sched_domain *sd;
struct cpumask *later_mask = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
int this_cpu = smp_processor_id();
int best_cpu, cpu = task_cpu(task);
/* Make sure the mask is initialized first */
if (unlikely(!later_mask))
return -1;
if (task->nr_cpus_allowed == 1)
return -1;
/*
* We have to consider system topology and task affinity
* first, then we can look for a suitable cpu.
*/
cpumask_copy(later_mask, task_rq(task)->rd->span);
cpumask_and(later_mask, later_mask, cpu_active_mask);
cpumask_and(later_mask, later_mask, &task->cpus_allowed);
best_cpu = cpudl_find(&task_rq(task)->rd->cpudl,
task, later_mask);
if (best_cpu == -1)
return -1;
/*
* If we are here, some target has been found,
* the most suitable of which is cached in best_cpu.
* This is, among the runqueues where the current tasks
* have later deadlines than the task's one, the rq
* with the latest possible one.
*
* Now we check how well this matches with task's
* affinity and system topology.
*
* The last cpu where the task run is our first
* guess, since it is most likely cache-hot there.
*/
if (cpumask_test_cpu(cpu, later_mask))
return cpu;
/*
* Check if this_cpu is to be skipped (i.e., it is
* not in the mask) or not.
*/
if (!cpumask_test_cpu(this_cpu, later_mask))
this_cpu = -1;
rcu_read_lock();
for_each_domain(cpu, sd) {
if (sd->flags & SD_WAKE_AFFINE) {
/*
* If possible, preempting this_cpu is
* cheaper than migrating.
*/
if (this_cpu != -1 &&
cpumask_test_cpu(this_cpu, sched_domain_span(sd))) {
rcu_read_unlock();
return this_cpu;
}
/*
* Last chance: if best_cpu is valid and is
* in the mask, that becomes our choice.
*/
if (best_cpu < nr_cpu_ids &&
cpumask_test_cpu(best_cpu, sched_domain_span(sd))) {
rcu_read_unlock();
return best_cpu;
}
}
}
rcu_read_unlock();
/*
* At this point, all our guesses failed, we just return
* 'something', and let the caller sort the things out.
*/
if (this_cpu != -1)
return this_cpu;
cpu = cpumask_any(later_mask);
if (cpu < nr_cpu_ids)
return cpu;
return -1;
}
/* Locks the rq it finds */
static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
{
struct rq *later_rq = NULL;
int tries;
int cpu;
for (tries = 0; tries < DL_MAX_TRIES; tries++) {
cpu = find_later_rq(task);
if ((cpu == -1) || (cpu == rq->cpu))
break;
later_rq = cpu_rq(cpu);
/* Retry if something changed. */
if (double_lock_balance(rq, later_rq)) {
if (unlikely(task_rq(task) != rq ||
!cpumask_test_cpu(later_rq->cpu,
&task->cpus_allowed) ||
task_running(rq, task) ||
!task_on_rq_queued(task))) {
double_unlock_balance(rq, later_rq);
later_rq = NULL;
break;
}
}
/*
* If the rq we found has no -deadline task, or
* its earliest one has a later deadline than our
* task, the rq is a good one.
*/
if (!later_rq->dl.dl_nr_running ||
dl_time_before(task->dl.deadline,
later_rq->dl.earliest_dl.curr))
break;
/* Otherwise we try again. */
double_unlock_balance(rq, later_rq);
later_rq = NULL;
}
return later_rq;
}
static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
{
struct task_struct *p;
if (!has_pushable_dl_tasks(rq))
return NULL;
p = rb_entry(rq->dl.pushable_dl_tasks_leftmost,
struct task_struct, pushable_dl_tasks);
BUG_ON(rq->cpu != task_cpu(p));
BUG_ON(task_current(rq, p));
BUG_ON(p->nr_cpus_allowed <= 1);
BUG_ON(!task_on_rq_queued(p));
BUG_ON(!dl_task(p));
return p;
}
/*
* See if the non running -deadline tasks on this rq
* can be sent to some other CPU where they can preempt
* and start executing.
*/
static int push_dl_task(struct rq *rq)
{
struct task_struct *next_task;
struct rq *later_rq;
if (!rq->dl.overloaded)
return 0;
next_task = pick_next_pushable_dl_task(rq);
if (!next_task)
return 0;
retry:
if (unlikely(next_task == rq->curr)) {
WARN_ON(1);
return 0;
}
/*
* If next_task preempts rq->curr, and rq->curr
* can move away, it makes sense to just reschedule
* without going further in pushing next_task.
*/
if (dl_task(rq->curr) &&
dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
rq->curr->nr_cpus_allowed > 1) {
resched_curr(rq);
return 0;
}
/* We might release rq lock */
get_task_struct(next_task);
/* Will lock the rq it'll find */
later_rq = find_lock_later_rq(next_task, rq);
if (!later_rq) {
struct task_struct *task;
/*
* We must check all this again, since
* find_lock_later_rq releases rq->lock and it is
* then possible that next_task has migrated.
*/
task = pick_next_pushable_dl_task(rq);
if (task_cpu(next_task) == rq->cpu && task == next_task) {
/*
* The task is still there. We don't try
* again, some other cpu will pull it when ready.
*/
dequeue_pushable_dl_task(rq, next_task);
goto out;
}
if (!task)
/* No more tasks */
goto out;
put_task_struct(next_task);
next_task = task;
goto retry;
}
deactivate_task(rq, next_task, 0);
next_task->on_rq = TASK_ON_RQ_MIGRATING;
set_task_cpu(next_task, later_rq->cpu);
next_task->on_rq = TASK_ON_RQ_QUEUED;
activate_task(later_rq, next_task, 0);
resched_curr(later_rq);
double_unlock_balance(rq, later_rq);
out:
put_task_struct(next_task);
return 1;
}
static void push_dl_tasks(struct rq *rq)
{
/* Terminates as it moves a -deadline task */
while (push_dl_task(rq))
;
}
static int pull_dl_task(struct rq *this_rq)
{
int this_cpu = this_rq->cpu, ret = 0, cpu;
struct task_struct *p;
struct rq *src_rq;
u64 dmin = LONG_MAX;
if (likely(!dl_overloaded(this_rq)))
return 0;
/*
* Match the barrier from dl_set_overloaded; this guarantees that if we
* see overloaded we must also see the dlo_mask bit.
*/
smp_rmb();
for_each_cpu(cpu, this_rq->rd->dlo_mask) {
if (this_cpu == cpu)
continue;
src_rq = cpu_rq(cpu);
/*
		 * It looks racy, and it is! However, as in sched_rt.c,
* we are fine with this.
*/
if (this_rq->dl.dl_nr_running &&
dl_time_before(this_rq->dl.earliest_dl.curr,
src_rq->dl.earliest_dl.next))
continue;
/* Might drop this_rq->lock */
double_lock_balance(this_rq, src_rq);
/*
* If there are no more pullable tasks on the
* rq, we're done with it.
*/
if (src_rq->dl.dl_nr_running <= 1)
goto skip;
p = pick_next_earliest_dl_task(src_rq, this_cpu);
/*
* We found a task to be pulled if:
* - it preempts our current (if there's one),
* - it will preempt the last one we pulled (if any).
*/
if (p && dl_time_before(p->dl.deadline, dmin) &&
(!this_rq->dl.dl_nr_running ||
dl_time_before(p->dl.deadline,
this_rq->dl.earliest_dl.curr))) {
WARN_ON(p == src_rq->curr);
WARN_ON(!task_on_rq_queued(p));
/*
* Then we pull iff p has actually an earlier
* deadline than the current task of its runqueue.
*/
if (dl_time_before(p->dl.deadline,
src_rq->curr->dl.deadline))
goto skip;
ret = 1;
deactivate_task(src_rq, p, 0);
p->on_rq = TASK_ON_RQ_MIGRATING;
set_task_cpu(p, this_cpu);
p->on_rq = TASK_ON_RQ_QUEUED;
activate_task(this_rq, p, 0);
dmin = p->dl.deadline;
/* Is there any other task even earlier? */
}
skip:
double_unlock_balance(this_rq, src_rq);
}
return ret;
}
static void post_schedule_dl(struct rq *rq)
{
push_dl_tasks(rq);
}
/*
* Since the task is not running and a reschedule is not going to happen
* anytime soon on its runqueue, we try pushing it away now.
*/
static void task_woken_dl(struct rq *rq, struct task_struct *p)
{
if (!task_running(rq, p) &&
!test_tsk_need_resched(rq->curr) &&
has_pushable_dl_tasks(rq) &&
p->nr_cpus_allowed > 1 &&
dl_task(rq->curr) &&
(rq->curr->nr_cpus_allowed < 2 ||
dl_entity_preempt(&rq->curr->dl, &p->dl))) {
push_dl_tasks(rq);
}
}
static void set_cpus_allowed_dl(struct task_struct *p,
const struct cpumask *new_mask)
{
struct rq *rq;
int weight;
BUG_ON(!dl_task(p));
/*
* Update only if the task is actually running (i.e.,
* it is on the rq AND it is not throttled).
*/
if (!on_dl_rq(&p->dl))
return;
weight = cpumask_weight(new_mask);
/*
* Only update if the process changes its state from whether it
* can migrate or not.
*/
if ((p->nr_cpus_allowed > 1) == (weight > 1))
return;
rq = task_rq(p);
/*
* The process used to be able to migrate OR it can now migrate
*/
if (weight <= 1) {
if (!task_current(rq, p))
dequeue_pushable_dl_task(rq, p);
BUG_ON(!rq->dl.dl_nr_migratory);
rq->dl.dl_nr_migratory--;
} else {
if (!task_current(rq, p))
enqueue_pushable_dl_task(rq, p);
rq->dl.dl_nr_migratory++;
}
update_dl_migration(&rq->dl);
}
/* Assumes rq->lock is held */
static void rq_online_dl(struct rq *rq)
{
if (rq->dl.overloaded)
dl_set_overload(rq);
if (rq->dl.dl_nr_running > 0)
cpudl_set(&rq->rd->cpudl, rq->cpu, rq->dl.earliest_dl.curr, 1);
}
/* Assumes rq->lock is held */
static void rq_offline_dl(struct rq *rq)
{
if (rq->dl.overloaded)
dl_clear_overload(rq);
cpudl_set(&rq->rd->cpudl, rq->cpu, 0, 0);
}
void init_sched_dl_class(void)
{
unsigned int i;
for_each_possible_cpu(i)
zalloc_cpumask_var_node(&per_cpu(local_cpu_mask_dl, i),
GFP_KERNEL, cpu_to_node(i));
}
#endif /* CONFIG_SMP */
static void switched_from_dl(struct rq *rq, struct task_struct *p)
{
if (hrtimer_active(&p->dl.dl_timer) && !dl_policy(p->policy))
hrtimer_try_to_cancel(&p->dl.dl_timer);
__dl_clear_params(p);
#ifdef CONFIG_SMP
/*
* Since this might be the only -deadline task on the rq,
* this is the right place to try to pull some other one
* from an overloaded cpu, if any.
*/
if (!rq->dl.dl_nr_running)
pull_dl_task(rq);
#endif
}
/*
* When switching to -deadline, we may overload the rq, then
* we try to push someone off, if possible.
*/
static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
int check_resched = 1;
/*
* If p is throttled, don't consider the possibility
* of preempting rq->curr, the check will be done right
* after its runtime will get replenished.
*/
if (unlikely(p->dl.dl_throttled))
return;
if (task_on_rq_queued(p) && rq->curr != p) {
#ifdef CONFIG_SMP
if (rq->dl.overloaded && push_dl_task(rq) && rq != task_rq(p))
/* Only reschedule if pushing failed */
check_resched = 0;
#endif /* CONFIG_SMP */
if (check_resched) {
if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
}
}
}
/*
* If the scheduling parameters of a -deadline task changed,
* a push or pull operation might be needed.
*/
static void prio_changed_dl(struct rq *rq, struct task_struct *p,
int oldprio)
{
if (task_on_rq_queued(p) || rq->curr == p) {
#ifdef CONFIG_SMP
/*
* This might be too much, but unfortunately
* we don't have the old deadline value, and
* we can't argue if the task is increasing
* or lowering its prio, so...
*/
if (!rq->dl.overloaded)
pull_dl_task(rq);
/*
* If we now have a earlier deadline task than p,
* then reschedule, provided p is still on this
* runqueue.
*/
if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline) &&
rq->curr == p)
resched_curr(rq);
#else
/*
* Again, we don't know if p has a earlier
* or later deadline, so let's blindly set a
* (maybe not needed) rescheduling point.
*/
resched_curr(rq);
#endif /* CONFIG_SMP */
} else
switched_to_dl(rq, p);
}
const struct sched_class dl_sched_class = {
.next = &rt_sched_class,
.enqueue_task = enqueue_task_dl,
.dequeue_task = dequeue_task_dl,
.yield_task = yield_task_dl,
.check_preempt_curr = check_preempt_curr_dl,
.pick_next_task = pick_next_task_dl,
.put_prev_task = put_prev_task_dl,
#ifdef CONFIG_SMP
.select_task_rq = select_task_rq_dl,
.set_cpus_allowed = set_cpus_allowed_dl,
.rq_online = rq_online_dl,
.rq_offline = rq_offline_dl,
.post_schedule = post_schedule_dl,
.task_woken = task_woken_dl,
#endif
.set_curr_task = set_curr_task_dl,
.task_tick = task_tick_dl,
.task_fork = task_fork_dl,
.task_dead = task_dead_dl,
.prio_changed = prio_changed_dl,
.switched_from = switched_from_dl,
.switched_to = switched_to_dl,
.update_curr = update_curr_dl,
#ifdef CONFIG_SCHED_HMP
.inc_hmp_sched_stats = inc_hmp_sched_stats_dl,
.dec_hmp_sched_stats = dec_hmp_sched_stats_dl,
.fixup_hmp_sched_stats = fixup_hmp_sched_stats_dl,
#endif
};
| code |
\begin{document}
\title{Vertex and edge metric dimensions of cacti}
\author{Jelena Sedlar$^{1,3}$,\\Riste \v Skrekovski$^{2,3}$ \\[0.3cm] {\small $^{1}$ \textit{University of Split, Faculty of civil
engineering, architecture and geodesy, Croatia}}\\[0.1cm] {\small $^{2}$ \textit{University of Ljubljana, FMF, 1000 Ljubljana,
Slovenia }}\\[0.1cm] {\small $^{3}$ \textit{Faculty of Information Studies, 8000 Novo
Mesto, Slovenia }}\\[0.1cm] }
\maketitle
\begin{abstract}
In a graph $G,$ a vertex (resp. an edge) metric generator is a set of vertices
$S$ such that any pair of vertices (resp. edges) from $G$ is distinguished by
at least one vertex from $S.$ The cardinality of a smallest vertex (resp.
edge) metric generator is the vertex (resp. edge) metric dimension of $G.$ In
\cite{SedSkreUnicyclic} we determined the vertex (resp. edge) metric dimension
of unicyclic graphs, showing that it takes one of two consecutive values.
Therein, several cycle configurations were introduced, and the vertex (resp.
edge) metric dimension takes the greater of the two consecutive values only if
one of these configurations is present in the graph. In this paper we extend
the result to cactus graphs, i.e. graphs in which all cycles are pairwise edge
disjoint. We do so by defining a unicyclic subgraph of $G$ for every cycle of
$G$ and applying the already introduced approach for unicyclic graphs which
involves the configurations. The obtained results enable us to prove the cycle
rank conjecture for cacti. They also yield a simple upper bound on metric
dimensions of cactus graphs and we conclude the paper by conjecturing that the
same upper bound holds in general.
\end{abstract}
\textit{Keywords:} vertex metric dimension; edge metric dimension; cactus
graphs; zero forcing number; cycle rank conjecture.
\textit{AMS Subject Classification numbers:} 05C12; 05C76
\section{Introduction}
The concept of metric dimension was first studied in the context of navigation
systems in various graphical networks \cite{HararyVertex}. There, a robot
moves from one vertex of the network to another, and some of the vertices are
considered to be landmarks which help the robot establish its position in the
network. The problem of establishing the smallest set of landmarks in a
network then becomes the problem of determining a smallest metric generator in a
graph \cite{KhullerVertex}.
Another interesting application is in chemistry where the structure of a
chemical compound is frequently viewed as a set of functional groups arrayed
on a substructure. This can be modeled as a labeled graph where the vertex and
edge labels specify the atom and bond types, respectively, and the functional
groups and substructure are simply subgraphs of the labeled graph
representation. Determining the pharmacological activities related to the
feature of compounds relies on the investigation of the same functional groups
for two different compounds at the same point \cite{ChartrandVertex}. Various
other aspects of the notion were studied \cite{BuczkowskiVertex, FehrVertex,
KleinVertex, MelterVertex} and a lot of research was dedicated to the
behaviour of metric dimension with respect to various graph operations
\cite{CaceresVertex, ChartrandVertex, SaputroVertex, YeroCoronaVertex}.
In this paper, we consider only simple and connected graphs. By $d(u,v)$ we
denote the distance between a pair of vertices $u$ and $v$ in a graph $G$. A
vertex $s$ from $G$ \emph{distinguishes} or \emph{resolves} a pair of vertices
$u$ and $v$ from $G$ if $d(s,u)\not =d(s,v).$ We say that a set of vertices
$S\subseteq V(G)$ is a \emph{vertex metric generator,} if every pair of
vertices in $G$ is distinguished by at least one vertex from $S.$ The
\emph{vertex metric dimension} of $G,$ denoted by $\mathrm{dim}(G),$ is the
cardinality of a smallest vertex metric generator in $G$. This variant of metric
dimension, as it was introduced first, is sometimes called only metric
dimension and the prefix ``vertex'' is omitted.
In \cite{TratnikEdge} it was noticed that there are graphs in which none of
the smallest metric generators distinguishes all pairs of edges, so this was
the motivation to introduce the notion of the edge metric generator and
dimension, particularly to study the relation between $\mathrm{dim}(G)$ and
$\mathrm{edim}(G).$
The distance $d(u,vw)$ between a vertex $u$ and an edge $vw$ in a graph $G$ is
defined by $d(u,vw)=\min\{d(u,v),d(u,w)\}.$ Recently, two more variants of
metric dimension were introduced, namely the edge metric dimension and the
mixed metric dimension of a graph $G.$ Similarly as above, a vertex $s\in
V(G)$ \emph{distinguishes} two edges $e,f\in E(G)$ if $d(s,e)\neq d(s,f).$ So,
a set $S\subseteq V(G)$ is an \emph{edge metric generator} if every pair of
edges is distinguished by at least one vertex from $S,$ and the cardinality
of a smallest such set is called the \emph{edge metric dimension} and denoted
by $\mathrm{edim}(G).$ Finally, a set $S\subseteq V(G)$ is a \emph{mixed
metric generator} if it distinguishes all pairs from $V(G)\cup E(G),$ and the
\emph{mixed metric dimension}, denoted by $\mathrm{mdim}(G)$, is defined as
the cardinality of a smallest such set in $G$.
This new variant also attracted a lot of attention \cite{GenesonEdge,
HuangApproximationEdge, PeterinEdge, ZhangGaoEdge, ZhuEdge, ZubrilinaEdge},
with one particular direction of research being the study of unicyclic graphs
and the relation of the two dimensions on them \cite{Knor, SedSkreBounds,
SedSkreUnicyclic}. The mixed metric dimension is then a natural next step, as
it unifies these two concepts. It was introduced in \cite{KelencMixed} and
further studied in \cite{SedSkrekMixed, SedSkreTheta}. A wider and systematic
introduction to these three variants of metric dimension can be found in
\cite{KelPhD}.
In this paper we establish the vertex and the edge metric dimensions of cactus
graphs, using the approach from \cite{SedSkreUnicyclic} where the two
dimensions were established for unicyclic graphs. The extension is not
straightforward, as in cactus graphs a problem with indistinguishable pairs of
edges and vertices may arise from connecting two cycles, so an additional
condition will have to be introduced.
\section{Preliminaries}
A \emph{cactus} graph is any graph in which all cycles are pairwise edge
disjoint. Let $G$ be a cactus graph with cycles $C_{1},\ldots,C_{c}$ and let
$g_{i}$ denote the length of a cycle $C_{i}$ in $G.$ For a vertex $v$ of a
cycle $C_{i},$ denote by $T_{v}(C_{i})$ the connected component of
$G-E(C_{i})$ which contains $v.$ If $G$ is a unicyclic graph, then
$T_{v}(C_{i})$ is a tree; otherwise, $T_{v}(C_{i})$ may contain a cycle. When
no confusion can arise, we will use the abbreviated notation $T_{v}.$ A
\emph{thread} hanging at a vertex $v\in V(G)$ of degree $\geq3$ is any path
$u_{1}u_{2}\cdots u_{k}$ such that $u_{1}$ is a leaf, $u_{2},\ldots,u_{k}$ are
of degree $2,$ and $u_{k}$ is connected to $v$ by an edge. The number of
threads hanging at $v$ is denoted by $\ell(v).$
We say that a vertex $v\in V(C_{i})$ is \emph{branch-active} if $\deg(v)\geq4$
or $T_{v}$ contains a vertex of degree $\geq3$ distinct from $v$. We denote
the number of branch-active vertices on $C_{i}$ by $b(C_{i}).$ If a vertex $v$
from a cycle $C_{i}$ is branch-active, then $T_{v}$ contains both a pair of
vertices and a pair of edges which are not distinguished by any vertex outside
$T_{v}$, see Figure \ref{Fig_branching}.
\begin{figure}
\caption{A cactus graph with two cycles. On the cycle $C$, vertices $v$ and $w$
are branch-active, and a pair of vertices is marked in $T_{v}$.}
\label{Fig_branching}
\end{figure}
Now, we will introduce a property called ``branch-resolving'' which a set of
vertices $S\subseteq V(G)$ must possess in order to avoid this problem of
non-distinguished vertices (resp. edges) due to branching. First, a thread hanging
at a vertex $v$ of degree $\geq3$ is $S$\emph{-free} if it does not contain a
vertex from $S.$ Now, a set of vertices $S\subseteq V(G)$ is
\emph{branch-resolving} if at most one $S$-free thread is hanging at every
vertex $v\in V(G)$ of degree $\geq3$. Therefore, for every branch-resolving
set $S$ it holds that $\left\vert S\right\vert \geq L(G)$ where
\[
L(G)=\sum_{v\in V(G),\ell(v)>1}(\ell(v)-1).
\]
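For instance, if the only vertices of $G$ with more than one thread hanging at
them are two vertices $u$ and $w$ with $\ell(u)=3$ and $\ell(w)=2,$ then
\[
L(G)=(\ell(u)-1)+(\ell(w)-1)=2+1=3,
\]
so every branch-resolving set in $G$ contains at least three vertices.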
It is known in the literature \cite{TratnikEdge, KhullerVertex} that for a tree
$T$ which is not a path it holds that $\mathrm{dim}(T)=\mathrm{edim}(T)=L(T).$
Also, given a set of vertices $S\subseteq V(G),$ we say that a vertex $v\in
V(C_{i})$ is $S$\emph{-active} if $T_{v}$ contains a vertex from $S.$ The
number of $S$-active vertices on a cycle $C_{i}$ is denoted by $a_{S}(C_{i}).$
If $a_{S}(C_{i})\geq2$ for every cycle $C_{i}$ in $G,$ then we say the set $S$
is \emph{biactive}. For a biactive branch-resolving set $S$ the following
holds: if a vertex $v$ from a cycle $C_{i}$ is branch-active, then $T_{v}$
contains a vertex with two threads hanging at it or $T_{v}$ contains a cycle;
either way, $T_{v}$ contains a vertex from $S,$ so $v$ is $S$-active.
Therefore, for a biactive branch-resolving set $S$ we have $a_{S}(C_{i})\geq
b(C_{i})$ for every $i$.
\begin{lemma}
\label{Lemma_biactive_branchResolving}Let $G$ be a cactus graph and let
$S\subseteq V(G)$ be a set of vertices in $G.$ If $S$ is a vertex (resp. an
edge) metric generator, then $S$ is a biactive branch-resolving set.
\end{lemma}
\begin{proof}
Suppose to the contrary that a vertex (resp. an edge) metric generator $S$ is
not a biactive branch-resolving set. If $S$ is not branch-resolving, then
there exists a vertex $v$ of degree $\geq3$ and two threads hanging at $v$
which do not contain a vertex from $S.$ Let $v_{1}$ and $v_{2}$ be two
neighbors of $v,$ each belonging to one of these two threads. Then $v_{1}$ and
$v_{2}$ (resp. $v_{1}v$ and $v_{2}v$) are not distinguished by $S,$ a contradiction.
Assume now that $S$ is not biactive. We may assume that $G$ has at least one
cycle, otherwise $G$ is a tree and there is nothing to prove. Notice that the
empty set cannot be a vertex or an edge metric generator in a
cactus graph unless $G$ has at most two vertices, but then $G$ is a tree. Therefore, if $S$ is not
biactive, there must exist a cycle $C_{i}$ with precisely one $S$-active
vertex $v$ and let $v_{1}$ and $v_{2}$ be the two neighbors of $v$ on $C_{i}.$
Then $v_{1}$ and $v_{2}$ (resp. $v_{1}v$ and $v_{2}v$) are not distinguished
by $S,$ a contradiction.
\end{proof}
The above lemma gives us a necessary condition for $S$ to be a vertex (resp.
an edge) metric generator in a cactus graph. In \cite{SedSkreUnicyclic}, a
more elaborate condition for unicyclic graphs was established, which is both
necessary and sufficient. In this paper we will extend that condition to
cactus graphs, but to do so we first need to introduce the following
definitions from \cite{SedSkreUnicyclic}. Let $C_{i}$ be a cycle in a cactus
graph $G$ and let $v_{i},$ $v_{j}$ and $v_{k}$ be three vertices of $C_{i};$
we say that $v_{i},$ $v_{j}$ and $v_{k}$ form a \emph{geodesic triple} on
$C_{i}$ if
\[
d(v_{i},v_{j})+d(v_{j},v_{k})+d(v_{i},v_{k})=|V(C_{i})|.
\]
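For instance, consider a cycle $C=v_{0}v_{1}\cdots v_{5}$ of length six. The
vertices $v_{0},$ $v_{2}$ and $v_{4}$ form a geodesic triple on $C,$ since
\[
d(v_{0},v_{2})+d(v_{2},v_{4})+d(v_{0},v_{4})=2+2+2=6=|V(C)|,
\]
while $v_{0},$ $v_{1}$ and $v_{2}$ do not, as the corresponding sum equals
$1+1+2=4<6.$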
It was shown in \cite{SedSkreBounds} that a biactive branch-resolving set with
a geodesic triple of $S$-active vertices on every cycle is both a vertex and
an edge metric generator. This result is useful for bounding the dimensions
from above. Also, we need the definition of the five graph configurations from
\cite{SedSkreUnicyclic}.
\begin{figure}
\caption{All six graphs shown in this figure are unicyclic graphs with a
biactive branch-resolving set $S$ comprised of vertices $s_{i}$.}
\label{Fig_configurations}
\end{figure}
\begin{definition}
Let $G$ be a cactus graph, $C$ a cycle in $G$ of the length $g$, and $S$ a
biactive branch-resolving set in $G$. We say that $C=v_{0}v_{1}\cdots v_{g-1}$
is \emph{canonically labeled} with respect to $S$ if $v_{0}$ is $S$-active and
$k=\max\{i:v_{i}$ is $S$-active$\}$ is as small as possible.
\end{definition}
Let us now introduce five configurations which a cactus graph can contain with
respect to a biactive branch-resolving set $S.$ All these configurations are
illustrated by Figure \ref{Fig_configurations}.\
\begin{definition}
Let $G$ be a cactus graph, $C$ a canonically labeled cycle in $G$ of the
length $g$, and $S$ a biactive branch-resolving set in $G$. We say that the
cycle $C$ \emph{with respect} to $S$ \emph{contains} configurations:
\begin{description}
\item {$\mathcal{A}$}. If $a_{S}(C)=2$, $g$ is even, and $k=g/2$;
\item {$\mathcal{B}$}. If $k\leq\left\lfloor g/2\right\rfloor -1$ and there is
an $S$-free thread hanging at a vertex $v_{i}$ for some $i\in\lbrack
k,\left\lfloor g/2\right\rfloor -1]\cup\lbrack\left\lceil g/2\right\rceil
+k+1,g-1]\cup\{0\}$;
\item {$\mathcal{C}$}. If $a_{S}(C)=2$, $g$ is even, $k\leq g/2$ and there is
an $S$-free thread of the length $\geq g/2-k$ hanging at a vertex $v_{i}$ for
some $i\in\lbrack0,k]$;
\item $\mathcal{D}$. If $k\leq\left\lceil g/2\right\rceil -1$ and there is an
$S$-free thread hanging at a vertex $v_{i}$ for some $i\in\lbrack
k,\left\lceil g/2\right\rceil -1]\cup\lbrack\left\lfloor g/2\right\rfloor
+k+1,g-1]\cup\{0\}$;
\item {$\mathcal{E}$}. If $a_{S}(C)=2$ and there is an $S$-free thread of the
length $\geq\left\lfloor g/2\right\rfloor -k+1$ hanging at a vertex $v_{i}$
with $i\in\lbrack0,k].$ Moreover, if $g$ is even, an $S$-free thread must be
hanging at the vertex $v_{j}$ with $j=g/2+k-i$.
\end{description}
\end{definition}
\begin{figure}
\caption{A cactus graph $G$ from Example \ref{Example_conf}.}
\label{Fig_configExample}
\end{figure}
Notice that only an even cycle can contain configuration $\mathcal{A}$ or
$\mathcal{C}$. Also, configurations $\mathcal{B}$ and $\mathcal{D}$ are almost
the same; they differ only when $C$ is odd, in which case the index $i$ can take two more
values in $\mathcal{D}$ than in $\mathcal{B}.$ Finally, for configurations
$\mathcal{A}$, $\mathcal{C}$, and $\mathcal{E}$ it holds that $a_{S}(C)=2,$ so
there are only two $S$-active vertices on the cycle $C$ and hence no geodesic
triple of $S$-active vertices. On the other hand, for configurations
$\mathcal{B}$ and $\mathcal{D}$ there might be more than two $S$-active
vertices on the cycle $C,$ but the bounds $k\leq\left\lfloor g/2\right\rfloor
-1$ and $k\leq\left\lceil g/2\right\rceil -1$ again imply there is no geodesic
triple of $S$-active vertices on $C.$ Therefore, we can state the following
observation which is useful for constructing metric generators.
\begin{remark}
\label{Obs_geodTriple}If there is a geodesic triple of $S$-active vertices on
a cycle $C$ of a cactus graph $G,$ then $C$ does not contain any of the
configurations $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$, and
$\mathcal{E}$ with respect to $S.$
\end{remark}
The following result regarding configurations $\mathcal{A}$, $\mathcal{B}$,
$\mathcal{C}$, $\mathcal{D}$, and $\mathcal{E}$ was established for unicyclic
graphs (see Lemmas 6, 7, 13 and 14 from \cite{SedSkreUnicyclic}).
\begin{theorem}
\label{Lemma_configurations}Let $G$ be a unicyclic graph with the cycle $C$
and let $S$ be a biactive branch-resolving set in $G$. The set $S$ is a vertex
(resp. an edge) metric generator if and only if $C$ does not contain any of
configurations $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ (resp.
$\mathcal{A}$, $\mathcal{D}$, and $\mathcal{E}$) with respect to $S.$
\end{theorem}
In this paper we will extend this result to cactus graphs and then use it to
determine the exact value of the vertex and the edge metric dimensions of such
graphs. We first give an example of how this approach with configurations can be
extended to cactus graphs.
\begin{example}
\label{Example_conf}Let $G$ be the cactus graph from Figure
\ref{Fig_configExample}. The graph $G$ contains six cycles and the set
$S=\{s_{1},s_{2},s_{3},s_{4},s_{5}\}$ is a smallest biactive branch-resolving
set in $G$. In the figure the set of $S$-active vertices on each cycle is
marked by a dashed circle. The cycle $C_{1}$ (resp. $C_{2}$, $C_{3}$, $C_{4}$,
$C_{5}$) with respect to $S$ contains configuration $\mathcal{A}$ (resp.
$\mathcal{B}$ and also $\mathcal{D}$, $\mathcal{C}$, $\mathcal{E}$ on odd
cycle, $\mathcal{E}$ on even cycle), so in each of these cycles there is a
pair of vertices and/or edges $x_{i}$ and $x_{i}^{\prime}$ which is not
distinguished by $S.$ The cycle $C_{6}$ does not contain any of the five
configurations as there is a geodesic triple of $S$-active vertices on
$C_{6},$ so all pairs of vertices and all pairs of edges in $C_{6}$ are
distinguished by $S$.
\end{example}
\begin{figure}
\caption{A cactus graph $G$ with three cycles in which the unicyclic region
$G_{i}$ of the cycle $C_{i}$ is indicated.}
\label{Fig_region}
\end{figure}
Besides this approach based on configurations, an additional condition will have to be
introduced in cactus graphs for the situation when two cycles share a vertex.
\section{Metric generators in cacti}
Let $G$ be a cactus graph with cycles $C_{1},\ldots,C_{c}.$ We say that a
vertex $v\in V(G)$ \emph{gravitates} to a cycle $C_{i}$ in $G$ if there is a
path from $v$ to a vertex from $C_{i}$ which does not share any edge nor any
internal vertex with any cycle of $G.$ A \emph{unicyclic region} of the cycle
$C_{i}$ from $G$ is the subgraph $G_{i}$ of $G$ induced by all vertices that
gravitate to $C_{i}$ in $G.$ The notion of unicyclic region of a cactus graph
is illustrated by Figure \ref{Fig_region}.
Notice that each unicyclic region $G_{i}$ is a unicyclic graph with its cycle
being $C_{i}.$ Also, considering the example from Figure \ref{Fig_region}, one
can easily notice that two distinct unicyclic regions need not be vertex
disjoint, as the path connecting vertex $b_{2}$ and the cycle $C_{i}$ belongs
both to $G_{i}$ and $G_{j}.$ Nevertheless, the unicyclic regions together
cover the whole of $G.$ We say that a subgraph $H$ of a graph $G$ is an
\emph{isometric} subgraph if $d_{H}(u,v)=d_{G}(u,v)$ for every pair of
vertices $u,v\in V(H).$ The following observation is obvious.
\begin{remark}
The unicyclic region $G_{i}$ of a cycle $C_{i}$ is an isometric subgraph of
$G.$
\end{remark}
Finally, we say that a vertex $v$ from a unicyclic region $G_{i}$ is a
\emph{boundary} vertex if $v\in V(C_{j})$ for $j\not =i.$ In the example from
Figure \ref{Fig_region}, the boundary vertices of the region $G_{i}$ are
$b_{1}$ and $b_{2}$.
Let $S$ be a biactive branch-resolving set in $G$ and let $G_{i}$ be a
unicyclic region in $G.$ For the set $S$ we define the \emph{regional set}
$S_{i}$ as the set obtained from $S\cap V(G_{i})$ by adding to it all boundary
vertices of $G_{i}$. For example, in Figure \ref{Fig_region} the set
$S_{i}=\{s_{2},b_{1},b_{2}\}$ is the regional set in the region of the cycle
$C_{i}$.
\begin{lemma}
\label{Lemma_regionGenerator}Let $G$ be a cactus graph with $c$ cycles
$C_{1},\ldots,C_{c}$ and let $S\subseteq V(G)$. If $S$ is a vertex (resp.
edge) metric generator in $G,$ then the regional set $S_{i}$ is a vertex
(resp. edge) metric generator in the unicyclic region $G_{i}$ for every
$i\in\{1,\ldots,c\}$.
\end{lemma}
\begin{proof}
Suppose to the contrary that there is a cycle $C_{i}$ in $G$ such that the regional set
$S_{i}$ is not a vertex (resp. an edge) metric generator in the unicyclic
region $G_{i}.$ This implies that there exists a pair of vertices (resp.
edges) $x$ and $x^{\prime}$ in $G_{i}$ which are not distinguished by $S_{i}.$
We will show that $x$ and $x^{\prime}$ are not distinguished by $S$ in $G$
either. Suppose the contrary, i.e. there is a vertex $s\in S$ which
distinguishes $x$ and $x^{\prime}$ in $G.$ If $s\in V(G_{i}),$ then $s\in
S_{i}.$ Since $G_{i}$ is an isometric subgraph of $G,$ then $x$ and
$x^{\prime}$ would be distinguished by $s\in S_{i}$ in $G_{i},$ a
contradiction. Assume therefore that $s\not \in V(G_{i}).$ Notice that the
shortest path from every vertex (resp. edge) in $G_{i}$ to $s$ leads through the
same boundary vertex $b$ of $G_{i}.$ The definition of $S_{i}$ implies $b\in
S_{i},$ so $x$ and $x^{\prime}$ are not distinguished by $b.$ Therefore, we
obtain
\[
d(x,s)=d(x,b)+d(b,s)=d(x^{\prime},b)+d(b,s)=d(x^{\prime},s),
\]
so $x$ and $x^{\prime}$ are not distinguished by $s$ in $G,$ a contradiction.
\end{proof}
Notice that the condition from Lemma \ref{Lemma_regionGenerator} is necessary
for $S$ to be a metric generator, but it is not sufficient as is illustrated
by the graph shown in Figure \ref{Fig_incidence} in which every regional set
$S_{i}$ is a vertex (resp. an edge) metric generator in the corresponding
region $G_{i}$, but there still exists a pair of vertices (resp. edges) which
is not distinguished by $S,$ so $S$ is not a vertex (resp. an edge) metric
generator in $G$.
\begin{figure}
\caption{A cactus graph $G$ with three cycles and a smallest biactive
branch-resolving set $S$.}
\label{Fig_incidence}
\end{figure}
Next, we will introduce notions which are necessary to state a condition which
will be both necessary and sufficient for a biactive branch-resolving set $S$
to be a vertex (resp. an edge) metric generator in a cactus graph $G.$ An
$S$\emph{-path} of the cycle $C_{i}$ is any subpath of $C_{i}$ which contains
all $S$-active vertices on $C_{i}$ and is of minimum possible length. We
denote an $S$\emph{-}path of the cycle $C_{i}$ by $P_{i}$. Notice that the
end-vertices of an $S$-path are always $S$-active, otherwise it would not be
shortest. For example, on the cycle $C_{2}$ in Figure \ref{Fig_incidence}
there are two different paths connecting $S$-active vertices $v$ and $w,$ one
is of the length $3$ and the other of length $5,$ so the shorter one is an
$S$-path. Also, an $S$\emph{-}path $P_{i}$ of a cycle $C_{i}$ may not be
unique, as there may exist several shortest subpaths of $C_{i}$ containing all
$S$-active vertices on $C_{i},$ but if the length of $P_{i}$ satisfies
$\left\vert P_{i}\right\vert \leq\left\lceil g_{i}/2\right\rceil -1$ then
$P_{i}$ is certainly unique and its end-vertices are $v_{0}$ and $v_{k}$ in
the canonical labelling of $C_{i}.$
\begin{definition}
Let $G$ be a cactus graph with cycles $C_{1},\ldots,C_{c}$ and let $S$ be a
biactive branch-resolving set in $G.$ We say that a vertex $v\in V(C_{i})$ is
\emph{vertex-critical} (resp. \emph{edge-critical}) on $C_{i}$ with respect to
$S$ if $v$ is an end-vertex of $P_{i}$ and $\left\vert P_{i}\right\vert
\leq\left\lfloor g_{i}/2\right\rfloor -1$ (resp. $\left\vert P_{i}\right\vert
\leq\left\lceil g_{i}/2\right\rceil -1$).
\end{definition}
Notice that the notion of a vertex-critical and an edge-critical vertex
differs only on odd cycles. We say that two distinct cycles $C_{i}$ and
$C_{j}$ of a cactus graph $G$ are \emph{vertex-critically incident} (resp.
\emph{edge-critically incident}) with respect to $S$ if $C_{i}$ and $C_{j}$
share a vertex $v$ which is vertex-critical (resp. edge-critical) with respect
to $S$ on both $C_{i}$ and $C_{j}$. Notice that on odd cycles the required
length of an $S$-path $P_{i}$ for $v$ to be vertex-critical differs from the
one required for $v$ to be edge-critical, while on even cycles the required
length is the same.
To illustrate this notion, let us consider the cycle $C_{2}$ in the graph from
Figure \ref{Fig_incidence}. Vertices $v$ and $w$ are both vertex-critical and
edge-critical on $C_{2}$ with respect to $S$ from the figure. Vertex $v$
belongs also to $C_{1}$ and it is also both vertex- and edge-critical on
$C_{1}.$ Therefore, cycles $C_{1}$ and $C_{2}$ are both vertex- and
edge-critically incident, the consequence of which is that a pair of vertices
$v_{1}$ and $v_{2}$ which are neighbors of $v$ and a pair of edges $v_{1}v$
and $v_{2}v$ which are incident to $v$ are not distinguished by $S.$ On the
other hand, vertex $w$ belongs also to $C_{3}$ on which it is edge-critical,
but it is not vertex-critical since $P_{3}$ is not short enough. So, $C_{2}$
and $C_{3}$ are edge-critically incident, but not vertex-critically incident.
Consequently, a pair of edges $w_{1}w$ and $w_{2}w$ is not distinguished by
$S,$ but a pair of vertices $w_{1}$ and $w_{2}$ is distinguished by $S.$ We
will show in the following lemma that this holds in general.
\begin{lemma}
\label{Lemma_incidence}Let $G$ be a cactus graph with $c$ cycles $C_{1}
,\ldots,C_{c}$ and let $S$ be a biactive branch-resolving set in $G$. If $S$
is a vertex (resp. an edge) metric generator in $G,$ then there is no pair of
cycles in $G$ which are vertex-critically (resp. edge-critically) incident
with respect to $S$.
\end{lemma}
\begin{proof}
Let $S$ be a vertex (resp. an edge) metric generator in $G.$ Suppose the
contrary, i.e. there are two distinct cycles $C_{i}$ and $C_{j}$ in $G$ which
are vertex-critically (resp. edge-critically) incident with respect to $S$.
This implies that $C_{i}$ and $C_{j}$ share a vertex $v$ which is
vertex-critical (resp. edge-critical) on both $C_{i}$ and $C_{j}$. Let $x$ and
$x^{\prime}$ be a pair of vertices (resp. edges) which are neighbors (resp.
incident) to $v$ on cycles $C_{i}$ and $C_{j}$ respectively, but which are not
contained on paths $P_{i}$ and $P_{j}.$ The length of paths $P_{i}$ and
$P_{j}$ which is required by the definition of a vertex-critical (resp.
edge-critical) vertex implies that a shortest path from both $x$ and
$x^{\prime}$ to all vertices from $P_{i}$ and $P_{j}$ leads through $v.$ Since
$P_{i}$ and $P_{j}$ contain all $S$-active vertices on $C_{i}$ and $C_{j},$
this further implies that a shortest path from both $x$ and $x^{\prime}$ to
all vertices from $S$ leads through $v.$ Since $d(x,v)=d(x^{\prime},v)$, it
follows that $x$ and $x^{\prime}$ are not distinguished by $S,$ a contradiction.
\end{proof}
Each of Lemmas \ref{Lemma_regionGenerator} and \ref{Lemma_incidence} gives a
necessary condition for a biactive branch-resolving set $S$ to be a vertex
(resp. an edge) metric generator in a cactus graph $G$. Let us now show that
these two necessary conditions taken together form a sufficient condition for
$S$ to be a vertex (resp. an edge) metric generator.
\begin{lemma}
\label{Lemma_sufficient}Let $G$ be a cactus graph with $c$ cycles
$C_{1},\ldots,C_{c}$ and let $S$ be a biactive branch-resolving set in $G$. If
a regional set $S_{i}$ is a vertex (resp. an edge) metric generator in the
unicyclic region $G_{i}$ for every $i=1,\ldots,c$ and there are no
vertex-critically (resp. edge-critically) incident cycles in $G,$ then $S$ is
a vertex (resp. an edge) metric generator in $G.$
\end{lemma}
\begin{proof}
Let $x$ and $x^{\prime}$ be a pair of vertices (resp. edges) from $G.$ We want
to show that $S$ distinguishes $x$ and $x^{\prime}.$ In order to do so, we
distinguish the following two cases.
\noindent\textbf{Case 1:}\emph{ }$x$\emph{ and }$x^{\prime}$\emph{
belong to a same unicyclic region }$G_{i}$\emph{ of }$G.$ Since the regional
set $S_{i}$ is a vertex (resp. an edge) metric generator in $G_{i},$ there is
a vertex $s\in S_{i}$ which distinguishes $x$ and $x^{\prime}$ in $G_{i}.$ If
$s\in S,$ then the fact that $G_{i}$ is an isometric subgraph of $G$ implies
that the pair $x$ and $x^{\prime}$ is distinguished by the same $s$ in $G$ as
well. Assume therefore that $s\not \in S,$ so the definition of the regional
set $S_{i}$ implies that $s$ is a boundary vertex of $G_{i}.$ Let $s^{\prime
}\in S$ be a vertex in $G$ such that the shortest path from $s^{\prime}$ to
both $x$ and $x^{\prime}$ leads through the boundary vertex $s.$ Recall that
such a vertex $s^{\prime}$ must exist since $S$ is biactive. The fact that $s$
distinguishes $x$ and $x^{\prime}$ in $G_{i}$, implies $d(x,s)\not =
d(x^{\prime},s),$ which further implies
\[
d(x,s^{\prime})=d(x,s)+d(s,s^{\prime})\not =d(x^{\prime},s)+d(s,s^{\prime
})=d(x^{\prime},s^{\prime}),
\]
so the pair $x$ and $x^{\prime}$ is distinguished by $S$ in $G$.
\noindent\textbf{Case 2:} $x$\emph{ and }$x^{\prime}$\emph{ do not
belong to a same unicyclic region of }$G.$ Let us assume that $x$ belongs to
$G_{i}$ and $x^{\prime}$ does not belong to $G_{i},$ and say it belongs to
$G_{j}$ for $j\not =i$. If $x$ and $x^{\prime}$ are distinguished by a vertex
$s\in S\cap V(G_{i}),$ then the claim is proven, so let us assume that $x$ and
$x^{\prime}$ are not distinguished by any $s\in S\cap V(G_{i}).$ Since $x$ and
$x^{\prime}$ do not belong to a same unicyclic region, there exists a boundary
vertex $b$ of the unicyclic region $G_{i}$ such that the shortest path from
$x$ to $x^{\prime}$ leads through $b.$ Let $s_{b}$ be a vertex from $S$ such
that the shortest path from $x$ to $s_{b}$ also leads through $b,$ which must
exist since $S$ is biactive. We want to prove that $x$ and $x^{\prime}$ are
distinguished by $s_{b}.$ Let us suppose the contrary, i.e. $d(x,s_{b}
)=d(x^{\prime},s_{b}).$ Then we have the following
\[
d(x,b)+d(b,s_{b})=d(x,s_{b})=d(x^{\prime},s_{b})\leq d(x^{\prime}
,b)+d(b,s_{b}),
\]
from which we obtain
\begin{equation}
d(x,b)\leq d(x^{\prime},b). \label{For_riste}
\end{equation}
Now, we distinguish the following two subcases.
\noindent\textbf{Subcase 2.a:} $b$\emph{ does not belong to }
$V(C_{i}).$ Notice that by the definition of the unicyclic region, any acyclic
structure hanging at $b$ in $G$ is not included in $G_{i}$, as is illustrated
by $b_{2}$ from Figure \ref{Fig_region}, which implies $b$ is a leaf in
$G_{i}.$ Let $b^{\prime}$ be the only neighbor of $b$ in $G_{i}.$ The
inequality (\ref{For_riste}) further implies $d(x,b^{\prime})<d(x^{\prime
},b^{\prime})$ since $x$ belongs to $G_{i}$ and $x^{\prime}$ does not. Let
$v_{0}$ be the vertex from $C_{i}$ closest to $b,$ which implies $v_{0}$ is
$S$-active on $C_{i}.$ Let $v_{k}$ be an $S$-active vertex on $C_{i}$ distinct
from $v_{0},$ such a vertex $v_{k}$ must exist on $C_{i}$ because we assumed
$S$ is biactive. So, we have
\[
d(x,v_{k})\leq d(x,b^{\prime})+d(b^{\prime},v_{k})<d(x^{\prime},b^{\prime
})+d(b^{\prime},v_{k})=d(x^{\prime},v_{k}).
\]
Let $s_{k}$ be a vertex from $S$ which belongs to the connected component of
$G-E(C_{i})$ containing $v_{k}.$ Then we have
\[
d(x,s_{k})\leq d(x,v_{k})+d(v_{k},s_{k})<d(x^{\prime},v_{k})+d(v_{k}
,s_{k})=d(x^{\prime},s_{k}).
\]
Therefore, $S$ distinguishes $x$ and $x^{\prime},$ so $S$ is a vertex (resp.
an edge) metric generator in $G.$
\noindent\textbf{Subcase 2.b:} $b$\emph{ belongs to }$V(C_{i}).$ Since
$b$ is a boundary vertex of the unicyclic region $G_{i},$ this implies there
is a cycle $C_{l},$ for $l\not =i,$ such that $b\in V(C_{l})$. Therefore,
cycles $C_{i}$ and $C_{l}$ share the vertex $b.$ Notice that any acyclic
structure hanging at $b$ belongs to both $G_{i}$ and $G_{l}$, as is
illustrated by $b_{1}$ from Figure \ref{Fig_region}. If $x^{\prime}$ belongs
to $G_{l},$ then neither $x$ nor $x^{\prime}$ can belong to an acyclic
structure hanging at $b,$ as we assumed that $x$ and $x^{\prime}$ do not
belong to a same component. On the other hand, if $x^{\prime}$ does not belong
to $G_{l}$ and $x$ belongs to an acyclic structure hanging at $b,$ then we
switch $G_{i}$ by $G_{l}$ and assume that $x$ belongs to $G_{l}.$ This way we
assure that neither $x$ not $x^{\prime}$ belong to an acyclic structure
hanging at $b.$
If $d(x,b)<d(x^{\prime},b),$ let $v_{k}$ be an $S$-active vertex on $C_{i}$
distinct from $b,$ which must exist as $S$ is biactive. From (\ref{For_riste})
we obtain
\[
d(x,v_{k})\leq d(x,b)+d(b,v_{k})<d(x^{\prime},b)+d(b,v_{k})=d(x^{\prime}
,v_{k}),
\]
so, as in the previous subcase, $x$ and $x^{\prime}$ are distinguished by
a vertex $s_{k}\in S$ which belongs to the connected component of $G-E(C_{i})$
which contains $v_{k}.$
Therefore, assume that $d(x,b)=d(x^{\prime},b).$ If a shortest path from $x$
to all $S$-active vertices on $C_{i}$ and a shortest path from $x^{\prime}$ to
all $S$-active vertices on $C_{l}$ leads through $b,$ then $x^{\prime}$
belongs to $C_{l},$ i.e. $j=l$, and the pair of cycles $C_{i}$ and $C_{l}$ are
vertex-critically (resp. edge-critically) incident, a contradiction. So, we
may assume there is an $S$-active vertex $v_{k}$, say on $C_{i}$, such that a
shortest path from $x$ to $v_{k}$ does not lead through $b.$ Therefore,
\[
d(x,v_{k})<d(x,b)+d(b,v_{k})=d(x^{\prime},b)+d(b,v_{k})=d(x^{\prime},v_{k}).
\]
But now, as in the previous cases, we have that $x$ and $x^{\prime}$ are
distinguished by a vertex $s_{k}\in S$ which is contained in the connected
component of $G-E(C_{i})$ containing $v_{k}.$
\end{proof}
Let us now relate these results with configurations $\mathcal{A},$
$\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{E}$.
\begin{lemma}
\label{Obs_RegionConfiguration}Let $G$ be a cactus graph and let $S$ be a
biactive branch-resolving set in $G.$ A cycle $C_{i}$ of the graph $G$
contains configuration $\mathcal{A}$ (or $\mathcal{B}$ or $\mathcal{C}$ or
$\mathcal{D}$ or $\mathcal{E}$) with respect to $S$ in $G$ if and only if
$C_{i}$ contains the respective configuration with respect to $S_{i}$ in
$G_{i}.$
\end{lemma}
\begin{proof}
Let $G$ be a cactus graph with cycles $C_{1},\ldots,C_{c}$ and let $S$ be a
biactive branch-resolving set in $G.$ Since $S$ is a biactive set, for every
boundary vertex $b$ in a unicyclic region $G_{i},$ there is a vertex $s\in S$
such that the shortest path from $s$ to $C_{i}$ leads through $b,$ as it is
shown in Figure \ref{Fig_region}. This implies that the set of $S$-active
vertices on $C_{i}$ in $G$ is the same as the set of $S_{i}$-active vertices
on $C_{i}$ in $G_{i}.$ Since the presence of configurations $\mathcal{A}$,
$\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{E}$ on a cycle
$C_{i}$, with respect to a set $S,$ by definition depends on the position of
$S$-active vertices on $C_{i},$ the claim follows.
\end{proof}
Notice that Lemmas \ref{Lemma_regionGenerator}, \ref{Lemma_incidence} and
\ref{Lemma_sufficient} give us a condition for $S$ to be a vertex (resp. an
edge) metric generator in a cactus graph, which is both necessary and
sufficient. In the light of Lemma \ref{Obs_RegionConfiguration}, we can
further apply Theorem \ref{Lemma_configurations} to obtain the following
result which unifies all our results.
\begin{theorem}
\label{Cor_generatorCharacterization}Let $G$ be a cactus graph with $c$ cycles
$C_{1},\ldots,C_{c}$ and let $S$ be a biactive branch-resolving set in $G$.
The set $S$ is a vertex (resp. an edge) metric generator if and only if no
cycle $C_{i}$ contains any of the configurations $\mathcal{A}$,
$\mathcal{B}$, and $\mathcal{C}$ (resp. $\mathcal{A}$, $\mathcal{D}$, and
$\mathcal{E}$) and there are no vertex-critically (resp. edge-critically)
incident cycles in $G$ with respect to $S$.
\end{theorem}
\begin{proof}
Let $S$ be a vertex (resp. an edge) metric generator in $G$. Then Lemma
\ref{Lemma_regionGenerator} implies that $S_{i}$ is a vertex (resp. edge)
metric generator in the unicyclic region $G_{i}$ for every $i$, and Theorem
\ref{Lemma_configurations} further implies that no cycle contains
any of the configurations $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$
(resp. $\mathcal{A}$, $\mathcal{D}$, and $\mathcal{E}$) with respect to $S.$
Also, Lemma \ref{Lemma_incidence} implies that there are no vertex-critically
(resp. edge-critically) incident cycles in $G$ with respect to $S$.
The other direction is the consequence of Lemma \ref{Lemma_sufficient} and
Theorem \ref{Lemma_configurations}.
\end{proof}
\section{Metric dimensions in cacti}
\begin{figure}
\caption{A cactus graph with three cycles and two different smallest biactive
branch-resolving sets.}
\label{Fig_avoiding}
\end{figure}

Let $G$ be a cactus graph and $S$ a smallest biactive
branch-resolving set in $G.$ Then
\[
\left\vert S\right\vert =L(G)+B(G),
\]
where $B(G)=\sum_{i=1}^{c}\max\{0,2-b(C_{i})\}.$ If $b(C_{i})\geq2$, then the
set of $S$-active vertices on $C_{i}$ is the same for all smallest biactive
branch-resolving sets $S$. The set of $S$-active vertices may differ only on
cycles $C_{i}$ with $b(C_{i})<2.$ Therefore, such a cycle $C_{i}$ may contain
one of the configurations with respect to one smallest biactive
branch-resolving set, but not with respect to another. This is illustrated by
Figure \ref{Fig_avoiding}.
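To illustrate the quantity $B(G),$ suppose that a cactus graph $G$ has three
cycles with $b(C_{1})=b(C_{2})=1$ and $b(C_{3})=3.$ Then
\[
B(G)=\max\{0,2-1\}+\max\{0,2-1\}+\max\{0,2-3\}=1+1+0=2,
\]
so every smallest biactive branch-resolving set in such a graph contains
$L(G)+2$ vertices.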
\begin{definition}
We say that a cycle $C_{i}$ from a cactus graph $G$ is $\mathcal{ABC}
$\emph{-negative} (resp. $\mathcal{ADE}$\emph{-negative}), if there exists a
smallest biactive branch-resolving set $S$ in $G$ such that $C_{i}$ does not
contain any of the configurations $\mathcal{A},$ $\mathcal{B},$ and
$\mathcal{C}$ (resp. $\mathcal{A},$ $\mathcal{D},$ and $\mathcal{E}$) with
respect to $S.$ Otherwise, we say that $C_{i}$ is $\mathcal{ABC}
$\emph{-positive} (resp. $\mathcal{ADE}$\emph{-positive}). The number of
$\mathcal{ABC}$-positive (resp. $\mathcal{ADE}$-positive) cycles in $G$ is
denoted by $c_{\mathcal{ABC}}(G)$ (resp. $c_{\mathcal{ADE}}(G)$).
\end{definition}
For two distinct smallest biactive branch-resolving sets $S$, the set of
$S$-active vertices may differ only on cycles with $b(C_{i})\leq1.$ Let
$C_{i}$ and $C_{j}$ be two such cycles in $G$ and notice that the choice of
the vertices included in $S$ from the region of $C_{i}$ is independent of the
choice from $C_{j}.$ Therefore, there exists at least one smallest biactive
branch-resolving set $S$ such that every $\mathcal{ABC}$-negative (resp.
$\mathcal{ADE}$-negative) cycle avoids the three configurations with respect to $S.$
Notice that there may exist more than one such set $S,$ and in that case they
all have the same size, so among them we may choose one with the smallest
number of vertex-critical (resp. edge-critical) incidences. Therefore, we say
that a smallest biactive branch-resolving set $S$ is \emph{nice} if every
$\mathcal{ABC}$-negative (resp. $\mathcal{ADE}$-negative) cycle $C_{i}$ does
not contain the three configurations with respect to $S$ and the number of
vertex-critically (resp. edge-critically) incident pairs of cycles with
respect to $S$ is the smallest possible. The niceness of a smallest biactive
branch-resolving set is illustrated by Figure \ref{Fig_optimality}.
\begin{figure}
\caption{A cactus graph with three cycles and two different smallest biactive
branch-resolving sets.}
\label{Fig_optimality}
\end{figure}
A set $S\subseteq V(G)$ is a \emph{vertex cover} if it contains at least one
end-vertex of every edge in $G.$ The cardinality of a smallest vertex cover in
$G$ is the \emph{vertex cover number} denoted by $\tau(G).$ Now, let $G$ be a
cactus graph and let $S$ be a nice smallest biactive branch-resolving set in
$G.$ We define the \emph{vertex-incident graph} $G_{vi}$ (resp.
\emph{edge-incident graph} $G_{ei}$) as a graph containing a vertex for every
cycle in $G,$ where two vertices are adjacent if the corresponding cycles in
$G$ are $\mathcal{ABC}$-negative and vertex-critically incident (resp.
$\mathcal{ADE}$-negative and edge-critically incident) with respect to $S$.
For example, if we consider the cactus graph $G$ from Figure \ref{Fig_example}
, then $V(G_{vi})=V(G_{ei})=\{C_{i}:i=1,\ldots,7\},$ where $E(G_{vi}
)=\{C_{3}C_{4},C_{4}C_{5}\}$ and $E(G_{ei})=\{C_{2}C_{3},C_{3}C_{4},C_{4}
C_{5}\}.$
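In this example one can also read off the vertex cover numbers directly: the
vertex $C_{4}$ covers both edges of $G_{vi},$ so $\tau(G_{vi})=1,$ while the
three edges of $G_{ei}$ form a path on the vertices $C_{2},C_{3},C_{4},C_{5},$
so $\tau(G_{ei})=2.$ These values agree with the ones used in Example
\ref{Example_calc}.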
We are now in a position to establish the following theorem which gives us the
value of the vertex and the edge metric dimensions in a cactus graph.
\begin{theorem}
\label{Tm_dim}Let $G$ be a cactus graph. Then
\[
\mathrm{dim}(G)=L(G)+B(G)+c_{\mathcal{ABC}}(G)+\tau(G_{vi}),
\]
and
\[
\mathrm{edim}(G)=L(G)+B(G)+c_{\mathcal{ADE}}(G)+\tau(G_{ei}).
\]
\end{theorem}
\begin{proof}
If there is a cycle in $G$ with $b(C)=0,$ then $G$ is a unicyclic graph. For
unicyclic graphs with $b(C)=0$ we have $B(G)=2$ and if the three
configurations $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ (resp.
$\mathcal{A}$, $\mathcal{D}$, $\mathcal{E}$) cannot be avoided by choosing two
vertices into $S,$ then $c_{\mathcal{ABC}}(G)=1$ (resp. $c_{\mathcal{ADE}
}(G)=1$) and also a third vertex must be introduced into $S,$ so the claim
holds. In all other situations $B(G)$ equals the number of cycles in $G$ with
$b(C_{i})=1.$
Let $S$ be a smallest vertex (resp. edge) metric generator in $G$. Due to
Lemma \ref{Lemma_biactive_branchResolving} the set $S$ must be
branch-resolving. Let $S_{1}\subseteq S$ be a smallest branch-resolving set
contained in $S,$ so $\left\vert S_{1}\right\vert =L(G).$ Since according to
Lemma \ref{Lemma_biactive_branchResolving} the set $S$ must also be biactive,
let $S_{2}\subseteq S\backslash S_{1}$ be a smallest set such that $S_{1}\cup
S_{2}$ is biactive. Obviously, $S_{1}\cap S_{2}=\phi$ and $\left\vert
S_{2}\right\vert =B(G).$
Since $S_{1}\cup S_{2}$ is a smallest biactive branch-resolving set in $G,$ it
follows that every $\mathcal{ABC}$-positive (resp. $\mathcal{ADE}$-positive)
cycle in $G$ contains at least one of the three configurations with respect to
$S_{1}\cup S_{2},$ so according to Theorem \ref{Cor_generatorCharacterization}
the set $S_{1}\cup S_{2}$ is not a vertex (resp. an edge) metric generator in
$G.$ Therefore, each $\mathcal{ABC}$-positive (resp. $\mathcal{ADE}$-positive)
cycle $C_{i}$ must contain a vertex $s_{i}\in S\backslash(S_{1}\cup S_{2}),$
where we may assume that $s_{i}$ is chosen so that it forms a geodesic triple
on $C_{i}$ with vertices from $S_{1}\cup S_{2},$ so according to Remark
\ref{Obs_geodTriple} the cycle $C_{i}$ will not contain any of the
configurations with respect to $S.$ Denote by $S_{3}$ the set of vertices
$s_{i}$ from every $\mathcal{ABC}$-positive (resp. $\mathcal{ADE}$-positive)
cycle in $G.$ Obviously, $S_{3}\subseteq S,$ the sets $S_{1},$ $S_{2}$ and $S_{3}$ are pairwise disjoint,
and $\left\vert S_{3}\right\vert =c_{\mathcal{ABC}}(G)$ (resp. $\left\vert
S_{3}\right\vert =c_{\mathcal{ADE}}(G)$).
Notice that $S_{1}\cup S_{2}\cup S_{3}$ is a biactive branch-resolving set in
$G$ such that every cycle $C_{i}$ in $G$ does not contain any of the
configurations $\mathcal{A},$ $\mathcal{B},$ $\mathcal{C}$ (resp.
$\mathcal{A},$ $\mathcal{D},$ $\mathcal{E}$) with respect to it. Notice that
$S_{1}\cup S_{2}\cup S_{3}$ still may not be a vertex (resp. an edge) metric
generator, as there may exist vertex-critically (resp. edge-critically)
incident cycles in $G$ with respect to $S_{1}\cup S_{2}\cup S_{3}.$ Since $S$
is a smallest vertex (resp. edge) metric generator, we may assume that $S_{2}$
is chosen so that a smallest biactive branch-resolving set $S_{1}\cup S_{2}$
is nice. Therefore, the graph $G_{vi}$ (resp. $G_{ei}$) contains an edge for
every pair of cycles in $G$ which are $\mathcal{ABC}$-negative and
vertex-critically incident (resp. $\mathcal{ADE}$-negative and edge-critically
incident) with respect to $S_{1}\cup S_{2}.$ Let us denote $S_{4}
=S\backslash(S_{1}\cup S_{2}\cup S_{3}).$ For each edge $xy$ in $G_{vi}$
(resp. $G_{ei}$) the set $S_{4}$ must contain a vertex from $C_{x}$ or
$C_{y},$ chosen so that it forms a geodesic triple of $S$-active vertices on
$C_{x}$ or $C_{y}$ with other vertices from $S.$ Therefore, $S_{4}$ must
contain at least $\tau(G_{vi})$ (resp. $\tau(G_{ei})$) vertices in order for
$S$ to be a vertex (resp. an edge) metric generator. Since $S$ is a smallest
vertex (resp. edge) metric generator, it must hold $\left\vert S_{4}
\right\vert =\tau(G_{vi})$ (resp. $\left\vert S_{4}\right\vert =\tau(G_{ei})$).
We have established that $S=S_{1}\cup S_{2}\cup S_{3}\cup S_{4},$ where
the sets $S_{1},$ $S_{2},$ $S_{3}$ and $S_{4}$ are pairwise disjoint, so $\left\vert S\right\vert =\left\vert
S_{1}\right\vert +\left\vert S_{2}\right\vert +\left\vert S_{3}\right\vert
+\left\vert S_{4}\right\vert .$ Since we also established $\left\vert
S_{1}\right\vert =L(G),$ $\left\vert S_{2}\right\vert =B(G),$ $\left\vert
S_{3}\right\vert =c_{\mathcal{ABC}}(G)$ (resp. $\left\vert S_{3}\right\vert
=c_{\mathcal{ADE}}(G)$) and $\left\vert S_{4}\right\vert =\tau(G_{vi})$ (resp.
$\left\vert S_{4}\right\vert =\tau(G_{ei})$), the proof is finished.
\end{proof}
The formulas for the calculation of metric dimensions from the above theorem
are illustrated by the following examples.
\begin{example}
Let us consider the cactus graph $G$ from Figure \ref{Fig_avoiding}. The set
$S^{\prime}=\{s_{1}^{\prime},s_{2}^{\prime}\}$ is a nice smallest biactive
branch-resolving set in $G.$ But, since $C_{1}$ is both $\mathcal{ABC}$- and
$\mathcal{ADE}$-positive, the set $S^{\prime}$ is neither a vertex nor an edge
metric generator. Let $s_{3}^{\prime}$ be any vertex from $C_{1}$ which forms
a geodesic triple with two $S^{\prime}$-active vertices on $C_{1}.$ Then the
set $S=\{s_{1}^{\prime},s_{2}^{\prime},s_{3}^{\prime}\}$ is a smallest vertex
(resp. edge) metric generator, so we obtain
\[
\mathrm{dim}(G)=L(G)+B(G)+c_{\mathcal{ABC}}(G)+\tau(G_{vi})=0+2+1+0=3.
\]
and
\[
\mathrm{edim}(G)=L(G)+B(G)+c_{\mathcal{ADE}}(G)+\tau(G_{ei})=0+2+1+0=3.
\]
\end{example}
\begin{figure}
\caption{A cactus graph $G$ from Example \ref{Example_calc}.}
\label{Fig_example}
\end{figure}
Let us now give an example of determining the vertex and the edge metric
dimensions of a somewhat larger cactus graph.
\begin{example}
\label{Example_calc}Let $G$ be the cactus graph from Figure \ref{Fig_example}.
The following table gives the choice and the number of vertices for every
expression in the formulas for metric dimensions from Theorem \ref{Tm_dim}:
\[
\begin{tabular}
[c]{|l||l|l|}\hline
& vertices & value\\\hline\hline
$L(G)$ & $s_{1},s_{2},s_{3},s_{4}$ & $4$\\\hline
$B(G)$ & $s_{5}$ & $1$\\\hline
$c_{\mathcal{ABC}}(G)$ & $s_{6}$ & $1$\\\hline
$\tau(G_{vi})$ & $s_{7}$ & $1$\\\hline
$c_{\mathcal{ADE}}(G)$ & $s_{8},s_{9}$ & $2$\\\hline
$\tau(G_{ei})$ & $s_{10},s_{11}$ & $2$\\\hline
\end{tabular}
\ \ \
\]
Therefore, the set $S=\{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6},s_{7}\}$ is a
smallest vertex metric generator, so we obtain
\[
\mathrm{dim}(G)=L(G)+B(G)+c_{\mathcal{ABC}}(G)+\tau(G_{vi})=4+1+1+1=7.
\]
On the other hand, the set $S=\{s_{1},s_{2},s_{3},s_{4},s_{5},s_{8}
,s_{9},s_{10},s_{11}\}$ is a smallest edge metric generator, so we have
\[
\mathrm{edim}(G)=L(G)+B(G)+c_{\mathcal{ADE}}(G)+\tau(G_{ei})=4+1+2+2=9.
\]
\end{example}
Notice that $c_{\mathcal{ABC}}(G)\leq c.$ Also, if $\tau(G_{vi})\geq1$ then
$c_{\mathcal{ABC}}(G)+\tau(G_{vi})<c.$ The analogous statements hold for $c_{\mathcal{ADE}
}(G)$ and $\tau(G_{ei}).$ From this and Theorem \ref{Tm_dim} we immediately
obtain the following result.
\begin{corollary}
\label{Cor_boundB}Let $G$ be a cactus graph with $c$ cycles. Then
$\mathrm{dim}(G)\leq L(G)+B(G)+c$ and $\mathrm{edim}(G)\leq L(G)+B(G)+c.$
\end{corollary}
Further, notice that in a cactus graph with at least two cycles every cycle
has at least one branch-active vertex. Therefore, in such a cactus graph $G,$
we have $B(G)=\sum_{i=1}^{c}\max\{0,2-b(C_{i})\}\leq c$ with equality holding
only if $b(C_{i})=1$ for every cycle $C_{i}$ in $G$. Since $c_{\mathcal{ABC}
}(G)+\tau(G_{vi})=c$ if and only if $c_{\mathcal{ABC}}(G)=c$ and $\tau
(G_{vi})=0,$ and the same holds for the edge version of the metric dimension,
Theorem \ref{Tm_dim} immediately implies the following simple upper bound on
the vertex and edge metric dimensions of a cactus graph $G$.
\begin{corollary}
Let $G$ be a cactus graph with $c\geq2$ cycles. Then
\[
\mathrm{dim}(G)\leq L(G)+2c
\]
with equality holding if and only if every cycle in $G$ is $\mathcal{ABC}
$-positive and contains precisely one branch-active vertex.
\end{corollary}
\begin{corollary}
Let $G$ be a cactus graph with $c\geq2$ cycles. Then
\[
\mathrm{edim}(G)\leq L(G)+2c
\]
with equality holding if and only if every cycle in $G$ is $\mathcal{ADE}
$-positive and contains precisely one branch-active vertex.
\end{corollary}
Notice that the upper bound from the above corollaries may not hold for $c=1$,
i.e. for unicyclic graphs, as for the cycle $C$ of a unicyclic graph it may hold
that $b(C)=0.$ As for the tightness of these bounds, we have the following proposition.
\begin{proposition}
For every pair of integers $b\geq0$ and $c\geq2,$ there is a cactus graph $G$
with $c$ cycles such that $L(G)=b$ and $\mathrm{dim}(G)=\mathrm{edim}
(G)=L(G)+2c.$
\end{proposition}
\begin{proof}
For a given pair of integers $b\geq0$ and $c\geq2,$ we construct a cactus
graph $G$ in a following way. Let $G_{0}$ be a graph on $b+2$ vertices, with
one vertex $u$ of degree $b+1$ and all other vertices of degree $1$, i.e.
$G_{0}$ is a star graph. Let $H$ be a graph obtained from the $6$-cycle by
introducing a leaf to it and let $G_{1},\ldots,G_{c}$ be $c$ vertex disjoint
copies of $H$. Denote by $v_{i}$ the only vertex of degree $3$ in $G_{i}.$ Let
$G$ be a graph obtained from $G_{0},G_{1},\ldots,G_{c}$ by connecting them
with an edge $uv_{i}$ for $i=1,\ldots,c$. Obviously, $G$ is a cactus graph
with $c$ cycles and $L(G)=b$. On each of the cycles in $G$ the vertex $v_{i}$
is the only branch-active vertex. If $S\subseteq V(G)$ is a biactive
branch-resolving set in $G$ such that there is a cycle $C_{i}$ in $G$ with
only two $S$-active vertices, then because of the leaf hanging at $v_{i}$ the
cycle $C_{i}$ contains either configuration $\mathcal{A}$ (if the pair of
$S$-active vertices on $C_{i}$ is antipodal) or both configurations
$\mathcal{B}$ and $\mathcal{D}.$ Either way, $S$ is neither a vertex nor an edge
metric generator.
On the other hand, the set $S$ consisting of $b$ leaves hanging at $u$ in
$G_{0}$ and a pair of vertices from each $6$-cycle which form a geodesic
triple with $v_{i}$ on the cycle is both a vertex and an edge metric generator
in $G.$ Since $\left\vert S\right\vert =b+2c=L(G)+2c,$ the claims hold.
\end{proof}
\section{An application to zero forcing number}
The results from the previous section enable us to prove, for cactus graphs, a
conjecture posed in the literature \cite{Eroh} which involves the vertex metric
dimension, the zero forcing number and the cyclomatic number $c(G)=\left\vert
E(G)\right\vert -\left\vert V(G)\right\vert +1$ (which is sometimes called the
cycle rank number and denoted by $r(G)$) of a graph $G$. Notice that in a
cactus graph $G$ the cyclomatic number $c(G)$ equals the number of cycles in
$G.$ Let us first define the zero forcing number of a graph.
Assume that every vertex of a graph $G$ is assigned one of two colors, say
black and white, and denote the set of vertices which are initially black by
$S.$ If there is a black vertex with exactly one white neighbor, then the
\emph{color-change rule} converts this white neighbor to black. This
is one iteration of the color-change rule, and the rule can be applied iteratively. A
\emph{zero forcing set} is any set $S\subseteq V(G)$ such that all vertices of
$G$ are colored black after applying the color-change rule finitely many
times. The cardinality of a smallest zero forcing set in a graph $G$ is
called the \emph{zero forcing number} of $G$ and it is denoted by $Z(G).$
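For instance, for a path $P_{n}$ a single end-vertex forms a zero forcing set,
since the color-change rule propagates along the path, so $Z(P_{n})=1,$ while
on a cycle $C_{n}$ a single black vertex has two white neighbors and cannot
force, but two adjacent black vertices suffice, so $Z(C_{n})=2.$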
In \cite{Eroh} it was proven that for a unicyclic graph $G$ it holds that
$\mathrm{dim}(G)\leq Z(G)+1$, and the following was further conjectured.
\begin{conjecture}
\label{Con_zero}For any graph $G$ it holds that $\mathrm{dim}(G)\leq
Z(G)+c(G).$
\end{conjecture}
Moreover, they proved for cacti with even cycles the bound $\mathrm{dim}
(G)\leq Z(G)+c(G).$ We will use our results to prove that the bound from the
above conjecture holds for all cacti.
\begin{proposition}
Let $G$ be a cactus graph. Then $\mathrm{dim}(G)\leq Z(G)+c(G)$ and
$\mathrm{edim}(G)\leq Z(G)+c(G).$
\end{proposition}
\begin{proof}
Due to Corollary \ref{Cor_boundB} it is sufficient to prove that
$L(G)+B(G)\leq Z(G).$ Let $S\subseteq V(G)$ be a zero forcing set in $G.$ Let
us first show that $S$ must be a branch-resolving set. Assume the contrary,
i.e. that $S$ is not a branch-resolving set and let $v\in V(G)$ be a vertex of
degree $\geq3$ with at least two $S$-free threads hanging at $v.$ But then $v$
has at least two white neighbors, one on each of the $S$-free threads hanging
at it, which cannot be colored black by $S,$ so $S$ is not a zero forcing set,
a contradiction.
Let $S_{1}\subseteq S$ be a smallest branch-resolving set contained in $S$ and
let $S_{2}=S\backslash S_{1}.$ Obviously, $\left\vert S_{1}\right\vert =L(G)$
and $S_{1}\cap S_{2}=\phi.$ We now wish to prove that $\left\vert
S_{2}\right\vert \geq B(G).$ If $G$ is a tree, then the claim obviously holds,
so let us assume that $G$ contains at least one cycle. Let $C_{i}$ be a cycle
in $G$ such that $b(C_{i})\leq1.$ If $b(C_{i})=0,$ then $G$ is a unicyclic
graph and $C_{i}$ the only cycle in $G$. Since $b(C_{i})=0,$ we have
$S_{1}=\phi,$ so $S_{2}=S.$ Since a zero forcing set in a unicyclic graph must
contain at least two vertices, we obtain $\left\vert S_{2}\right\vert
=\left\vert S\right\vert \geq2=B(G)$ and the claim is proven.
Assume now that for every cycle $C_{i}$ with $b(C_{i})\leq1$ it holds that
$b(C_{i})=1.$ Let $v$ be the branch-active vertex on such a cycle $C_{i}$ and
notice that $S_{1}$ can turn only $v$ black on $C_{i}.$ Therefore, in order
for $S$ to be a zero forcing set it follows that $S_{2}$ must contain a vertex
from every such cycle, i.e. $\left\vert S_{2}\right\vert \geq B(G).$
Therefore, $\left\vert S\right\vert =\left\vert S_{1}\right\vert +\left\vert
S_{2}\right\vert \geq L(G)+B(G).$
\end{proof}
The above proposition, besides proving for cacti the cycle rank conjecture
which was posed for $\mathrm{dim}(G),$ also gives a similar result for
$\mathrm{edim}(G).$ So, this motivates us to pose for $\mathrm{edim}(G)$ the
counterpart conjecture of Conjecture \ref{Con_zero}.
\section{Concluding remarks}
In \cite{SedSkreBounds} it was established that for a unicyclic graph $G$ both
vertex and edge metric dimensions are equal to $L(G)+\max\{2-b(G),0\}$ or
$L(G)+\max\{2-b(G),0\}+1.$ In \cite{SedSkreUnicyclic} a characterization under
which both of the dimensions take one of the two possible values was further
established. In this paper we extend the result to cactus graphs, where a
similar characterization must hold for every cycle of the graph, and an
additional condition on two cycles sharing a vertex must be
introduced. This result enabled us to prove the cycle rank conjecture for
cactus graphs.
Moreover, the results of this paper enabled us to establish a simple upper
bound on the value of the vertex and the edge metric dimension of a cactus
graph $G$ with $c$ cycles
\[
\mathrm{dim}(G)\leq L(G)+2c\quad\hbox{ and }\quad\mathrm{edim}(G)\leq
L(G)+2c.
\]
Since the number of cycles can be generalized to all graphs as the cyclomatic
number $c(G)=\left\vert E(G)\right\vert -\left\vert V(G)\right\vert +1,$ we
conjecture that the analogous bounds hold in general.
\begin{conjecture}
Let $G$ be a connected graph. Then, $\mathrm{dim}(G)\leq L(G)+2c(G).$
\end{conjecture}
\begin{conjecture}
Let $G$ be a connected graph. Then, $\mathrm{edim}(G)\leq L(G)+2c(G).$
\end{conjecture}
In \cite{SedSkrekMixed} it was shown that the inequality $\mathrm{mdim}
(G)<2c(G)$ holds for $3$-connected graphs. Since $\mathrm{dim}(G)\leq
\mathrm{mdim}(G)$ and $\mathrm{edim}(G)\leq\mathrm{mdim}(G),$ the previous two
conjectures obviously hold for $3$-connected graphs.
Also, motivated by the bound on edge metric dimension of cacti involving zero
forcing number, we state the following conjecture for general graphs, as a
counterpart of Conjecture \ref{Con_zero}.
\begin{conjecture}
Let $G$ be a connected graph. Then, $\mathrm{edim}(G)\leq Z(G)+c(G).$
\end{conjecture}
\noindent\textbf{Acknowledgments.}~~Both authors acknowledge partial
support of the Slovenian research agency ARRS program\ P1-0383 and ARRS
projects J1-1692 and J1-8130. The first author also acknowledges the support of Project
KK.01.1.1.02.0027, a project co-financed by the Croatian Government and the
European Union through the European Regional Development Fund - the
Competitiveness and Cohesion Operational Programme.
\end{document} | math |
\begin{document}
\title{Posterior Consistency for Gaussian Process Approximations of Bayesian Posterior Distributions}
\author{Andrew M. Stuart$^1$, Aretha L. Teckentrup$^1$}
\date{}
\maketitle
\noindent
$^1$ Mathematics Institute, Zeeman Building, University of Warwick, Coventry, CV4 7AL, England. \texttt{a.m.stuart@warwick.ac.uk, a.teckentrup@warwick.ac.uk}
\begin{abstract}
We {study} the use of Gaussian process emulators to approximate the parameter-to-observation map or the negative log-likelihood in Bayesian inverse problems. We prove error bounds on the Hellinger distance between the true posterior distribution and various approximations based on the Gaussian process emulator. Our analysis includes approximations based on the mean of the predictive process, as well as approximations based on the full Gaussian process emulator. Our results show that the Hellinger distance between the true posterior and its approximations can be bounded by moments of the error in the emulator. Numerical results confirm our theoretical findings.
\end{abstract}
{\em Keywords}: inverse problem, Bayesian approach, surrogate model, Gaussian process regression, posterior consistency
{\em AMS 2010 subject classifications}: 60G15, 62G08, 65D05, 65D30, 65J22
\section{Introduction}
{Given a} {mathematical model} of a physical process, we are interested in the inverse problem of determining the inputs to the model given some noisy observations related to the model outputs. Adopting a Bayesian {approach
\cite{kaipio2005statistical,stuart10}}, we incorporate our prior knowledge of the inputs into a probability distribution, referred to as the {\em prior distribution}, and obtain a more accurate representation of the model inputs in the {\em posterior distribution}, which results from conditioning the prior distribution on the observations.
Since the posterior distribution is generally intractable, sampling methods such as Markov chain Monte Carlo (MCMC) \cite{hastings70,mrrtt53,robert_casella,cmps14,gc11,crsw13} are typically used {to explore it}. A major challenge in the application of MCMC methods to problems of practical interest is the large computational cost associated with numerically solving the mathematical model for a given set of the input parameters. Since the generation of each sample by the MCMC method requires a solve of the governing equations, and often millions of samples are required, this process can quickly become very costly.
{This drawback of fully Bayesian inference for complex models was
recognised several decades ago in the statistics literature, and
resulted in key papers which had a profound influence on methodology
\cite{sacks1989design,kennedy2001bayesian,o2006bayesian}.
These papers advocated
the use of a Gaussian process surrogate model to approximate the solution of the
governing equations, and in particular the likelihood,
at a much lower computational cost. This approximation then results in an approximate posterior distribution, which can be sampled more cheaply using MCMC.
However, despite the widespread adoption of the methodology, there has been
little analysis of the effect of the approximation on posterior
inference. In this work, we study this issue, focussing on the use of Gaussian process emulators \cite{rasmussen_williams,stein,sacks1989design,kennedy2001bayesian,o2006bayesian,brsrwm08,hkccr04} as surrogate models.
Other choices of surrogate models such as those described in \cite{bwg08,akksstv06}, generalised Polynomial Chaos \cite{xk03,mnr07}, sparse grid collocation \cite{bnt10,mx09} and
adaptive subspace methods \cite{constantine2014active,constantine2015active}
might also be studied similarly, but are not considered here. Indeed
we note that the paper \cite{mx09} studied the effect, on the
posterior distribution, of stochastic collocation approximation within
the forward model and was one of the first papers to address such questions.
That paper used the Kullback-Leibler divergence, or relative entropy,
to measure the effect on the posterior, and considered finite dimensional
input parameter spaces.
}
{ The main focus of this work is to analyse the error introduced in the posterior distribution by using a Gaussian process emulator as a surrogate model. The error is measured in the Hellinger distance, which {is shown in \cite{stuart10,ds15} to be a suitable metric for evaluation of perturbations to the posterior measure in Bayesian inverse problems, including problems with infinite dimensional input parameter spaces. We consider emulating either the parameter-to-observation map or the negative log-likelihood.} The convergence results presented in this paper are of two types. In section \ref{sec:gp}, we present convergence results for simple Gaussian process emulators applied to a general function $f$ satisfying suitable regularity assumptions. In section \ref{sec:gp_app}, we prove bounds on the error in the posterior distribution in terms of the error in the Gaussian process emulator. The novel contributions of this work are mainly in section \ref{sec:gp_app}. The results in the two sections can be combined to give a final error estimate for the simple Gaussian process emulators presented in section \ref{sec:gp}. However, the error bounds derived in section \ref{sec:gp_app} are much more general in the sense that they apply to any Gaussian process emulator satisfying the required assumptions. A short discussion on extensions of this work related to Gaussian process emulators used in practice is included in the conclusions in section \ref{sec:conc}.}
{ We study three different approximations to the posterior distribution. Firstly, we consider using the mean of the Gaussian process emulator as a surrogate model, resulting in a deterministic approximation to the posterior distribution. Our second approximation is obtained by using the full Gaussian process as a surrogate model, leading to a random approximation, in which case we study the second moment of the Hellinger distance between the true and the approximate posterior distribution. The uncertainty in the posterior distribution introduced in this way can be thought of as representing the uncertainty in the emulator due to the finite number of function evaluations used to construct it. This uncertainty can in applications be larger than (or comparable to) the uncertainty present in the observations, and a user may want to take this into account to ``inflate'' the variance of the posterior distribution. Finally, we construct an alternative deterministic approximation by using the full Gaussian process as a surrogate model, and taking the expected value (with respect to the distribution of the surrogate) of the likelihood. It can be shown that this approximation of the likelihood is optimal in the sense that it minimises the $L^2$-error \cite{sn16}. In contrast to the approximation based on only the mean of the emulator, this approximation also takes into account the uncertainty of the emulator, although only in an averaged sense.
}
For the three approximations discussed above, we show that the Hellinger distance between the true and approximate posterior distribution can be bounded by the error between the true parameter-to-observation map (or log-likelihood) and its Gaussian process approximation, measured in a norm that depends on the approximation considered. {Our analysis is restricted to finite dimensional input spaces. This
reflects the state-of-the-art with respect to Gaussian process
emulation itself; the analysis of the effect on the posterior is
less sensitive to dimension.} {For simplicity, we also restrict our attention to bounded parameters, i.e. parameters in a compact subset of $\mathbb{R}^K$ for some $K \in \mathbb{N}$, and to problems where the parameter-to-observation map is uniformly bounded.}
{ The convergence results on Gaussian process regression presented in section \ref{sec:gp} are mainly known results from the theory of scattered data interpolation \cite{wendland,sss13,nww06}. The error bounds are given in terms of the fill distance of the design points used to construct the Gaussian process emulator, and depend in several ways on the number $K$ of input parameters we want to infer. Firstly, when looking at the error in terms of the number of design points used, rather than the fill distance of these points, the rate of convergence typically deteriorates with the number of parameters $K$. Secondly, the proof of these error estimates requires assumptions on the smoothness of the function being emulated, where the precise smoothness requirements depend on the Gaussian process emulator employed. For emulators based on Mat\'ern kernels \cite{matern}, we require these maps to be in a Sobolev space $H^s$, where $s > K/2$. We would like to point out here that it is not necessary for the function being emulated to be in the {\em reproducing kernel Hilbert space} (or {\em native space}) of the Mat\'ern kernel used in order to prove convergence (cf Proposition \ref{prop:mean_conv_int}), but that it suffices to be in a larger Sobolev space in which point evaluations are bounded linear functionals.
}
The remainder of this paper is organised as follows. In section \ref{sec:inv}, we set up the Bayesian inverse problem of interest. We then recall some results on Gaussian process regression in section \ref{sec:gp}. The heart of the paper is section \ref{sec:gp_app}, where we introduce the different approximations to the posterior and perform an error analysis. Our theoretical results are confirmed on a simple model problem in section \ref{sec:num}, and some conclusions are finally given in section \ref{sec:conc}.
\section{Bayesian Inverse Problems}\label{sec:inv}
Let $X$ and $V$ be separable Banach spaces, and define the measurable mappings $G: X \rightarrow V$ and $\mathcal O : V \rightarrow \mathbb R^J$, for some $J \in \mathbb N$. Denote by $\mathcal G: X \rightarrow \mathbb R^J$ the composition of $\mathcal O$ and $G$. We refer to $G$ as the {\em forward map}, to $\mathcal O$ as the {\em observation operator} and to $\mathcal G$ as the {\em parameter-to-observation map}. We denote by $\|\cdot\|$ the Euclidean norm on $\mathbb R^n$, for $n \in \mathbb{N}$.
We consider the setting where the Banach space $X$ is a compact subset of $\mathbb{R}^K$, for some finite $K \in \mathbb{N}$, representing the range of a finite number $K$ of parameters $u$.
The inverse problem of interest is to determine the parameters $u \in X$ from the noisy data $y \in \mathbb{R}^J$ given by
\begin{equation*}
y = \mathcal G(u) + \eta,
\end{equation*}
where the noise $\eta$ is a realisation of the $\mathbb R^J$-valued Gaussian random variable $\mathcal N(0,\sigma_\eta^2 I)$, for some known variance $\sigma_\eta^2$.
We adopt a Bayesian perspective in which, in the absence of data, $u$ is distributed according to a prior measure $\mu_0$. We are interested in the posterior distribution $\mu^y$ on the conditioned random variable $u | y$, which can be characterised as follows.
\begin{proposition} (\cite{stuart10}) Suppose $\mathcal G : X \rightarrow \mathbb{R}^J$ is continuous and $\mu_0(X) = 1$. Then the posterior distribution $\mu^y$ on the conditioned random variable $u | y$ is absolutely continuous with respect to $\mu_0$ and given by Bayes' Theorem:
\begin{equation*}\label{eq:rad_nik}
\frac{d\mu^y}{d\mu_0}(u) = \frac{1}{Z} \exp\big(-\Phi(u)\big),
\end{equation*}
where
\begin{equation}\label{eq:def_like}
\Phi(u) = \frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2 \quad \text{and } \qquad Z = \mathbb{E}_{\mu_0}\Big(\exp\big(-\Phi(u)\big)\Big).
\end{equation}
\end{proposition}
We make the following assumption on the regularity of the parameter-to-observation map $\mathcal G$.
\begin{assumption}\label{ass:reg} {We assume that}
$\mathcal G : X \rightarrow \mathbb{R}^J$ satisfies $\mathcal G \in H^s(X; \mathbb{R}^J)$, for some $s > K/2$, {and that} $\sup_{u \in X} \|\mathcal G(u)\| =: C_\mathcal G < \infty$.
\end{assumption}
Under Assumption \ref{ass:reg}, it follows that the negative log-likelihood $\Phi : X \rightarrow \mathbb{R}$ satisfies $\Phi \in H^s(X)$, and $\sup_{u \in X} |\Phi(u)| =: C_\Phi < \infty$. Since $s > K/2$, the Sobolev Embedding Theorem furthermore implies that $\mathcal G$ and $\Phi$ are continuous.
Examples of model problems satisfying Assumption \ref{ass:reg} include linear elliptic and parabolic partial differential equations \cite{cohen2011analytic,ss14} and non-linear ordinary differential equations \cite{walter,hs13}. A specific example is given in section \ref{sec:num}.
{ Note that in Assumption \ref{ass:reg}, the smoothness requirement on $\mathcal G$ becomes stronger as $K$ increases. The reason for this is that in order to apply the results in section \ref{sec:gp}, we require $\mathcal G$ to be in a Sobolev space in which point evaluations are bounded linear functionals. The second part of Assumption \ref{ass:reg} is mainly included to define the constant $C_{\mathcal G}$, since the fact that $\sup_{u \in X} \|\mathcal G(u)\|$ is finite follows from the continuity of $\mathcal G$ and the compactness of $X$.}
\section{Gaussian {Process Regression}}\label{sec:gp}
We are interested in using Gaussian process regression to build a surrogate model for the forward map, leading to an approximate Bayesian posterior distribution that is computationally cheaper to evaluate. Generally speaking, Gaussian process regression (or Gaussian process emulation, or kriging) is a way of building an approximation to a function $f$, based on a finite number of evaluations of $f$ at a chosen set of {design points}. We will here consider emulation of either the parameter-to-observation map $\mathcal G: X \rightarrow \mathbb{R}^J$ or the negative log-likelihood $\Phi:X \rightarrow \mathbb{R}$. Since the efficient emulation of vector-valued functions is still an open question \cite{bzkl13}, we will focus on the emulation of scalar valued functions. An emulator of $\mathcal G$ in the case $J > 1$ is constructed by emulating each entry independently.
Let now $f : X \rightarrow \mathbb{R}$ be an arbitrary function. Gaussian process emulation is in fact a Bayesian procedure, and the starting point is to put a Gaussian process prior on the function $f$. In other words, we model $f$ as
\begin{equation}\label{eq:gp}
{f_0} \sim \text{GP}(m(u), k(u,u')),
\end{equation}
with known mean $m : X \rightarrow \mathbb{R}$ and two point covariance function $k : X \times X \rightarrow \mathbb{R}$, {assumed to be positive-definite.} Here, we use the Gaussian process notation as in, for example, \cite{rasmussen_williams}. In the notation of \cite{stuart10}, we have ${f_0} \sim \mathcal N(m,C)$, where $m=m(\cdot)$ and $C$ is the integral operator with covariance function $k$ as kernel.
Typical choices of the mean function $m$ include the zero function and polynomials \cite{rasmussen_williams}.
A family of covariance functions $k$ frequently used in applications is the Mat\'ern family \cite{matern}, given by
\begin{equation}\label{eq:mat_cov}
k_{\nu,\lambda,\sigma_k^2}(u,u') = \sigma_k^2 \, \frac{1}{\Gamma(\nu) 2^{\nu-1}} \left(\sqrt{2\nu} \frac{\|u-u'\|}{\lambda}\right)^\nu B_\nu\left(\sqrt{2\nu} \frac{\|u-u'\|}{\lambda}\right) ,
\end{equation}
where $\Gamma$ denotes the Gamma function, $B_\nu$ denotes the modified Bessel function of the second kind and $\nu, \lambda$ and $\sigma_k^2$ are positive parameters. The parameter $\lambda$ is referred to as the {\em correlation length}, and governs the length scale at which ${f_0}(u)$ and ${f_0}(u')$ are correlated. The parameter $\sigma_k^2$ is referred to as the {\em variance}, and governs the magnitude of ${f_0}(u)$. Finally, the parameter $\nu$ is referred to as the {\em smoothness parameter}, and governs the regularity of ${f_0}$ as a function of $u$.
As the limit when $\nu \rightarrow \infty$, we obtain the Gaussian covariance
\begin{equation}\label{eq:gauss_cov}
k_{\infty,\lambda,\sigma_k^2}(u,u') = \sigma_k^2 \exp \left(-\frac{\|u-u'\|^2}{2 \lambda^2}\right).
\end{equation}
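To make the preceding formulas concrete, the following short Python sketch (included for illustration only; it assumes NumPy and SciPy are available, and the function names are ours) evaluates the Mat\'ern covariance \eqref{eq:mat_cov} and its Gaussian limit \eqref{eq:gauss_cov}.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, kv   # Gamma function and modified Bessel K_nu

def matern_cov(u, v, nu=1.5, lam=1.0, sigma2=1.0):
    """Matern covariance k_{nu,lam,sigma2}(u, v) for points u, v."""
    r = np.linalg.norm(np.atleast_1d(u) - np.atleast_1d(v))
    if r == 0.0:
        return sigma2                          # limiting value as r -> 0
    z = np.sqrt(2.0 * nu) * r / lam
    return sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * z ** nu * kv(nu, z)

def gaussian_cov(u, v, lam=1.0, sigma2=1.0):
    """Gaussian covariance, the nu -> infinity limit of the Matern family."""
    r2 = np.sum((np.atleast_1d(u) - np.atleast_1d(v)) ** 2)
    return sigma2 * np.exp(-r2 / (2.0 * lam ** 2))
\end{verbatim}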
Now suppose we are given data in the form of a set of distinct {\em design points} $U := \{u^n\}_{n=1}^N \subseteq X$, together with corresponding function values
\begin{equation}\label{eq:data_exact}
f(U) := [f(u^1), \dots, f(u^N)] \in \mathbb{R}^N.
\end{equation}
{Since $f_0$ is a Gaussian process, the vector $[f_0(u^1), \dots, f_0(u^N), f_0(\tilde u^1), \dots, f_0(\tilde u^M)] \in \mathbb{R}^{N+M}$, for any set of test points $\{ \tilde u^m\}_{m=1}^M \subseteq X \setminus U$, follows a multivariate Gaussian distribution. The conditional distribution of $f_0(\tilde u^1), \dots, f_0(\tilde u^M)$, given the values $f_0(u^1) = f(u^1), \dots, f_0(u^N) = f(u^N)$, is then again Gaussian, with mean and covariance given by the standard formulas for the conditioning of Gaussian random variables \cite{rasmussen_williams}.}
Conditioning the Gaussian process \eqref{eq:gp} on the known values $f(U)$, we hence obtain another Gaussian process $f_N$, known as the {\em predictive process}. We have
\begin{equation}\label{eq:gp_pred}
f_N \sim \text{GP}(m^f_N(u), k_N(u,u')),
\end{equation}
where the predictive mean $m^f_N : X \rightarrow \mathbb{R}$ and predictive covariance $k_N : X \times X \rightarrow \mathbb{R}$ are known explicitly, and depend on the modelling choices made in \eqref{eq:gp}. In the following discussion, we will focus on the popular choice $m \equiv 0$; the case of a non-zero mean is discussed in Remark \ref{rem:mean}.
When $m \equiv 0$, we have
\begin{align}\label{eq:pred_eq}
m_N^f(u) = k(u,U)^T K(U,U)^{-1} f(U), \qquad
k_N(u,u') = k(u,u') - k(u,U)^T K(U,U)^{-1} k(u',U),
\end{align}
where $k(u,U) = [k(u,u^1), \dots, k(u,u^N)] \in \mathbb{R}^{N}$ and $K(U,U) \in \mathbb{R}^{N \times N}$ is the matrix with $ij^\mathrm{th}$ entry equal to $k(u^i,u^j)$ \cite{rasmussen_williams}.
There are several points to note about the predictive mean $m_N^f$ in \eqref{eq:pred_eq}. Firstly, $m_N^f$ is a linear combination of the function evaluations $f(U)$, and hence a linear predictor. It is in fact the {\em best linear predictor} \cite{stein}, in the sense that it is the linear predictor with the smallest mean square error. Secondly, $m_N^f$ interpolates the function $f$ at the design points $U$, since the vector $k(u^n,U)$ is the $n^\mathrm{th}$ row of the matrix $K(U,U)$. In other words, we have $m_N^f(u^n) = f(u^n)$, for all $n=1,\dots,N$.
Finally, we remark that $m_N^f$ is a linear combination of kernel evaluations,
\begin{equation*}
m_N^f(u) = \sum_{n=1}^N \alpha_n k(u,u^n),
\end{equation*}
where the vector of coefficients is given by $\alpha = K(U,U)^{-1} f(U)$.
Concerning the predictive covariance $k_N$, we note that $k_N(u,u) < k(u,u)$ for all $u \in X$, since $K(U,U)^{-1}$ is positive definite. Furthermore, we also note that $k_N(u^n,u^n) = 0$, for $n=1, \dots, N$, since $k(u^n,U)^T \; K(U,U)^{-1} \; k(u^n,U) = k(u^n,u^n)$.
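The predictive equations \eqref{eq:pred_eq} are straightforward to implement. The sketch below (an illustration only, in Python, assuming NumPy and a covariance function such as \texttt{matern\_cov} above; the small nugget added to $K(U,U)$ is our choice, made purely for numerical stability, and is not part of \eqref{eq:pred_eq}) returns the predictive mean $m_N^f$ and the predictive variance $k_N(u,u)$.
\begin{verbatim}
import numpy as np

def gp_condition(kern, U, fU, nugget=1e-10):
    """Zero-mean predictive mean m_N^f(u) and variance k_N(u,u)."""
    N = len(U)
    K = np.array([[kern(U[i], U[j]) for j in range(N)] for i in range(N)])
    K += nugget * np.eye(N)          # jitter for numerical stability (our choice)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.asarray(fU)))  # K^{-1} f(U)

    def mean(u):
        kU = np.array([kern(u, U[j]) for j in range(N)])
        return kU @ alpha            # k(u,U)^T K(U,U)^{-1} f(U)

    def var(u):
        kU = np.array([kern(u, U[j]) for j in range(N)])
        w = np.linalg.solve(L.T, np.linalg.solve(L, kU))
        return kern(u, u) - kU @ w   # k(u,u) - k(u,U)^T K(U,U)^{-1} k(u,U)

    return mean, var
\end{verbatim}
In this notation, \texttt{mean} interpolates the data, i.e. \texttt{mean(U[n])} returns $f(u^n)$ up to the jitter term, and \texttt{var(U[n])} is (numerically) zero, in agreement with the properties of $m_N^f$ and $k_N$ discussed above.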
For stationary covariance functions $k(u,u') = k(\|u-u'\|)$, the predictive mean is a radial basis function interpolant of $f$, and we can make use of results from the radial basis function literature to investigate the behaviour of $m_N^f$ and $k_N$ as $N \rightarrow \infty$. Before we do this in subsection \ref{ssec:gp_sk}, we recall some results on native spaces (also known as reproducing kernel Hilbert spaces) in subsection \ref{ssec:gp_native}.
\subsection{Native spaces of Mat\'ern kernels}\label{ssec:gp_native}
We recall the notion of the reproducing kernel Hilbert space corresponding to the kernel $k$, usually referred to as the native space of $k$ in the radial basis function literature.
\begin{definition}\label{def:rkhs} A Hilbert space $H_k$ of functions $f: X \rightarrow \mathbb{R}$, with inner product $\langle \cdot, \cdot \rangle_{H_k}$, is called the {\em reproducing kernel Hilbert space (RKHS)} corresponding to a symmetric, positive definite kernel $k : X \times X \rightarrow \mathbb{R}$ if
\begin{itemize}
\item[i)] for all $u \in X$, $k(u, u')$, as a function of its second argument, belongs to $H_k$,
\item[ii)] for all $u \in X$ and $f \in H_k$, $\langle f, k(u, \cdot) \rangle_{H_k} = f(u)$.
\end{itemize}
\end{definition}
By the Moore-Aronszajn Theorem \cite{aronszajn50}, a unique RKHS exists for each symmetric, positive definite kernel $k$. Furthermore, this space can be constructed using Mercer's Theorem \cite{mercer09}, and it is equal to the Cameron-Martin space \cite{bogachev} of the covariance operator $C$ with kernel $k$.
For covariance kernels of Mat\'ern type, the native space is isomorphic to a Sobolev space \cite{wendland,sss13}.
\begin{proposition}\label{prop:native_matern} Let $k_{\nu,\lambda,\sigma_k^2}$ be a Mat\'ern covariance kernel as defined in \eqref{eq:mat_cov}. Then the native space $H_{k_{\nu,\lambda,\sigma_k^2}}$ is equal to the Sobolev space $H^{\nu+K/2}(X)$ as a vector space, and the native space norm and the Sobolev norm are equivalent.
\end{proposition}
Native spaces for more general kernels, including non-stationary kernels, are analysed in \cite{wendland}. For stationary kernels, the native space can generally be characterised by the rate of decay of the Fourier transform of the kernel. The native space of the Gaussian kernel \eqref{eq:gauss_cov}, for example, consists of functions whose Fourier transform decays exponentially, and is hence strictly contained in the space of analytic functions.
Proposition \ref{prop:native_matern} shows that as a vector space, the native space of the Mat\'ern kernel $k_{\nu,\lambda,\sigma_k^2}$ is fully determined by the smoothness parameter $\nu$. The parameters $\lambda$ and $\sigma_k^2$ do, however, influence the constants in the norm equivalence of the native space norm and the standard Sobolev norm.
\subsection{Radial basis function interpolation}\label{ssec:gp_sk}
For stationary covariance functions $k(u,u') = k(\|u-u'\|)$, the predictive mean is a radial basis function interpolant of $f$. In fact, it is the minimum norm interpolant \cite{rasmussen_williams},
\begin{equation}\label{eq:mean_min}
m_N^f = \argmin_{g \in H_k \; : \; g(U) = f(U)} \|g\|_{H_k}.
\end{equation}
Given the set of design points $U = \{u^n\}_{n=1}^N \subseteq X$, we define the {fill distance} $h_U$, {separation radius} $q_U$ and {mesh ratio} $\rho_U$ by
\begin{equation*}
h_{U} := \sup_{u \in X} \inf_{u^n \in U} \|u-u^n\|, \qquad q_U := \frac{1}{2} \min_{i \neq j} \|u^j - u^i\|, \qquad \rho_U := \frac{h_U}{q_U} \geq 1.
\end{equation*}
The fill distance is the maximum distance any point in $X$ can be from $U$, and the separation radius is half the smallest distance between any two distinct points in $U$. The mesh ratio provides a measure of how uniformly the design points $U$ are distributed in $X$.
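The quantities $h_U$, $q_U$ and $\rho_U$ are easily computed, or in the case of $h_U$ approximated, numerically. The following Python sketch is an illustration only; it assumes $X=[0,1]^K$, NumPy and SciPy, and approximates the supremum in the fill distance by a maximum over a dense random candidate set.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def design_quantities(U, candidates):
    """Approximate fill distance h_U, exact separation radius q_U,
    and the resulting mesh ratio rho_U = h_U / q_U."""
    U = np.atleast_2d(U)
    D = cdist(candidates, U)      # distances from candidates to design points
    h_U = D.min(axis=1).max()     # sup over candidates of inf over U
    DU = cdist(U, U)
    np.fill_diagonal(DU, np.inf)
    q_U = 0.5 * DU.min()          # half the smallest pairwise distance
    return h_U, q_U, h_U / q_U

# example: a uniform 5 x 5 tensor grid in X = [0,1]^2
g = np.linspace(0.0, 1.0, 5)
U = np.array([[x, y] for x in g for y in g])
cand = np.random.default_rng(0).uniform(size=(20000, 2))
print(design_quantities(U, cand))
\end{verbatim}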
We have the following theorem on the convergence of $m_N^f$ to $f$ \cite{wendland,nww05,nww06}.
\begin{proposition}\label{prop:mean_conv} Suppose $X \subseteq \mathbb R^K$ is a bounded, Lipschitz domain that satisfies an interior cone condition, and the symmetric positive definite kernel $k$ is such that $H_k$ is isomorphic to the Sobolev space $H^\tau(X)$, with $\tau = n + r$, $n \in \mathbb N$, $n > K/2$ and $0 \leq r < 1$. Suppose $m_N^f$ is given by \eqref{eq:pred_eq}. If $f \in H^\tau(X)$, then there exists a constant $C$, independent of $f$, $U$ and $N$, such that
\[
\| f - m_N^f\|_{H^\beta(X)} \leq C h_U^{\tau - \beta} \|f\|_{H^\tau(X)}, \qquad \text{for any } \beta \leq \tau,
\]
for all sets $U$ with $h_U$ sufficiently small.
\end{proposition}
Proposition \ref{prop:mean_conv} assumes that the function $f$ is in the RKHS of the kernel $k$. Convergence estimates for a wider class of functions can be obtained using interpolation in Sobolev spaces \cite{nww06}.
\begin{proposition}\label{prop:mean_conv_int} Suppose $X \subseteq \mathbb R^K$ is a bounded, Lipschitz domain that satisfies an interior cone condition, and the symmetric positive definite kernel $k$ is such that $H_k$ is isomorphic to the Sobolev space $H^\tau(X)$. Suppose $m_N^f$ is given by \eqref{eq:pred_eq}. If $f \in H^{\tilde \tau}(X)$, for some $\tilde \tau \leq \tau$, $\tilde \tau = n + r$, $n \in \mathbb N$, $n > K/2$ and $0 \leq r < 1$, then there exists a constant $C$, independent of $f$, $U$ and $N$, such that
\[
\| f - m_N^f\|_{H^\beta(X)} \leq C h_U^{\tilde \tau - \beta} \rho_U^{\tau - \beta}\|f\|_{H^{\tilde \tau}(X)}, \qquad \text{for any } \beta \leq \tilde \tau,
\]
for all sets $U$ with $h_U$ and $\rho_U$ sufficiently small.
\end{proposition}
{ We would like to point out here that in practice, it is much more informative to obtain convergence rates in terms of the number of design points $N$ rather than their associated fill distance $h_U$. This is of course possible in general, but the precise relation between $N$ and $h_U$ will depend on the specific choice of design points $U$. For uniform tensor grids $U$, the fill distance $h_U$ is of the order $N^{-1/K}$ (cf section \ref{sec:num}). This suggests a strong dependence on the input dimension $K$ of the convergence rate in terms of the number of design points $N$.}
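{As a simple illustration of this relation (the short calculation is included here for the reader's convenience), consider the uniform tensor grid on $X=[0,1]^K$ with $n$ points per dimension, so that $N=n^K$. The point of $X$ furthest from $U$ is the centre of a grid cell, and hence
\[
h_U=\frac{\sqrt{K}}{2}\,\frac{1}{n-1},
\]
which is of the order $N^{-1/K}$. A convergence rate of order $h_U^{\tau}$ therefore corresponds to a rate of order $N^{-\tau/K}$ in terms of the number of design points.}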
Convergence of the predictive variance $k_N(u,u)$ follows under the assumptions of Proposition \ref{prop:mean_conv} or Proposition \ref{prop:mean_conv_int} using the relation in Proposition \ref{prop:predvar_sup} below. This was already noted, {without proof,} in \cite{sss13};
we give a proof here for completeness.
\begin{proposition}\label{prop:predvar_sup} Suppose $m_N^f$ and $k_N$ are given by \eqref{eq:pred_eq}. Then
\[
k_N(u,u)^{\frac{1}{2}} = \sup_{\|g\|_{H_k}=1} | g(u) - m^g_N(u)|.
\]
\end{proposition}
\begin{proof} For any $u \in X$, we have
\begin{align*}
\sup_{\|g\|_{H_k}=1} | g(u) - m^g_N(u)| &= \sup_{\|g\|_{H_k}=1} \Big| g(u) - \sum_{j=1}^N (k(u,U)^T K(U,U)^{-1} )_j g(u^j) \Big| \\
&= \sup_{\|g\|_{H_k}=1} \Big|\left \langle g, k(\cdot,u) \right\rangle_{H_k} - \sum_{j=1}^N (k(u,U)^T K(U,U)^{-1} )_j \langle g, k(\cdot,u^j) \rangle_{H_k} \Big| \\
&= \sup_{\|g\|_{H_k}=1} \Big|\langle g, k(\cdot,u) - \sum_{j=1}^N (k(u,U)^T K(U,U)^{-1} )_j k(\cdot,u^j) \rangle_{H_k} \Big| \\
&= \| k(\cdot,u) - k(\cdot,U)^T K(U,U)^{-1} k(u,U)\|_{H_k}.
\end{align*}
The final equality follows from the Cauchy-Schwarz inequality, which becomes an equality when the two functions considered are linearly dependent.
By Definition \ref{def:rkhs}, we then have
\begin{align*}
&\| k(\cdot,u) - k(\cdot,U)^T K(U,U)^{-1} k(u,U)\|_{H_k}^2 \\
& \qquad = \langle k(\cdot,u) - k(\cdot,U)^T K(U,U)^{-1} k(u,U), k(\cdot,u) - k(\cdot,U)^T K(U,U)^{-1} k(u,U) \rangle_{H_k} \\
& \qquad = \langle k(\cdot,u) , k(\cdot,u) \rangle_{H_k} - 2 \langle k(\cdot,u) , k(\cdot,U)^T K(U,U)^{-1} k(u,U)\rangle_{H_k} \\
& \qquad \qquad + \langle k(\cdot,U)^T K(U,U)^{-1} k(u,U), k(\cdot,U)^T K(U,U)^{-1} k(u,U) \rangle_{H_k} \\
& \qquad = k(u,u) - 2 k(u,U)^T K(U,U)^{-1} k(u,U) + k(u,U)^T K(U,U)^{-1} k(u,U) \\
&\qquad = k_N(u,u).
\end{align*}
{The identity which leads to the third term in the penultimate line
uses the fact that $\langle k(\cdot,u'), k(\cdot,u) \rangle_{H_k} = k(u,u')$,
for any $u,u' \in X.$ If $\ell(u)=K(U,U)^{-1}k(u,U)$ then
\begin{align*}
\langle k(\cdot,U)^T K(U,U)^{-1} k(u,U), k(\cdot,U)^T K(U,U)^{-1} k(u,U) \rangle_{H_k}&=\sum_{j,k} \ell_j(u) \langle k(\cdot,u^j),k(\cdot,u^k)\rangle_{H_k}\ell_k(u)\\
&= \sum_{j,k} \ell_j(u) k(u^j,u^k)\ell_k(u)\\
&=\ell(u)^TK(U,U) \ell(u)\\
&= k(u,U)^T K(U,U)^{-1} k(u,U)
\end{align*}
as required.}
This completes the proof.
\end{proof}
{The second string of equalities, appearing in the middle part of
the proof of Proposition \ref{prop:predvar_sup}, might appear counter-intuitive at first glance,
in that the left-most quantity is a squared norm of quantities which scale like $k$, whilst the
right-most quantity scales like $k$ itself. However, the space $H_k$ itself
depends on the kernel $k$, and its norm scales inversely with $k$,
so that the identity is indeed dimensionally correct.}
\begin{remark}{\em ({\em Exponential convergence for the Gaussian kernel}) The RKHS corresponding to the Gaussian kernel \eqref{eq:gauss_cov} is no longer isomorphic to a Sobolev space; it is contained in $H^\tau(X)$, for any $\tau < \infty$. For functions $f$ in this RKHS, Gaussian process regression with the Gaussian kernel converges exponentially in the fill distance $h_U$. For more details, see \cite{wendland}.}
\end{remark}
\begin{remark}\label{rem:mean}{\em ({\em Regression with non-zero mean}) If in \eqref{eq:gp} we use a non-zero mean $m(\cdot)$, the formula for the predictive mean $m_N^f$ changes to
\begin{equation}\label{eq:pred_eq_mean}
m_N^f(u) = m(u) + k(u,U)^T K(U,U)^{-1} (f(U) - m(U)),
\end{equation}
where $m(U) := [m(u^1), \dots, m(u^N)] \in \mathbb{R}^N$. The predictive covariance $k_N(u,u')$ is as in \eqref{eq:pred_eq}. As in the case $m \equiv 0$, we have $m_N^f(u^n) = f(u^n)$, for $n=1, \dots, N$, and $m_N^f$ is an interpolant of $f$. If $m \in H_k$, then $m_N^f$ given by \eqref{eq:pred_eq_mean} is also in $H_k$, and the proof techniques in \cite{nww05,nww06} can be applied. The conclusions of Propositions \ref{prop:mean_conv} and \ref{prop:mean_conv_int} then hold, with the factor $\|f\|$ in the error bounds replaced by $\|f\| + \|m\|$.
}
\end{remark}
\section{Approximation of the Bayesian posterior distribution}\label{sec:gp_app}
In this {section}, we analyse the error introduced in the posterior distribution $\mu^y$ when we use a Gaussian process emulator to approximate the parameter-to-observation map $\mathcal G$ or the negative log-likelihood $\Phi$. The aim is to show convergence, in a suitable sense, of the approximate posterior distributions to the true posterior distribution as the number of observations $N$ tends to infinity. For a given approximation $\mu^{y,N}$ of the posterior distribution {$\mu^y$}, we will focus on bounding the Hellinger distance \cite{stuart10} between the two distributions, which is defined as
\[
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N}) = \left( \frac{1}{2} \Large{\int}_{X} \left(\sqrt{\frac{d \mu^y}{d \mu_0}} - \sqrt{\frac{d \mu^{y,N}}{d \mu_0}} \right)^2 d \mu_0 \right)^{1/2}.
\]
As proven in \cite[Lemma 6.12 and 6.14]{ds15}, the Hellinger distance provides a bound for the Total Variation distance
\[
d_{\mbox {\tiny{\rm TV}}}(\mu^y, \mu^{y,N}) = \frac{1}{2} \sup_{\|f\|_\infty \leq 1} \left| \mathbb{E}_{\mu^y}(f) - \mathbb{E}_{\mu^{y,N}}(f) \right| \leq \sqrt{2} \; d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N}),
\]
and for $f \in L^2_{\mu^y}(X) \cap L^2_{\mu^{y,N}}(X)$, the Hellinger distance also provides a bound on the error in expected values
\[
\left| \mathbb{E}_{\mu^y}(f) - \mathbb{E}_{\mu^{y,N}}(f) \right| \leq 2 (\mathbb{E}_{\mu^y}(f^2) + \mathbb{E}_{\mu^{y,N}}(f^2))^{1/2} \; d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N}).
\]
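The Hellinger distance between $\mu^y$ and an approximation $\mu^{y,N}$ with negative log-likelihood $\Phi_N$ and normalising constant $Z_N$ can itself be estimated by sampling from the prior, since expanding the square gives $d_{\mbox {\tiny{\rm Hell}}}^2(\mu^y,\mu^{y,N}) = 1 - \mathbb{E}_{\mu_0}\big(\exp(-\tfrac{1}{2}(\Phi+\Phi_N))\big)/\sqrt{Z Z_N}$. The Python sketch below is an illustration only; it assumes NumPy, that $\Phi$ and $\Phi_N$ are available as callables, and that $\mu_0$ can be sampled, and it is not part of the analysis that follows.
\begin{verbatim}
import numpy as np

def hellinger_mc(Phi, Phi_N, prior_sampler, M=100000, seed=0):
    """Monte Carlo estimate of d_Hell(mu^y, mu^{y,N}) using prior samples."""
    rng = np.random.default_rng(seed)
    u = prior_sampler(rng, M)                       # M samples from mu_0
    a = np.exp(-np.array([Phi(ui) for ui in u]))    # exp(-Phi(u_i))
    b = np.exp(-np.array([Phi_N(ui) for ui in u]))  # exp(-Phi_N(u_i))
    Z, Z_N = a.mean(), b.mean()                     # estimates of Z and Z_N
    h2 = 1.0 - np.mean(np.sqrt(a * b)) / np.sqrt(Z * Z_N)
    return np.sqrt(max(h2, 0.0))
\end{verbatim}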
Depending on how we make use of the predictive process $\mathcal G_N$ or $\Phi_N$ to approximate the Radon-Nikodym derivative $\frac{\mathrm d \mu^y}{\mathrm d \mu_0}$, we obtain different approximations to the posterior distribution $\mu^y$. We will distinguish between approximations based solely on the predictive mean, and approximations that make use of the full predictive process.
\subsection{Approximation based on the predictive mean}
Using simply the predictive mean of a Gaussian process emulator of the parameter-to-observation map $\mathcal G$ or the negative log-likelihood $\Phi$, we can define the approximations $\mu^{y,N, \mathcal G}_\mathrm{mean}$ and $\mu^{y,N, \Phi}_\mathrm{mean}$, given by
\begin{align*}
\frac{d\mu^{y,N, \mathcal G}_\mathrm{mean}}{d\mu_0}(u) &= \frac{1}{Z_{N, \mathcal G}^\mathrm{mean}} \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big), \\
Z_{N, \mathcal G}^\mathrm{mean} &= \mathbb{E}_{\mu_0}\Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big)\Big), \\
\frac{d\mu^{y,N, \Phi}_\mathrm{mean}}{d\mu_0}(u) &= \frac{1}{Z_{N, \Phi}^\mathrm{mean}} \exp\big(- m_N^\Phi(u)\big), \\
Z_{N, \Phi}^\mathrm{mean} &= \mathbb{E}_{\mu_0}\Big(\exp\big(-m_N^\Phi(u)\big)\Big),
\end{align*}
{where $m_N^\mathcal G(u) = [m_N^{\mathcal G^1}(u), \dots, m_N^{\mathcal G^J}(u)] \in \mathbb{R}^J$.} We have the following {lemma concerning} the normalising constants $Z_{N, \mathcal G}^\mathrm{mean}$ and $Z_{N, \Phi}^\mathrm{mean}$, {which is followed
by the main Theorem \ref{thm:hell_mean} and Corollary \ref{cor:rate_mean}
concerning the
approximations $\mu^{y,N, \mathcal G}_\mathrm{mean}, \mu^{y,N, \Phi}_\mathrm{mean}.$}
\begin{lemma}\label{thm:bound_zmean} Suppose $\sup_{u \in X} \| \mathcal G(u) - m^\mathcal G _N(u) \|$ and $\sup_{u \in X} | \Phi(u) - m^\Phi_N(u) |$ converge to 0 as $N$ tends to $\infty$, and assume $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G$. Then there exist positive constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\[
C_1 \leq Z_{N, \mathcal G}^\mathrm{mean} \leq 1 \qquad \text{and} \qquad {C_2^{-1}} \leq Z_{N, \Phi}^\mathrm{mean} \leq {C_2}.
\]
\end{lemma}
\begin{proof}
Let us first consider $Z_{N, \mathcal G}^\mathrm{mean}$. The upper bound follows from a straightforward calculation, since the potential $\frac{1}{2 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2$ is non-negative:
\[
Z_{N, \mathcal G}^\mathrm{mean} = \mathbb{E}_{\mu_0}\Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big)\Big) \leq \mathbb{E}_{\mu_0}(1) = 1.
\]
For the lower bound, we have
\[
Z_{N, \mathcal G}^\mathrm{mean} \geq \mathbb{E}_{\mu_0} \Big(\exp\big(- \frac{1}{2 \sigma_\eta^2} \; \sup_{u \in X} \left\| y - m^\mathcal G _N(u) \right\|^2\big)\Big) = \exp\big(- \frac{1}{2 \sigma_\eta^2} \; \sup_{u \in X} \left\| y - m^\mathcal G _N(u) \right\|^2\big),
\]
since $\int_X \mu_0(\mathrm d u) = 1$. Using the triangle inequality, the assumption $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G$ and the fact that every convergent sequence is bounded, we have
\begin{equation}\label{eq:bound_lemzmean}
\frac{1}{2 \sigma_\eta^2} \sup_{u \in X} \left\| y - m^\mathcal G _N(u) \right\|^2 \leq \frac{1}{2 \sigma_\eta^2} \left( \sup_{u \in X} \| y - \mathcal G(u) \| + \sup_{u \in X} \left\| \mathcal G(u) - m^\mathcal G _N(u) \right\| \right)^2 =: - \ln C_1,
\end{equation}
where $C_1$ is independent of $U$ and $N$.
The proof for $Z_{N, \Phi}^\mathrm{mean}$ is similar. For the upper bound, we use $\int_X \mu_0(\mathrm d u) = 1$ and the triangle inequality to derive
\[
Z_{N, \Phi}^\mathrm{mean} \leq \sup_{u\in X} \exp\big(-m_N^\Phi(u)\big) \leq \exp\big(\sup_{u \in X} |m_N^\Phi(u)|\big) \leq \exp\big( \sup_{u\in X} |\Phi(u)| + \sup_{u\in X} |\Phi(u) - m_N^\Phi(u)| \big).
\]
Since $\sup_{u \in X}|\Phi(u)|$ is bounded when $\sup_{u \in X}\|\mathcal G(u)\|$ is bounded, the fact that every convergent sequence is bounded again gives
\[
\sup_{u\in X} |\Phi(u)| + \sup_{u\in X} |\Phi(u) - m_N^\Phi(u)| =: \ln C_2,
\]
for a constant $C_2$ independent of $U$ and $N$. For the lower bound, we note that since $\int_X \mu_0(\mathrm d u) = 1$,
\[
Z_{N, \Phi}^\mathrm{mean} \geq \mathbb{E}_{\mu_0}\Big(\exp\big(-\sup_{u \in X} |m_N^\Phi(u)|\big)\Big) = \exp\big(-\sup_{u \in X} |m_N^\Phi(u)|\big) \geq C_2^{-1}.
\]
\end{proof}
{ We would like to point out here that the assumptions in Lemma \ref{thm:bound_zmean} can be relaxed to assuming that the sequences $\sup_{u \in X} \| \mathcal G(u) - m^\mathcal G _N(u) \|$ and $\sup_{u \in X} | \Phi(u) - m^\Phi_N(u) |$ are bounded, since this is sufficient to prove the result.}
{We may now prove the desired theorem and corollary concerning
$\mu^{y,N,\mathcal G}_\mathrm{mean}$ and $\mu^{y,N, \Phi}_\mathrm{mean}.$}
\begin{theorem}\label{thm:hell_mean} Under the assumptions of Lemma \ref{thm:bound_zmean}, there exist constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\begin{align*}
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{mean}) & \leq C_1 \left\|\mathcal G - m^\mathcal G _N \right\|_{L^2_{\mu_0}(X; \mathbb{R}^J)}, \\
\text{and} \quad d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N, \Phi}_\mathrm{mean}) &\leq C_2 \left\|\Phi - m^\Phi _N \right\|_{L^2_{\mu_0}(X)}.
\end{align*}
\end{theorem}
\begin{proof}
Let us first consider $\mu^{y,N,\mathcal G}_\mathrm{mean}$. By definition of the Hellinger distance, we have
\begin{align*}
&2 \; d_{\mbox {\tiny{\rm Hell}}}^2(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{mean}) = \int_X \left( \sqrt{\frac{d\mu^y}{d\mu_0}} - \sqrt{\frac{d\mu^{y,N,\mathcal G}_\mathrm{mean}}{d\mu_0}} \right)^2 \mu_0(\mathrm{d}u) \\
&\leq \frac{2}{Z} \int_X \left(\exp\big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{4 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big)\right)^2 \mu_0(\mathrm{d}u) \\
&\qquad + 2 \, Z_{N,\mathcal G}^\mathrm{mean} \left(Z^{-1/2} - (Z_{N, \mathcal G}^\mathrm{mean})^{-1/2} \right)^2 \\
&=: I + II.
\end{align*}
For the first term, we use the local Lipschitz continuity of the exponential function, together with the equality $a^2 - b^2 = (a-b)(a+b)$ and the reverse triangle inequality to bound
\begin{align*}
\frac{Z}{2} \; I &= \int_X \left(\exp\big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{4 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big)\right)^2 \mu_0(\mathrm{d}u) \\
&\leq \int_X \left( \frac{1}{2 \sigma_\eta^2} \left( \left\| y - \mathcal G(u) \right\|^2 - \left\| y - m^\mathcal G _N(u) \right\|^2 \right) \right)^2 \mu_0(\mathrm{d}u) \\
&= \int_X \frac{1}{4 \sigma_\eta^4} \left( \| y - \mathcal G(u)\| + \|y - m^\mathcal G _N(u)\| \right)^2 \left\|\mathcal G(u) - m^\mathcal G _N(u) \right\|^2 \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{4 \sigma_\eta^4} \sup_{u \in X} \left( \| y - \mathcal G(u)\| + \|y - m^\mathcal G _N(u)\| \right)^2 \left\|\mathcal G - m^\mathcal G _N \right\|_{L^2_{\mu_0}(X; \mathbb{R}^J)}^2.
\end{align*}
As in equation \eqref{eq:bound_lemzmean}, the first supremum can be bounded independently of $U$ and $N$, from which it follows that
\[
I \leq C \left\|\mathcal G - m^\mathcal G _N \right\|_{L^2_{\mu_0}(X; \mathbb{R}^J)}^2,
\]
for a constant $C$ independent of $U$ and $N$.
For the second term, a very similar argument, together with {Lemma} \ref{thm:bound_zmean} and Jensen's inequality, shows
\begin{align*}
II &= 2 \, Z_{N,\mathcal G}^\mathrm{mean} \left(Z^{-1/2} - (Z_{N, \mathcal G}^\mathrm{mean})^{-1/2} \right)^2 \\
&\leq 2 \, Z_{N,\mathcal G}^\mathrm{mean} \max(Z^{-3},(Z_{N, \mathcal G}^\mathrm{mean})^{-3}) |Z - Z_{N, \mathcal G}^\mathrm{mean}|^2 \\
&= 2 \, Z_{N,\mathcal G}^\mathrm{mean} \max(Z^{-3},(Z_{N, \mathcal G}^\mathrm{mean})^{-3}) \left(\int_X \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - m^\mathcal G _N(u) \right\|^2\big) \mu_0(\mathrm{d}u)\right)^2 \\
&\leq C \left\|\mathcal G - m^\mathcal G _N \right\|_{L^2_{\mu_0}(X; \mathbb{R}^J)}^2,
\end{align*}
for a constant $C$ independent of $U$ and $N$.
The proof for $\mu^{y,N, \Phi}_\mathrm{mean}$ is similar. We use an identical corresponding splitting of the Hellinger distance $d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N, \Phi}_\mathrm{mean}) \leq I + II$. Using the local Lipschitz continuity of the exponential function,
we have
\begin{align*}
\frac{Z}{2} \; I = \int_X \left(\exp\big(-\Phi(u)\big) - \exp\big(- m^\Phi _N(u)\big)\right)^2 \mu_0(\mathrm{d}u) \leq 2 \left\|\Phi - m^\Phi _N \right\|_{L^2_{\mu_0}(X)}^2.
\end{align*}
Using {Lemma} \ref{thm:bound_zmean} and Jensen's inequality, we furthermore have
\begin{align*}
II &\leq 2 \, Z_{N,\Phi}^\mathrm{mean} \max(Z^{-3},(Z_{N, \Phi}^\mathrm{mean})^{-3}) \left(\int_X \exp\big(-\Phi(u)\big) - \exp\big(- m^\Phi _N(u)\big) \mu_0(\mathrm{d}u)\right)^2 \\
&\leq C \left\|\Phi - m^\Phi _N \right\|_{L^2_{\mu_0}(X)}^2,
\end{align*}
for a constant $C$ independent of $U$ and $N$.
\end{proof}
We remark here that Theorem \ref{thm:hell_mean} does not make any assumptions on the predictive means $m_N^\mathcal G$ and $m_N^\Phi$ other than the requirement that $\sup_{u \in X} \| \mathcal G(u) - m^\mathcal G _N(u) \|$ and $\sup_{u \in X} | \Phi(u) - m^\Phi_N(u) |$ converge to 0 as $N$ tends to $\infty$. Whether the predictive means are defined as in \eqref{eq:pred_eq}, or are derived by alternative approaches to Gaussian process regression \cite{rasmussen_williams}, does not affect the conclusions of Theorem \ref{thm:hell_mean}.
Under Assumption \ref{ass:reg}, we can combine Theorem \ref{thm:hell_mean} with Proposition \ref{prop:mean_conv} (or Proposition \ref{prop:mean_conv_int})
{with $\beta=0$} to obtain error bounds in terms of the fill distance of the design points.
\begin{corollary}\label{cor:rate_mean} Suppose $m_N^\Phi$ and $m_N^{\mathcal G^j}$, $j=1,\dots,J$, are defined as in \eqref{eq:pred_eq}, with Mat\'ern kernel $k=k_{\nu,\lambda,\sigma_k^2}$. Suppose Assumption \ref{ass:reg} { holds with $s=\nu + K/2$,} and the assumptions of Proposition \ref{prop:mean_conv} and Theorem \ref{thm:hell_mean} are satisfied. Then there exist constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\begin{align*}
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{mean}) \leq C_1 h_U^{\nu + K/2}, \quad
\text{and} \quad d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N, \Phi}_\mathrm{mean}) \leq C_2 h_U^{\nu + K/2}.
\end{align*}
\end{corollary}
{If Assumption \ref{ass:reg} holds only for some $s < \nu + K/2$, an analogue of Corollary \ref{cor:rate_mean} can be proved using Proposition \ref{prop:mean_conv_int} with $\beta = 0$. As already discussed in section \ref{ssec:gp_sk}, translating convergence rates in terms of the fill distance $h_U$ into rates in terms of the number of points $N$ typically leads to a strong dependence on the input dimension $K$. For uniform tensor grids $U$, the rates of convergence in $N$ predicted by Corollary \ref{cor:rate_mean} are given in Table \ref{tbl:conv}.}
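Before turning to approximations based on the full predictive process, we illustrate how the mean-based approximation is formed in practice. The following Python sketch is an illustration only: the one-dimensional parameter-to-observation map, the design points, the kernel and all numerical values are made up for the purpose of the example, and $\mu_0$ is taken to be the uniform distribution on $X=[0,1]$.
\begin{verbatim}
import numpy as np

G = lambda u: np.sin(np.pi * u)            # made-up scalar forward map (J = 1)
sigma_eta, y = 0.1, 0.7                    # noise level and a single observation
U = np.linspace(0.0, 1.0, 6)               # N = 6 design points in X = [0,1]
k = lambda a, b: np.exp(-0.5 * (a - b)**2 / 0.2**2)      # Gaussian kernel
K = k(U[:, None], U[None, :]) + 1e-10 * np.eye(len(U))   # K(U,U) plus jitter
alpha = np.linalg.solve(K, G(U))                         # K(U,U)^{-1} G(U)
m_N = lambda u: k(u, U) @ alpha                          # predictive mean m_N^G

def unnorm_mean_post(u):
    """Density of mu^{y,N,G}_mean w.r.t. mu_0, up to Z_{N,G}^mean."""
    return np.exp(-0.5 * (y - m_N(u))**2 / sigma_eta**2)

us = np.random.default_rng(1).uniform(size=50000)        # samples from mu_0
Z_mean = np.mean([unnorm_mean_post(u) for u in us])      # MC estimate of Z
\end{verbatim}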
\subsection{Approximations based on the predictive process}
Alternative to the mean-based approximations considered in the previous section, we now consider approximations to the posterior distribution $\mu^y$ obtained using the full predictive processes $\mathcal G_N$ and $\Phi_N$. {In contrast to the mean, the full Gaussian processes also carry information about the uncertainty in the emulator due to only using a finite number of function evaluations to construct it.}
For the remainder of this section, we denote by $\nu^\mathcal G_N$ the distribution of $\mathcal G_N$ and by $\nu^\Phi_N$ the distribution of $\Phi_N$, {for $N \in \mathbb N \cup \{0\}$}. We note that since the process $\mathcal G_N$ consists of $J$ independent Gaussian processes $\mathcal G_N^j$, the measure $\nu^\mathcal G_N$ is a product measure, $\nu^\mathcal G_N = \prod_{j=1}^J \nu^{\mathcal G^j}_N$. {$\Phi_N$ is a Gaussian process with mean $m_N^\Phi$ and covariance kernel $k_N$, and $\mathcal G_N^j$, for $j=1, \dots, J$, is a Gaussian process with mean $m_N^{\mathcal G^j}$ and covariance kernel $k_N$.} Replacing $\mathcal G$ by $\mathcal G_N$ in \eqref{eq:def_like}, we obtain the approximation $\mu^{y,N,\mathcal G}_\mathrm{sample}$ given by
\begin{equation*}\label{eq:rad_nik_sample}
\frac{d\mu^{y,N,\mathcal G}_\mathrm{sample}}{d\mu_0}(u) = \frac{1}{Z_{N, \mathcal G}^\mathrm{sample}} \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big),
\end{equation*}
where
\[
Z_{N, \mathcal G}^\mathrm{sample}= \mathbb{E}_{\mu_0}\Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big).
\]
Similarly, we define for the predictive process $\Phi_N$ the approximation $\mu^{y,N,\Phi}_\mathrm{sample}$ by
\begin{align*}
\frac{d\mu^{y,N,\Phi}_\mathrm{sample}}{d\mu_0}(u) = \frac{1}{Z_{N, \Phi}^\mathrm{sample}} \exp\big(- \Phi_N(u)\big), \qquad Z_{N, \Phi}^\mathrm{sample} = \mathbb{E}_{\mu_0}\Big(\exp\big(- \Phi_N(u)\big)\Big).
\end{align*}
The measures $\mu^{y,N,\mathcal G}_\mathrm{sample}$ and $\mu^{y,N,\Phi}_\mathrm{sample}$ are random approximations of the deterministic measure $\mu^y.$ { The uncertainty in the posterior distribution introduced in this way can be thought of as representing the uncertainty in the emulator, which in applications can be larger than (or comparable to) the uncertainty present in the observations. A user may want to take this into account to ``inflate'' the variance of the posterior distribution.}
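To illustrate the random nature of $\mu^{y,N,\Phi}_\mathrm{sample}$, the sketch below (an illustration only, with made-up problem data; it discretises $X=[0,1]$ by a grid, takes $\mu_0$ uniform, and draws the predictive process jointly at the grid points) produces a single realisation of the density of $\mu^{y,N,\Phi}_\mathrm{sample}$ with respect to $\mu_0$. Repeating the draw produces different realisations, whose spread reflects the emulator uncertainty discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Phi = lambda u: 0.5 * (0.7 - np.sin(np.pi * u))**2 / 0.1**2   # made-up Phi
U = np.linspace(0.0, 1.0, 6)                                  # design points
k = lambda a, b: np.exp(-0.5 * (a - b)**2 / 0.2**2)           # Gaussian kernel
K = k(U[:, None], U[None, :]) + 1e-10 * np.eye(len(U))
KinvF = np.linalg.solve(K, Phi(U))

ug = np.linspace(0.0, 1.0, 200)                  # grid discretising X
kxU = k(ug[:, None], U[None, :])                 # k(u,U) for all grid points
m_N = kxU @ KinvF                                # predictive mean on the grid
K_N = k(ug[:, None], ug[None, :]) - kxU @ np.linalg.solve(K, kxU.T)

# one draw of Phi_N on the grid and the corresponding realisation of the
# unnormalised density of mu^{y,N,Phi}_sample with respect to mu_0
Phi_N = rng.multivariate_normal(m_N, K_N + 1e-8 * np.eye(len(ug)),
                                check_valid='ignore')
density = np.exp(-Phi_N)
Z_sample = density.mean()      # quadrature for Z since mu_0 is uniform on [0,1]
\end{verbatim}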
Deterministic approximations of the posterior distribution $\mu^y$ can now be obtained by taking the expected value with respect to the predictive processes $\mathcal G_N$ and $\Phi_N$. This results in the marginal approximations
\begin{align*}
\frac{d\mu^{y,N,\mathcal G}_\mathrm{marginal}}{d\mu_0}(u) &= \frac{1}{\mathbb{E}_{\nu_N^\mathcal G}(Z_{N, \mathcal G}^\mathrm{sample})} \mathbb{E}_{\nu_N^\mathcal G}\Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big), \\
\frac{d\mu^{y,N,\Phi}_\mathrm{marginal}}{d\mu_0}(u) &= \frac{1}{\mathbb{E}_{\nu_N^\Phi} (Z_{N, \Phi}^\mathrm{sample})} \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N (u) \big)\Big).
\end{align*}
Note that by Tonelli's Theorem {(\cite{rudin}, a version of Fubini's Theorem for non-negative integrands)}, the measures $\mu^{y,N,\mathcal G}_\mathrm{marginal}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$ are indeed probability measures. { It can be shown that the above approximation of the likelihood is optimal in the sense that it minimises the $L^2$-error \cite{sn16}. In contrast to the approximation based on only the mean of the emulator, this approximation also takes into account the uncertainty of the emulator, although only in an averaged sense. The likelihood in the marginal approximations $\mu^{y,N,\mathcal G}_\mathrm{marginal}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$ involves computing an expectation. Methods from the pseudo-marginal MCMC literature \cite{ar09} could be used within an MCMC method in this context.}
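{We note in passing (an elementary observation, included here for illustration and obtained from the moment generating function of a Gaussian random variable) that for the $\Phi$-emulator the inner expectation is available pointwise in closed form: since $\Phi_N(u) \sim \mathcal N\big(m_N^\Phi(u), k_N(u,u)\big)$, we have
\[
\mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N (u) \big)\Big) = \exp\Big(-m_N^\Phi(u) + \tfrac{1}{2} k_N(u,u)\Big),
\]
so that the density of $\mu^{y,N,\Phi}_\mathrm{marginal}$ can be evaluated without sampling the predictive process.}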
Before proving bounds on the error in the marginal approximations $\mu^{y,N,\mathcal G}_\mathrm{marginal}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$ in section \ref{ssec:gp_app_marg}, and the error in the random approximations $\mu^{y,N,\mathcal G}_\mathrm{sample}$ and $\mu^{y,N,\Phi}_\mathrm{sample}$ in section \ref{ssec:gp_app_rand}, we crucially prove boundedness of the normalising constants $Z_{N, \mathcal G}^\mathrm{sample}$ and $Z_{N, \Phi}^\mathrm{sample}$ in section \ref{ssec:gp_app_z}.
\subsubsection{Moment bounds on $Z_{N, \mathcal G}^\mathrm{sample}$ and $Z_{N, \Phi}^\mathrm{sample}$}\label{ssec:gp_app_z}
Firstly, we recall the following classical results from the theory of Gaussian measures on Banach spaces \cite{daprato_zabczyk,adler}.
\begin{proposition}\label{prop:fernique} {\em (Fernique's Theorem)} Let $E$ be a separable Banach space and $\nu$ a centred Gaussian measure on $(E, \mathcal B(E))$. If $\lambda, r >0$ are such that
\[
\log \left( \frac{1- \nu(f \in E : \|f\|_E \leq r)}{\nu(f \in E : \|f\|_E \leq r)} \right) \leq -1 -32 \lambda r^2,
\]
then
\[
\int_E \exp(\lambda \|f\|_E^2) \nu(\mathrm{d} f) \; \leq \; \exp(16 \lambda r^2) + \frac{e^2}{e^2-1}.
\]
\end{proposition}
{\begin{proposition}\label{prop:borell_tis} {\em (Borell-TIS Inequality\footnote{{The Borell-TIS inequality is named after the mathematicians Borell and Tsirelson, Ibragimov and Sudakov, who independently proved the result.}})} Let $f$ be a scalar, almost surely bounded Gaussian field on a compact domain $T \subseteq \mathbb{R}^K$, with zero mean $\mathbb{E}(f(t)) = 0$ and bounded variance $0 < \sigma^2_f := \sup_{t \in T} \mathbb{V}(f(t)) < \infty$. Then $\mathbb{E}(\sup_{t \in T} f(t)) < \infty$, and for all $r > 0$,
\[
\mathbb P(\sup_{t \in T} f(t) - \mathbb{E}(\sup_{t \in T} f(t)) > r ) \leq \exp(-r^2/(2\sigma_f^2)).
\]
\end{proposition}
\begin{proposition}\label{prop:sud_fern} {\em (Sudakov-Fernique Inequality)} Let $f$ and $g$ be scalar, almost surely bounded Gaussian fields on a compact domain $T \subseteq \mathbb{R}^K$. Suppose $\mathbb{E}((f(t)-f(s))^2) \leq \mathbb{E}((g(t)-g(s))^2)$ and $\mathbb{E}(f(t)) = \mathbb{E}(g(t))$, for all $s,t \in T$. Then
\[
\mathbb{E}(\sup_{t \in T} f(t)) \leq \mathbb{E}(\sup_{t \in T} g(t)).
\]
\end{proposition}
}
{Using these results, we are now ready to prove bounds on moments of $Z_{N, \mathcal G}^\mathrm{sample}$ and $Z_{N, \Phi}^\mathrm{sample}$, similar to those proved in {Lemma} \ref{thm:bound_zmean}.
The reader interested purely in approximation results for the posterior
can simply read the statements of the following two lemmas, and then
proceed directly to subsections \ref{ssec:gp_app_marg} and \ref{ssec:gp_app_rand}.}
{Recall that, as in \eqref{eq:gp}, $\Phi_0$ and $\mathcal G^j_0$ denote the initial Gaussian process models for $\Phi$ and $\mathcal G^j$, respectively, and, as in \eqref{eq:gp_pred}, $\Phi_N$ and $\mathcal G^j_N$ denote the conditioned Gaussian process models for $\Phi$ and $\mathcal G^j$, respectively.}
\begin{lemma}\label{thm:bound_zsample} {Let $X \subseteq \mathbb{R}^K$ be compact.} Suppose $\sup_{u \in X} \left\| \mathcal G(u) - m^\mathcal G _N(u) \right\|$, $\sup_{u \in X} \left| \Phi(u) - m^\Phi _N(u) \right|$ {and $\sup_{u \in X} k_N(u,u)$} converge to 0 as $N$ tends to infinity, and assume $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G < \infty$. {Suppose the assumptions of the Sudakov-Fernique inequality hold, for $g=\Phi_0$ and $f=\Phi_N - m_N^{\Phi}$, and for $g=\mathcal G^j_0$ and $f=\mathcal G^j_N - m_N^{\mathcal G^j}$, for $j \in \{1,\dots,J\}.$} Then, for any $1 \leq p < \infty$, there exist positive constants $C_1$ and $C_2$, independent of $U$ and $N$, such that {for all $N$ sufficiently large}
\[
C_1^{-1} \leq \mathbb{E}_{\nu_N^\mathcal G} \big((Z_{N,\mathcal G}^\mathrm{sample})^p\big) \leq 1, \qquad \text{and} \qquad 1 \leq \mathbb{E}_{\nu_N^\mathcal G} \big((Z_{N,\mathcal G}^\mathrm{sample})^{-p}\big) \leq C_1.
\]
and
\[
C_2^{-1} \leq \mathbb{E}_{\nu_N^\Phi} \big((Z_{N,\Phi}^\mathrm{sample})^p\big) \leq C_2, \qquad \text{and} \qquad C_2^{-1} \leq \mathbb{E}_{\nu_N^\Phi} \big((Z_{N,\Phi}^\mathrm{sample})^{-p}\big) \leq C_2.
\]
\end{lemma}
\begin{proof}
We start with $Z_{N, \mathcal G}^\mathrm{sample}$. Since the potential $\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2$ is non-negative and $\int_X \mu_0(\mathrm d u) = 1 = \int_{C^0(X; \mathbb{R}^J)} \nu^\mathcal G_N (\rm{d} \mathcal G_N)$, we have for any $1 \leq p < \infty$,
\[
\mathbb{E}_{\nu_N^\mathcal G}((Z_{N, \mathcal G}^\mathrm{sample})^p) = \int_{C^0(X; \mathbb{R}^J)} \left(\int_X \exp\big(- \frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big) \mu_0(\rm{d} u) \right)^p \nu^\mathcal G_N (\rm{d} \mathcal G_N) \leq 1.
\]
From Jensen's inequality, it then follows that
\[
\mathbb{E}_{\nu_N^\mathcal G}((Z_{N, \mathcal G}^\mathrm{sample})^{-p}\big) \geq \big(\mathbb{E}_{\nu_N^\mathcal G}((Z_{N, \mathcal G}^\mathrm{sample})^p)\big)^{-1} \geq 1.
\]
To determine $C_1$, we use the triangle inequality to bound, for any $1 \leq p < \infty$,
\begin{align*}
&\mathbb{E}_{\nu_N^\mathcal G}\big((Z_{N,\mathcal G}^\mathrm{sample})^{-p}\big) = \int_{C^0(X; \mathbb{R}^J)} \left(\int_X \exp\big(- \frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N(u) \right\|^2\big) \mu_0(\rm{d} u)\right)^{-p} \nu_N^\mathcal{G} (\rm{d} \mathcal G_N) \\
&\leq \int_{C^0(X; \mathbb{R}^J)} \big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \sup_{u \in X} \left\| y - \mathcal G_N(u) \right\|^2\big)\big)^{-p} \nu_N^\mathcal G (\rm{d} \mathcal G_N) \\
&= \int_{C^0(X; \mathbb{R}^J)} \exp\big(\frac{p}{2 \sigma_\eta^2} \sup_{u \in X} \left\| y - \mathcal G_N(u) \right\|^2\big) \nu_N^\mathcal G (\rm{d} \mathcal G_N) \\
&\leq \exp\left(\frac{\sup_{u \in X} \|y - m_N^\mathcal G(u)\|^2}{2 p^{-1} \sigma_\eta^2} \right) \int_{C^0(X; \mathbb{R}^J)} \exp\left(\frac{\sup_{u \in X} \|\mathcal G_N(u) - m_N^\mathcal G(u)\|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^\mathcal G_N (\rm{d} \mathcal G_N).
\end{align*}
The first factor {can} be bounded independently of $U$ and $N$ using the triangle inequality, together with $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G$ and $\sup_{u \in X} \left\| \mathcal G(u) - m^\mathcal G _N(u) \right\| \rightarrow 0$ as $N \rightarrow \infty$. For the second factor, we use Fernique's Theorem (Proposition \ref{prop:fernique}). First, we note that {(using independence)}
\begin{align*}
&\int_{C^0(X; \mathbb{R}^J)} \exp\left(\frac{\sup_{u \in X} \|\mathcal G_N(u) - m_N^\mathcal G(u)\|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^\mathcal G_N (\rm{d} \mathcal G_N) \\
&= \int_{C^0(X; \mathbb{R}^J)} \exp\left(\frac{\sup_{u \in X} \sum_{j=1}^J |{\mathcal G^j}_N(u) - m_N^{\mathcal G^j}(u)|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^\mathcal G_N (\rm{d} \mathcal G_N) \\
&\leq \int_{C^0(X; \mathbb{R}^J)} \exp\left(\sum_{j=1}^J \frac{\sup_{u \in X} |{\mathcal G^j}_N(u) - m_N^{\mathcal G^j}(u)|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^\mathcal G_N (\rm{d} \mathcal G_N) \\
&= \int_{C^0(X; \mathbb{R}^J)} \prod_{j=1}^J \exp\left( \frac{\sup_{u \in X} |{\mathcal G^j}_N(u) - m_N^{\mathcal G^j}(u)|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^\mathcal G_N (\rm{d} \mathcal G_N) \\
&= \prod_{j=1}^J \int_{C^0(X)} \exp\left( \frac{\sup_{u \in X} |{\mathcal G^j}_N(u) - m_N^{\mathcal G^j}(u)|^2}{2 p^{-1} \sigma_\eta^2} \right) \nu^{\mathcal G^j}_N (\rm{d} {\mathcal G^j_N}).
\end{align*}
{It remains to show that, for $N$ sufficiently large, the assumptions of Fernique's Theorem hold for $\lambda = p \sigma_\eta^{-2}/2$ and a value of $r$ independent of $U$ and $N$, for $\nu$ equal to the push-forward of $\nu_N^{\mathcal G^j}$ under the map $T(f) = f - m_N^{\mathcal G^j}$.
Denote by $B^{\mathcal G^j}_{N,r} \subset C^0(X)$ the set of all functions $f$ such that $\|f - m^{\mathcal G^j}_N\|_{C^0(X)} \leq r$, for some fixed $r > 0$ and $1 \leq j \leq J$. Let $\overline{\mathcal G^j_N} = \mathcal G^j_N - m^{\mathcal G^j}_N$. By the Borell-TIS Inequality, we have for all $r > \mathbb{E}(\sup_{u \in X} \overline{\mathcal G^j_N}(u))$,
\begin{equation*}
\nu_N^{\mathcal G^j}(\mathcal G^j_N : \sup_{u \in X} \overline{\mathcal G^j_N}(u) > r) \leq \exp\left(-\frac{\big(r - \mathbb{E}(\sup_{u \in X} \overline{\mathcal G^j_N}(u))\big)^2}{2 \sigma_{N}^2} \right),
\end{equation*}
where $\sigma_{N}^2 := \sup_{u \in X} k_N(u,u)$. By assumption, $\mathbb{E}_{\nu_N^{\mathcal G^j}}((\overline{\mathcal G^j_N}(u) - \overline{\mathcal G^j_N}(u'))^2) \leq \mathbb{E}_{\nu_0^{\mathcal G^j}}((\mathcal G^j_0(u) - \mathcal G^j_0(u'))^2)$, and so $\mathbb{E}(\sup_{u \in X} \overline{\mathcal G^j_N}(u)) \leq \mathbb{E}(\sup_{u \in X} \mathcal G^j_0(u))$, by the Sudakov-Fernique Inequality. We can hence choose $r > \mathbb{E}(\sup_{u \in X} \mathcal G^j_0(u))$, independent of $U$ and $N$, such that the bound
\begin{equation*}
\nu_N^{\mathcal G^j}(\mathcal G^j_N : \sup_{u \in X} \overline{\mathcal G^j_N}(u) > r) \leq \exp\left(-\frac{\big(r - \mathbb{E}(\sup_{u \in X} \mathcal G^j_0(u))\big)^2}{2 \sigma_{N}^2} \right),
\end{equation*}
holds for all $N \in \mathbb{N}$.
By assumption we have $\sigma_{N}^2 \rightarrow 0$ as $N \rightarrow \infty$, and by the symmetry of Gaussian measures, we hence have $\nu_N^{\mathcal G^j}(B^{\mathcal G^j}_{N,r}) \rightarrow 1$ as $N \rightarrow \infty$, for all $r > \mathbb{E}(\sup_{u \in X} \mathcal G^j_0(u))$. For $N = N(p)$ sufficiently large, the inequality
\[
\log \left( \frac{1- \nu_N^{\mathcal G^j}(B^{\mathcal G^j}_{N,r})}{\nu_N^{\mathcal G^j}(B^{\mathcal G^j}_{N,r})} \right) \leq -1 -32 \lambda r^2,
\]
in the assumptions of Fernique's Theorem is then satisfied, for $\lambda = p \sigma_\eta^{-2}/2$ and $r > \mathbb{E}(\sup_{u \in X} \mathcal G^j_0(u))$, both independent of $U$ and $N$. Hence, $\mathbb{E}_{\nu_N^\mathcal G}\big((Z_{N, \mathcal G}^\mathrm{sample})^{-p}\big) \leq C_1(p)$, for a constant $C_1(p) < \infty$ independent of $U$ and $N$.}
From Jensen's inequality, it then finally follows that
\[
\mathbb{E}_{\nu_N^\mathcal G}((Z_{N, \mathcal G}^\mathrm{sample})^{p}\big) \geq \big(\mathbb{E}_{\nu_N^\mathcal G}((Z_{N, \mathcal G}^\mathrm{sample})^{-p})\big)^{-1} \geq C_1^{-1}(p).
\]
The proof for $Z_{N, \Phi}^\mathrm{sample}$ is similar.
Using $\int_X \mu_0(\mathrm d u) = 1$ and the triangle inequality, we have
\begin{align*}
\mathbb{E}_{\nu_N^\Phi}\big((Z_{N,\Phi}^\mathrm{sample})^p\big) &= \int_{C^0(X)} \left(\int_X \exp\big(- \Phi_N(u)\big) \mu_0(\rm{d} u) \right)^p \nu^\Phi_N (\rm{d} \Phi_N) \\
&\leq \int_{C^0(X)} \exp\big( p \sup_{u \in X} |\Phi_N(u)| \big) \nu^\Phi_N (\rm{d} \Phi_N) \\
&\leq \exp\big( p \sup_{u \in X} |m_N^\Phi(u)| \big) \int_{C^0(X)} \exp\big( p \sup_{u \in X} |\Phi_N(u) - m_N^\Phi(u)| \big) \nu^\Phi_N (\rm{d} \Phi_N).
\end{align*}
The first factor can be bounded independently of $U$ and $N$, since $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G$ and $\sup_{u \in X} \left| \Phi(u) - m^\Phi _N(u) \right|$ converges to 0 as $N \rightarrow \infty$. {The second factor can be bounded by Fernique's Theorem. Using the same proof technique as above, we can show that $\nu^\Phi_N(B^\Phi_{N,r}) \rightarrow 1$ as $N \rightarrow \infty$ for all $r > \mathbb{E}(\sup_{u \in X} \Phi_0(u))$, where $B^\Phi_{N,r} \subset C^0(X)$ denotes the set of all functions $f$ such that $\|f - m^\Phi_N\|_{C^0(X)} \leq r$.
Hence, it is possible to choose $r >0$, independent of $U$ and $N$, such that the assumptions of Fernique's Theorem hold for $\nu$ equal to the push-forward of $\nu_N^\Phi$ under the map $T(f) = f - m_N^\Phi$, for some $\lambda > 0$ also independent of $U$ and $N$. By Young's inequality, we have
\[
\exp\big( p \sup_{u \in X} |\Phi_N(u) - m_N^\Phi(u)| \big) \leq \exp\big( \lambda \sup_{u \in X} |\Phi_N(u) - m_N^\Phi(u)|^2 + p^2/4 \lambda \big),
\]
and it follows that $\mathbb{E}_\omega\big((Z_{N,\Phi}^\mathrm{sample})^p\big) \leq C_2(p)$, for a constant $C_2(p) < \infty$ independent of $U$ and $N$, for any $1 \leq p < \infty$.}
Furthermore, we note
\begin{align*}
\mathbb{E}_{\nu_N^\Phi}\big((Z_{N,\Phi}^\mathrm{sample})^{-p}\big) \leq \int_{C^0(X; \mathbb{R})} \exp\big(p \sup_{u \in X} |\Phi_N(u)|\big) \nu_N^\Phi (\rm{d} \Phi_N) \leq C_2(p).
\end{align*}
By Jensen's inequality, we finally have $\mathbb{E}_{\nu_N^\Phi}\big((Z_{N,\Phi}^\mathrm{sample})^{-p}\big) \geq C_2(p)^{-1} $ and $\mathbb{E}_{\nu_N^\Phi} \big((Z_{N,\Phi}^\mathrm{sample})^p\big) \geq C_2(p)^{-1}$.
\end{proof}
{ We would like to point out here that the assumption that $\sup_{u \in X} k_N(u,u)$ converges to 0 as $N$ tends to infinity in Lemma \ref{thm:bound_zsample} is crucial in order to enable the choice of any $1 \leq p < \infty$. This is related to the fact that the parameter $\lambda$ needs to be sufficiently small compared to $\sup_{u \in X} k_N(u,u)$ in order to satisfy the assumptions of Fernique's Theorem.}
{In {Lemma} \ref{thm:bound_zsample}, we supposed that the assumptions of the Sudakov-Fernique inequality hold, for $g=\Phi_0$ and $f=\Phi_N - m_N^{\Phi}$, and for $g=\mathcal G^j_0$ and $f=\mathcal G^j_N - m_N^{\mathcal G^j}$, for $j \in \{1,\dots,J\}$. This is an assumption on the predictive variance $k_N$. In the following Lemma, we prove this assumption for the predictive variance given in \eqref{eq:pred_eq}.}
{\begin{lemma}\label{lem:predvar_lip} Suppose the predictive variance $k_N$ is given by \eqref{eq:pred_eq}. Then the assumptions of the Sudakov-Fernique inequality hold, for $g=\Phi_0$ and $f=\Phi_N - m_N^{\Phi}$, and for $g=\mathcal G^j_0$ and $f=\mathcal G^j_N - m_N^{\mathcal G^j}$, for $j \in \{1,\dots,J\}$.
\end{lemma}
\begin{proof}
We give a proof for $g=\Phi_0$ and $f=\Phi_N - m_N^{\Phi}$, the proof for $g=\mathcal G^j_0$ and $f=\mathcal G^j_N - m_N^{\mathcal G^j}$, for $j \in \{1,\dots,J\}$, is identical.
For any $u, u' \in X$, we have $\mathbb{E}_{\nu^{\Phi}_0} (\Phi_0(u)) = 0 = \mathbb{E}_{\nu^{\Phi}_N} (\Phi_N(u) - m_N^{\Phi}(u))$, and
\begin{align*}
\mathbb{E}_{\nu^{\Phi}_N} \left( ((\Phi_N(u) - m^{\Phi} _N(u)) - (\Phi_N(u') - m^{\Phi} _N(u')))^2\right) &= k_N(u,u) - k_N(u,u') - k_N(u',u) + k_N(u',u'), \\
\mathbb{E}_{\nu^{\Phi}_0} \left( (\Phi_0(u) - \Phi_0(u'))^2\right) &= k(u,u) - k(u,u') - k(u',u) + k(u',u').
\end{align*}
By \eqref{eq:pred_eq}, we have
\[
k_N(u,u') = k(u,u') - k(u,U)^T \; K(U,U)^{-1} \; k(u',U),
\]
and so
\begin{align*}
&\mathbb{E}_{\nu^{\Phi}_0} \left( (\Phi_0(u) - \Phi_0(u'))^2\right) - \mathbb{E}_{\nu^{\Phi}_N} \left( ((\Phi_N(u) - m^{\Phi} _N(u)) - (\Phi_N(u') - m^{\Phi} _N(u')))^2\right) \\
&= \big( k(u,U)^T - k(u',U)^T \big)\; K(U,U)^{-1} \; \big(k(u,U) - k(u',U)\big) \\
&\geq 0,
\end{align*}
since the matrix $K(U,U)^{-1}$ is positive definite.
\end{proof}}
We are now ready to prove bounds on the approximation error in the posterior distributions.
\subsubsection{Error in the marginal approximations $\mu^{y,N,\mathcal G}_\mathrm{marginal}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$}\label{ssec:gp_app_marg}
We start by analysing the error in the marginal approximations $\mu^{y,N,\mathcal G}_\mathrm{marginal}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$.
\begin{theorem}\label{thm:hell_marginal} Under the assumptions of {Lemma} \ref{thm:bound_zsample}, there exist constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\begin{align*}
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal}) &\leq C_1 \left\|\Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G - \mathcal G_N \|^{1 + \delta} \Big)\Big)^{1/(1+\delta)}\right\|_{L^2_{\mu_0}(X)}, \quad \text{for any } \delta > 0, \\
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{marginal}) &\leq C_2 \left\|\mathbb{E}_{\nu_N^\Phi} \left( |\Phi - \Phi_N|\right)\right\|_{L^2_{\mu_0}(X)}.
\end{align*}
\end{theorem}
\begin{proof}
We start with $\mu^{y,N,\mathcal G}_\mathrm{marginal}$. By the definition of the Hellinger distance, we have
\begin{align*}
&2 \; d_{\mbox {\tiny{\rm Hell}}}^2(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal}) = \int_X \left( \sqrt{\frac{d\mu^y}{d\mu_0}} - \sqrt{\frac{d\mu^{y,N,\mathcal G}_\mathrm{marginal}}{d\mu_0}} \right)^2 \mu_0(\mathrm{d}u) \\
&\leq \frac{2}{Z} \int_X \left(\sqrt{\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big)} - \sqrt{\mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)}\right)^2 \mu_0(\mathrm{d}u) \\
& \qquad + 2 \mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big) \left(Z^{-1/2} - \mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big)^{-1/2} \right)^2 \\
&= I + II.
\end{align*}
For the first term, we use the (in)equalities $a-b=(a^2-b^2)/(a+b)$ and $(\sqrt{a}+\sqrt{b})^2 \geq a + b$, for $a,b>0$, to derive
\begin{align*}
\frac{Z}{2} I &= \int_X \left(\sqrt{\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big)} - \sqrt{\mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)}\right)^2 \mu_0(\mathrm{d}u) \\
&\leq \int_X \frac{ \left( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \right)^2 }{\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)} \mu_0(\mathrm{d}u) \\
&\leq \sup_{u \in X} \left(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)\right)^{-1} \\
&\qquad \qquad \int_X \left( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \right)^2 \mu_0(\mathrm{d}u).
\end{align*}
For the first factor, using the convexity of $1/x$ on $(0,\infty)$, together with Jensen's inequality, we have for all $u \in X$ the bound
\begin{align*}
&\left(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)\right)^{-1} \\
& \qquad \leq \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big)^{-1} + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big)^{-1} \\
& \qquad \leq \exp\big(\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \\
& \qquad \leq \exp\big(\frac{1}{2 \sigma_\eta^2} \sup_{u \in X} \left\| y - \mathcal G (u) \right\|^2\big) + \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(\frac{1}{2 \sigma_\eta^2} \sup_{u \in X} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big).
\end{align*}
As in the proof of {Lemma} \ref{thm:bound_zsample}, it then follows by Fernique's Theorem that the right hand side can be bounded by a constant independent of $U$ and $N$.
For the second factor in the bound on $\frac{Z}{2} I$, the linearity of expectation, the local Lipschitz continuity of the exponential function, the equality $a^2-b^2 = (a-b)(a+b)$, the reverse triangle inequality and H\"older's inequality with conjugate exponents $p=(1+\delta)/\delta$ and $q=1+\delta$ give
\begin{align*}
&\int_X \left( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \mathbb{E}_{\nu_N^\mathcal G} \Big(\exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \right)^2 \mu_0(\mathrm{d}u)\\
&= \int_X \left( \mathbb{E}_{\nu_N^\mathcal G} \Big( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\leq 2 \int_X \left( \mathbb{E}_{\nu_N^\mathcal G} \Big( |\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2 - \frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2|\Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\leq \frac{2}{4 \sigma_\eta^4} \int_X \left(\mathbb{E}_{\nu_N^\mathcal G} \Big( \left(\left\| y - \mathcal G (u) \right\| + \left\| y - \mathcal G_N (u) \right\|\right) \|\mathcal G (u) - \mathcal G_N (u)\| \Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\leq \frac{2}{4 \sigma_\eta^4} \int_X \Big(\mathbb{E}_{\nu_N^\mathcal G} \Big( \left(\left\| y - \mathcal G (u) \right\| + \left\| y - \mathcal G_N (u) \right\|\right)^{(1+\delta)/\delta} \Big) \Big)^{2 \delta / (1+ \delta)} \Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G (u) - \mathcal G_N (u)\|^{1 + \delta} \Big) \Big)^{2/(1+\delta)} \mu_0(\mathrm{d}u) \\
&\leq \frac{2}{4 \sigma_\eta^4} \sup_{u \in X} \Big(\mathbb{E}_{\nu_N^\mathcal G} \Big( \left(\left\| y - \mathcal G (u) \right\| + \left\| y - \mathcal G_N (u) \right\|\right)^{(1+\delta)/\delta} \Big) \Big)^{2 \delta / (1+ \delta)} \int_X \Big( \mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G (u) - \mathcal G_N (u)\|^{1 + \delta} \Big) \Big)^{2/(1+\delta)} \mu_0(\mathrm{d}u),
\end{align*}
for any $\delta > 0$. The supremum in the above expression can be bounded by a constant independent of $U$ and $N$ by Fernique's Theorem as in the proof of
{Lemma} \ref{thm:bound_zsample}, since $\sup_{u \in X}\|\mathcal G(u)\| \leq C_\mathcal G < \infty$. It follows that there exists a constant $C$ independent of $U$ and $N$ such that
\[
\frac{Z}{2} I \leq C \left\|\Big( \mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G_N - \mathcal G\|^{1 + \delta} \Big) \Big)^{1/(1+\delta)}\right\|^2_{L^2_{\mu_0}(X)}.
\]
For the second term in the bound on the Hellinger distance, we have
\begin{equation*}
\frac{1}{2 \mathbb{E}_{\nu_N^\mathcal G}\big(Z_{N,\mathcal G}^\mathrm{sample}\big)} II = \left(Z^{-1/2} - \big(\mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big)\big)^{-1/2} \right)^2 \leq \max(Z^{-3},(\mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big))^{-3}) |Z - \mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big)|^2.
\end{equation*}
Using the linearity of expectation, Tonelli's Theorem and Jensen's inequality, we have
\begin{align*}
&\left|Z - \mathbb{E}_{\nu_N^\mathcal G} \big(Z_{N,\mathcal G}^\mathrm{sample}\big)\right|^2 \\
&= \left| \int_X \mathbb{E}_{\nu_N^\mathcal G} \Big( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \mu_0(\mathrm{d}u) \right|^2 \\
&\leq \int_X \left( \mathbb{E}_{\nu_N^\mathcal G} \Big( \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2\big) - \exp\big(-\frac{1}{2 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2\big)\Big) \right)^2 \mu_0(\mathrm{d}u).
\end{align*}
which can now be bounded as before. The first claim of the theorem now follows by {Lemma} \ref{thm:bound_zsample}.
The proof for $\mu^{y,N,\Phi}_\mathrm{marginal}$ is similar. We use an identical corresponding splitting of the Hellinger distance $d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{marginal}) \leq I + II$. For the first term, we have
\begin{align*}
\frac{Z}{2} I &= \int_X \left(\sqrt{\exp\big(-\Phi(u)\big)} - \sqrt{\mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N(u) \big)\Big)}\right)^2 \mu_0(\mathrm{d}u) \\
&\leq \sup_{u \in X} \left(\exp\big(-\Phi(u)\big) + \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N(u)\big)\Big)\right)^{-1} \\
&\qquad \qquad \int_X \left( \exp\big(-\Phi(u)\big) - \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N(u)\big)\Big) \right)^2 \mu_0(\mathrm{d}u).
\end{align*}
The first factor can again be bounded using Jensen's inequality,
\begin{align*}
\sup_{u \in X} \left(\exp\big(-\Phi(u)\big) + \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N(u)\big)\Big)\right)^{-1} \leq \exp\big(\sup_{u \in X} \Phi(u) \big) + \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big( \sup_{u \in X} \Phi_N(u)\big)\Big),
\end{align*}
which as in the proof of {Lemma} \ref{thm:bound_zsample}, can be bounded by a constant independent of $U$ and $N$ by Fernique's Theorem.
For the second factor in the bound on $\frac{Z}{2} I$, the linearity of expectation and the local Lipschitz continuity of the exponential function give
\begin{align*}
&\int_X \left( \exp\big(-\Phi(u)\big) - \mathbb{E}_{\nu_N^\Phi} \Big(\exp\big(-\Phi_N(u)\big)\Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\qquad = \int_X \left( \mathbb{E}_{\nu_N^\Phi} \Big( \exp\big(-\Phi(u)\big) - \exp\big(-\Phi_N(u)\big)\Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\qquad \leq 2 \int_X \left( \mathbb{E}_{\nu_N^\Phi} \Big( |\Phi(u) - \Phi_N(u)| \Big) \right)^2 \mu_0(\mathrm{d}u)\\
&\qquad = 2 \left\|\mathbb{E}_{\nu_N^\Phi} \left( |\Phi(u) - \Phi_N(u)| \right)\right\|^2_{L^2_{\mu_0}(X)}.
\end{align*}
For the second term in the bound on the Hellinger distance, the linearity of expectation, Tonelli's Theorem and Jensen's inequality give
\begin{align*}
\left|Z - \mathbb{E}_{\nu_N^\Phi} \big(Z_{N,\Phi}^\mathrm{sample}\big)\right|^2 \leq \int_X \left( \mathbb{E}_{\nu_N^\Phi} \Big( \exp\big(-\Phi(u)\big) - \exp\big(-\Phi_N(u)\big)\Big) \right)^2 \mu_0(\mathrm{d}u),
\end{align*}
which can now be bounded as before. The second claim of the theorem then follows by {Lemma} \ref{thm:bound_zsample}.
\end{proof}
Similar to Theorem \ref{thm:hell_mean}, Theorem \ref{thm:hell_marginal} provides error bounds for general Gaussian process emulators of $\mathcal G$ and $\Phi$. An example of a Gaussian process emulator that satisfies the assumptions of Theorem \ref{thm:hell_marginal} is the emulator defined by \eqref{eq:pred_eq}, however, other choices are possible. {As in Corollary \ref{cor:rate_mean}, we can now combine Assumption \ref{ass:reg}, Theorem \ref{thm:hell_marginal} and Proposition \ref{prop:mean_conv} with $\beta = 0$ to derive error bounds in terms of the fill distance.}
\begin{corollary}\label{cor:rate_marginal} Suppose $\mathcal G_N$ and $\Phi_N$ are defined as in \eqref{eq:pred_eq}, with Mat\`ern kernel $k=k_{\nu,\lambda,\sigma_k^2}$. Suppose Assumption \ref{ass:reg} { holds with $s=\nu + K/2$,} and the assumptions of Proposition \ref{prop:mean_conv} and Theorem \ref{thm:hell_marginal} are satisfied. Then there exist constants $C_1, C_2, C_3$ and $C_4$, independent of $U$ and $N$, such that
\begin{align*}
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal}) \leq C_1 h_U^{\nu + K/2} + C_2 h_U^\nu, \qquad
\text{and} \quad d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N, \Phi}_\mathrm{marginal}) \leq C_3 h_U^{\nu + K/2} + C_4 h_U^\nu.
\end{align*}
\end{corollary}
\begin{proof}We give the proof for $\mu^{y,N,\mathcal G}_\mathrm{marginal}$, the proof for $\mu^{y,N,\Phi}_\mathrm{marginal}$ is similar. Using Theorem \ref{thm:hell_marginal}, Jensen's inequality and the triangle inequality, we have
\begin{align*}
{d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal})^{2}} &\leq C \left\|\Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G - \mathcal G_N \|^{2} \Big) \Big)^{1/2}\right\|^2_{L^2_{\mu_0}(X)} \\
&= C \int_X \mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G(u) - \mathcal G_N(u) \|^{2} \Big) \mu_0(\mathrm{d}u) \\
&\leq 2C \int_X \|\mathcal G(u) - m_N^\mathcal G(u) \|^{2} \mu_0(\mathrm{d}u) + 2C \int_X \mathbb{E}_{\nu_N^\mathcal G} \Big(\|m_N^\mathcal G(u) - \mathcal G_N(u) \|^{2} \Big) \mu_0(\mathrm{d}u).
\end{align*}
The first term can be bounded by using Assumption \ref{ass:reg}, Proposition \ref{prop:native_matern} and Proposition \ref{prop:mean_conv},
\begin{align*}
\int_X \|\mathcal G(u) - m_N^\mathcal G(u) \|^{2} \mu_0(\mathrm{d}u) = \int_X \sum_{j=1}^J (\mathcal G^j(u) - m_N^{\mathcal G^j}(u))^2 \mu_0(\mathrm{d}u)
\leq C h_U^{2\nu + K} \sum_{j=1}^J \| \mathcal G^j\|^2_{H^{\nu + K/2}(X)},
\end{align*}
for a constant $C$ independent of $U$ and $N$.
The second term can be bounded by using Assumption \ref{ass:reg}, Proposition \ref{prop:native_matern}, Proposition \ref{prop:mean_conv}, Proposition \ref{prop:predvar_sup}, the linearity of expectation and the Sobolev Embedding Theorem
\begin{align*}
\int_X \mathbb{E}_{\nu_N^\mathcal G} \Big(\|m_N^\mathcal G(u) - \mathcal G_N(u) \|^{2} \Big) \mu_0(\mathrm{d}u) &= \int_X \mathbb{E}_{\nu_N^\mathcal G} \Big(\sum_{j=1}^J (m_N^{\mathcal G^j}(u) - \mathcal G_N^j(u))^2 \Big) \mu_0(\mathrm{d}u) \\
&= J \int_X k_N(u,u) \mu_0(\mathrm{d}u) \\
&\leq J \sup_{u \in X} \sup_{\|g\|_{H_k}=1} | g(u) - m^g_N(u)|^2 \\
&\leq C h_U^{2\nu},
\end{align*}
for a constant $C$ independent of $U$ and $N$. The claim of the corollary then follows.
\end{proof}
{If Assumption \ref{ass:reg} holds only for some $s < \nu + K/2$, an analogue of Corollary \ref{cor:rate_marginal} can be proved using Proposition \ref{prop:mean_conv_int} with $\beta = 0$.
Note that the term $h_U^\nu$ appearing in the bounds in Corollary \ref{cor:rate_marginal} corresponds to the error bound on $\|k_N^{1/2}\|_{L^2(X)}$, which does not appear in the error bounds for $\mu^{y,N,\mathcal G}_\mathrm{mean}$ and $\mu^{y,N,\Phi}_\mathrm{marginal}$ analysed in Corollary \ref{cor:rate_mean}. Due to the supremum over $g$ appearing in the expression for $k_N(u,u)$ in Proposition \ref{prop:predvar_sup}, we can only conclude on the lower rate of convergence $h_U^\nu$ for $\|k_N^{1/2}\|_{L^2(X)}$. This result appears to be sharp, and the lower rate of convergence $\nu$ is observed in some of the numerical experiments in section \ref{sec:num} (cf Figures \ref{fig:marg} and \ref{fig:rand}).}
\subsubsection{Error in the random approximations $\mu^{y,N,\mathcal G}_\mathrm{sample}$ and $\mu^{y,N,\Phi}_\mathrm{sample}$}\label{ssec:gp_app_rand}
We have the following result for the random approximations $\mu^{y,N,\mathcal G}_\mathrm{sample}$ and $\mu^{y,N,\Phi}_\mathrm{sample}$.
\begin{theorem}\label{thm:hell_sample} Under the Assumptions of {Lemma} \ref{thm:bound_zsample}, there exist constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\begin{align*}
\left(\mathbb{E}_{\nu_N^\mathcal G} \left(d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{sample})^2\right) \right)^{1/2} &\leq C_1 \left\|\Big( \mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G - \mathcal G_N \|^{2+ \delta} \Big) \Big)^{1/(2+\delta)}\right\|_{L^2_{\mu_0}(X)}, \\
\left(\mathbb{E}_{\nu_N^\Phi} \left(d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{sample})^2\right) \right)^{1/2} &\leq C_2 \left\|\Big( \mathbb{E}_{\nu_N^\Phi} \Big(|\Phi - \Phi_N |^{2} \Big) \Big)^{1/2}\right\|_{L^2_{\mu_0}(X)}.
\end{align*}
\end{theorem}
\begin{proof}
We start with $\mu^{y,N,\mathcal G}_\mathrm{sample}$. By the definition of the Hellinger distance and the linearity of expectation, we have
\begin{align*}
&\mathbb{E}_{\nu_N^\mathcal G} \left(2 \; d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{sample}) \right) ^2
= \mathbb{E}_{\nu_N^\mathcal G} \left( \int_X \left( \sqrt{\frac{d\mu^y}{d\mu_0}} - \sqrt{\frac{d\mu^{y,N,\mathcal G}_\mathrm{sample}}{d\mu_0}} \right)^2 \mu_0(\mathrm{d}u) \right)\\
&\leq \frac{2}{Z} \mathbb{E}_{\nu_N^\mathcal G} \left( \int_X \left(\exp \big(-\Phi(u)/2 \big) - \exp \big(-\Phi_N(u)/2 \big)\right)^2 \mu_0(\mathrm{d}u) \right) \\
& \qquad + 2 \; \mathbb{E}_{\nu_N^\mathcal G} \left( Z_{N,\mathcal G}^\mathrm{sample} |Z^{-1/2} - (Z_{N, \mathcal G}^\mathrm{sample})^{-1/2} |^2 \right) \\
&=: I + II.
\end{align*}
For the first term, Tonelli's Theorem, the local Lipschitz continuity of the exponential function, the equality $a^2-b^2 = (a-b)(a+b)$, the reverse triangle inequality and H\"older's inequality with conjugate exponents $p=(1+\delta)/ \delta$ and $q = 1+ \delta$ give
\begin{align*}
&\frac{Z}{2} I = \int_X \mathbb{E}_{\nu_N^\mathcal G} \left( \left( \exp \big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2 \big) - \exp \big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2 \big)\right)^2 \right) \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{\sigma_\eta^2} \int_X \mathbb{E}_{\nu_N^\mathcal G} \left( \left( \| y - \mathcal G(u)\|^2 - \| y - \mathcal G_N(u) \|^2 \right)^2 \right) \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{\sigma_\eta^2} \int_X \mathbb{E}_{\nu_N^\mathcal G} \Big( \left(\left\| y - \mathcal G_N (u) \right\| + \left\| y - \mathcal G (u) \right\|\right)^2 \|\mathcal G_N (u) - \mathcal G (u)\|^2 \Big) \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{\sigma_\eta^2} \int_X \Big(\mathbb{E}_{\nu_N^\mathcal G} \left( \left( \| y - \mathcal G(u)\| + \| y - \mathcal G_N(u) \|\right)^{2(1+\delta) / \delta})\right) \Big)^{\delta/(\delta+1)} \Big(\mathbb{E}_{\nu_N^\mathcal G} \left( \|\mathcal G(u) - \mathcal G_N(u)\|^{2(1+\delta)} \right) \Big)^{1/(1+\delta)} \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{\sigma_\eta^2} \sup_{u \in X} \Big(\mathbb{E}_{\nu_N^\mathcal G} \left( \left( \| y - \mathcal G(u)\| + \| y - \mathcal G_N(u) \|\right)^{2(1+\delta) / \delta})\right) \Big)^{\delta/(\delta+1)} \int_X \Big(\mathbb{E}_{\nu_N^\mathcal G} \left( \|\mathcal G(u) - \mathcal G_N(u)\|^{2(1+\delta)} \right) \Big)^{1/(1+\delta)} \mu_0(\mathrm{d}u).
\end{align*}
for any $\delta > 0$. The supremum in the above bound can be bounded independently of $U$ and $N$ by Fernique's Theorem as in the proof of {Lemma} \ref{thm:bound_zsample}. It follows that there exists a constant $C$ independent of $U$ and $N$ such that
\[
\frac{Z}{2} I \leq C \; \left\|\Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G_N - \mathcal G\|^{2(1+ \delta)} \Big) \Big)^{1/2(1+\delta)}\right\|^2_{L^2_{\mu_0}(X)}.
\]
For the second term in the bound on the Hellinger distance, we have
\begin{align*}
\frac{1}{2} II &= \mathbb{E}_{\nu_N^\mathcal G} \left( Z_{N,\mathcal G}^\mathrm{sample} |Z^{-1/2} - (Z_{N, \mathcal G}^\mathrm{sample})^{-1/2} |^2 \right) \\
&\leq \mathbb{E}_{\nu_N^\mathcal G} \left( Z_{N,\mathcal G}^\mathrm{sample} \max(Z^{-3},(Z_{N,\mathcal G}^\mathrm{sample})^{-3}) |Z - Z_{N,\mathcal G}^\mathrm{sample}|^2 \right).
\end{align*}
By Jensen's inequality and the same argument as above, we have
\begin{align*}
|Z - Z_{N,\mathcal G}^\mathrm{sample}|^2 &= \left| \int_X \left(\exp \big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G (u) \right\|^2 \big) - \exp \big(-\frac{1}{4 \sigma_\eta^2} \left\| y - \mathcal G_N (u) \right\|^2 \big)\right) \mu_0(\mathrm{d}u) \right|^2 \\
&\leq \frac{1}{\sigma_\eta^4} \int_X \left(\left\| y - \mathcal G_N (u) \right\| + \left\| y - \mathcal G (u) \right\|\right)^2 \|\mathcal G_N (u) - \mathcal G (u)\|^2 \mu_0(\mathrm{d}u).
\end{align*}
Together with Tonelli's Theorem and H\"older's inequality with conjugate exponents $p=(1+\delta)/ \delta$ and $q = 1+ \delta$, we then have
\begin{align*}
&\frac{1}{2} II \\
&\leq \frac{1}{\sigma_\eta^4} \mathbb{E}_{\nu_N^\mathcal G} \left( Z_{N,\mathcal G}^\mathrm{sample} \max(Z^{-3},(Z_{N,\mathcal G}^\mathrm{sample})^{-3}) \int_X \left(\left\| y - \mathcal G_N (u) \right\| + \left\| y - \mathcal G (u) \right\|\right)^2 \|\mathcal G_N (u) - \mathcal G (u)\|^2 \mu_0(\mathrm{d}u) \right) \\
&= \frac{1}{\sigma_\eta^4} \int_X \mathbb{E}_{\nu_N^\mathcal G} \left( Z_{N,\mathcal G}^\mathrm{sample} \max(Z^{-3},(Z_{N,\mathcal G}^\mathrm{sample})^{-3}) ( \left\| y - \mathcal G_N (u) \right\| + \left\| y - \mathcal G (u) \right\|)^2 \|\mathcal G_N (u) - \mathcal G (u)\|^2 \right) \mu_0(\mathrm{d}u) \\
&\leq \frac{1}{\sigma_\eta^4} \sup_{u \in X} \Big( \mathbb{E}_{\nu_N^\mathcal G} \left( (Z_{N,\mathcal G}^\mathrm{sample})^{(1+\delta)/ \delta} \max(Z^{-3},(Z_{N,\mathcal G}^\mathrm{sample})^{-3})^{(1+\delta)/ \delta} \left( \| y - \mathcal G(u)\| + \| y - \mathcal G_N(u) \|\right)^{2(1+\delta) / \delta})\right) \Big)^{\delta/(\delta+1)} \\
& \qquad \int_X \Big( \mathbb{E}_{\nu_N^\mathcal G} \left( \|\mathcal G_N(u) - \mathcal G(u)\|^{2(1+\delta)} \right) \Big)^{1/(1+\delta)} \mu_0(\mathrm{d}u),
\end{align*}
for any $\delta > 0$. The supremum in the bound above can be bounded independently of $U$ and $N$ by {Lemma} \ref{thm:bound_zsample} and Fernique's Theorem. The first claim of the Theorem then follows.
The proof for $\mu^{y,N,\Phi}_\mathrm{sample}$ is similar. Using an identical corresponding splitting of the Hellinger distance $\mathbb{E}_{\nu_N^\Phi} \left(2 \; d_{\mbox {\tiny{\rm Hell}}}^2(\mu^y, \mu^{y,N,\Phi}_\mathrm{sample}) \right) \leq I + II$, we bound the first term by
Tonelli's Theorem and the local Lipschitz continuity of the exponential function:
\begin{align*}
\frac{Z}{2} I = \int_X \mathbb{E}_{\nu_N^\mathcal G} \left( \left( \exp \big(-\Phi(u)/2 \big) - \exp \big(-\Phi_N(u)/2 \big)\right)^2 \right) \mu_0(\mathrm{d}u) \leq \left\|\Big(\mathbb{E}_{\nu_N^\mathcal G} \Big((\Phi_N - \Phi)^{2} \Big) \Big)^{1/2}\right\|^2_{L^2_{\mu_0}(X)}.
\end{align*}
For the second term, we have as before
\begin{align*}
\frac{1}{2} II \leq \mathbb{E}_{\nu_N^\Phi} \left( Z_{N,\Phi}^\mathrm{sample} \max(Z^{-3},(Z_{N,\Phi}^\mathrm{sample})^{-3}) |Z - Z_{N,\Phi}^\mathrm{sample}|^2 \right).
\end{align*}
and
\begin{align*}
|Z - Z_{N,\Phi}^\mathrm{sample}|^2 &= \left| \int_X \left(\exp \big(-\Phi(u) \big) - \exp \big(-\Phi_N(u) \big)\right) \mu_0(\mathrm{d}u) \right|^2 \leq 4\int_X (\Phi(u) - \Phi_N(u))^2 \mu_0(\mathrm{d}u).
\end{align*}
Together with Tonelli's Theorem and H\"older's inequality with conjugate exponents $p=(1+\delta)/ \delta$ and $q = 1+ \delta$, we then have
\begin{align*}
\frac{1}{2} II &\leq 4 \mathbb{E}_{\nu_N^\Phi} \left( Z_{N,\Phi}^\mathrm{sample} \max(Z^{-3},(Z_{N,\Phi}^\mathrm{sample})^{-3}) \int_X (\Phi(u) - \Phi_N(u))^2 \mu_0(\mathrm{d}u) \right) \\
&= 4 \int_X \mathbb{E}_{\nu_N^\Phi} \left( Z_{N,\Phi}^\mathrm{sample} \max(Z^{-3},(Z_{N,\Phi}^\mathrm{sample})^{-3}) (\Phi(u) - \Phi_N(u))^2 \right) \mu_0(\mathrm{d}u) \\
&\leq 4 \Big(\mathbb{E}_{\nu_N^\mathcal G} \left( (Z_{N,\Phi}^\mathrm{sample})^{(1+\delta)/ \delta} \max(Z^{-3},(Z_{N,\Phi}^\mathrm{sample})^{-3})^{(1+\delta)/ \delta} \right) \Big)^{\delta/(\delta+1)} \\
& \qquad \int_X \Big(\mathbb{E}_{\nu_N^\Phi} \left( \|\Phi(u) - \Phi_N(u)\|^{2(1+\delta)} \right) \Big)^{1/(1+\delta)} \mu_0(\mathrm{d}u),
\end{align*}
for any $\delta > 0$. The first expected value in the bound above can be bounded independently of $U$ and $N$ by {Lemma} \ref{thm:bound_zsample}. The second claim of the Theorem then follows.
\end{proof}
Similar to Theorem \ref{thm:hell_mean} and Theorem \ref{thm:hell_marginal}, Theorem \ref{thm:hell_sample} provides error bounds for general Gaussian process emulators of $\mathcal G$ and $\Phi$. As a particular example, we can take the emulators defined by \eqref{eq:pred_eq}. {We can now combine Assumption \ref{ass:reg}, Theorem \ref{thm:hell_sample} and Proposition \ref{prop:mean_conv} with $\beta = 0$ to derive error bounds in terms of the fill distance.}
\begin{corollary}\label{cor:rate_sample} Suppose $\mathcal G_N$ and $\Phi_N$ are defined as in \eqref{eq:pred_eq}, with Mat\`ern kernel $k=k_{\nu,\lambda,\sigma_k^2}$. Suppose Assumption \ref{ass:reg} { holds with $s=\nu + K/2$,} and the assumptions of Proposition \ref{prop:mean_conv} and Theorem \ref{thm:hell_sample} are satisfied. Then there exist constants $C_1, C_2, C_3$ and $C_4$, independent of $U$ and $N$, such that
\begin{align*}
d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal}) \leq C_1 h_U^{\nu + K/2} + C_2 h_U^\nu, \qquad
\text{and} \quad d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N, \Phi}_\mathrm{marginal}) \leq C_3 h_U^{\nu + K/2} + C_4 h_U^\nu.
\end{align*}
\end{corollary}
\begin{proof} The proof is similar to that of Corollary \ref{cor:rate_marginal}, exploiting that for a Gaussian random variable $X$, we have $\mathbb{E}((X- \mathbb{E}(X))^4) = 3 (\mathbb{E}((X-\mathbb{E}(X))^2))^2$.
\end{proof}
{If Assumption \ref{ass:reg} holds only for some $s < \nu + K/2$, an analogue of Corollary \ref{cor:rate_sample} can be proved using Proposition \ref{prop:mean_conv_int} with $\beta = 0$.}
We furthermore have the following result on a generalised total variation distance \cite{rh15}, defined by
\[
d_\mathrm{gTV}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{sample}) = \sup_{\|f\|_{C^0(X)}\leq 1} \Big( \mathbb{E}_{\nu^\mathcal G_N} \big( |\mathbb{E}_{\mu^y}(f) - \mathbb{E}_{\mu^{y,N,\mathcal G}_\mathrm{sample}} (f)|^2 \big) \Big)^{1/2},
\]
for $\mu^{y,N,\mathcal G}_\mathrm{sample}$, and defined analogously for $\mu^{y,N,\Phi}_\mathrm{sample}$.
\begin{theorem}\label{thm:gtv_sample} Under the Assumptions of {Lemma} \ref{thm:bound_zsample}, there exist constants $C_1$ and $C_2$, independent of $U$ and $N$, such that
\begin{align*}
d_\mathrm{gTV}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{sample}) &\leq C_1 \; \left\| \Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\mathcal G - \mathcal G_N\|^{2+ \delta} \Big) \Big)^{1/(2+\delta)}\right\|_{L^2_{\mu_0}(X)}, \\
d_\mathrm{gTV}(\mu^y, \mu^{y,N,\Phi}_\mathrm{sample}) &\leq C_2 \; \left\|\Big(\mathbb{E}_{\nu_N^\mathcal G} \Big(\|\Phi - \Phi_N \|^{2} \Big)\Big)^{1/2}\right\|_{L^2_{\mu_0}(X)}.
\end{align*}
\end{theorem}
\begin{proof} We give the proof for $\mu^{y,N,\Phi}_\mathrm{sample}$; the proof for $\mu^{y,N,\mathcal G}_\mathrm{sample}$ is identical. By definition, we have
\begin{align*}
&d_\mathrm{gTV}(\mu^y, \mu^{y,N,\Phi}_\mathrm{sample}) = \sup_{\|f\|_{C^0(X)}\leq 1} \Big( \mathbb{E}_{\nu^\Phi_N} \big( |\mathbb{E}_{\mu^y}(f) - \mathbb{E}_{\mu^{y,N,\Phi}_\mathrm{sample}} (f)|^2 \big) \Big)^{1/2} \\
&= \sup_{\|f\|_{C^0(X)}\leq 1} \left( \mathbb{E}_{\nu^\Phi_N} \left( \left|\int_X f(u) \left(\exp(-\Phi(u)) Z^{-1} - \exp (-\Phi_N(u)) (Z_{N,\Phi}^\mathrm{sample})^{-1} \right) \mu_0(\mathrm{d}u)\right|^2 \right) \right)^{1/2} \\
&\leq \left( \mathbb{E}_{\nu^\Phi_N} \left( \left|\int_X \left(\exp(-\Phi(u)) Z^{-1} - \exp (-\Phi_N(u)) (Z_{N,\Phi}^\mathrm{sample})^{-1} \right) \mu_0(\mathrm{d}u)\right|^2 \right) \right)^{1/2} \\
&\leq \frac{2}{Z} \left( \mathbb{E}_{\nu^\Phi_N} \left( \int_X |\exp(-\Phi(u)) - \exp (-\Phi_N(u))|^{2} \mu_0(\mathrm{d}u) \right) \right)^{1/2} + \\
&\qquad \left( \mathbb{E}_{\nu^\Phi_N} \left( (Z_{N,\Phi}^\mathrm{sample})^{2} | Z^{-1} - (Z_{N,\Phi}^\mathrm{sample})^{-1}|^2 \right) \right)^{1/2} \\
&=: I + II.
\end{align*}
The terms $I$ and $II$ can be bounded by the same arguments as the terms $I$ and $II$ in the proof of Theorem \ref{thm:hell_sample}, by noting that $| Z^{-1} - (Z_{N,\Phi}^\mathrm{sample})^{-1}|^2 \leq \max(Z^{-4},(Z_{N,\Phi}^\mathrm{sample})^{-4}) |Z - Z_{N,\Phi}^\mathrm{sample}|^2$.
\end{proof}
\section{Numerical Examples}\label{sec:num}
{We consider the model inverse problem of determining the diffusion coefficient of an elliptic partial differential equation (PDE) in divergence form from observation of a finite set of noisy continuous functionals of the solution.} This type of equation arises, for example, in the modelling of groundwater flow in a porous medium. We consider the one-dimensional model problem
{\begin{equation}\label{eq:mod}
-\frac{\mathrm d}{\mathrm{d} x} \Big(\kappa(x;u) \frac{\mathrm d p}{\mathrm d x} (x;u)\Big) = 1 \quad \text{in } (0,1), \qquad p(1;u) = p(0;u) = 0,
\end{equation}}
where the coefficient $\kappa$ depends on parameters $u = \{u_j\}_{j=1}^K \in [-1,1]^K$ through the linear expansion
{\[
\kappa(x;u) = \frac{1}{100} + \sum_{j=1}^K \frac{u_j}{200 (K+1)} \sin(2 \pi j x).
\]}
{In this setting the forward map $G : [-1,1]^K \rightarrow H^1_0(D)$, defined by $G(u) = p$, is an analytic function \cite{cohen2011analytic}. Since the observation operator $\mathcal O$ is linear and bounded, Assumption \ref{ass:reg} is satisfied for any $s > K/2$.
Unless stated otherwise, we will throughout this section approximate the solution $p$ by standard, piecewise linear, continuous finite elements on a uniform grid with mesh size $h=1/32$. The corresponding approximate forward map, denoted by $G_h$, is also an analytic function of $u$ \cite{cohen2011analytic}, and Assumption \ref{ass:reg} is satisfied for any $s > K/2$ also for $G_h$. By slight abuse of notation, we will denote the posterior measure corresponding to the forward map $G_h$ by $\mu^{y}$, and use this as our reference measure. The error induced by the finite element approximation will be ignored.}
As prior measure $\mu_0$ on $[-1,1]^K$, we use the uniform product measure $\mu_0(\mathrm{d}u) = \bigotimes_{j =1}^K \frac{\mathrm d u_j}{2}$.
The observations $y$ are taken as noisy point evaluations of the solution, $y_j = p(x_j; u^*) + \eta_j$ with $\eta \sim \mathcal N(0,I)$ and $\{x_j\}_{j=1}^J$ evenly spaced points in $(0,1)$. To generate $y$, the truth $u^*$ was chosen as a random sample from the prior, and the solution $p$ was approximated by finite elements on a uniform grid with mesh size $h^*=1/1024$.
The emulators $\mathcal G_N$ and $\Phi_N$ are computed as described in section \ref{ssec:gp_sk}, with mean and covariance kernel given by \eqref{eq:pred_eq}. In the Gaussian process prior \eqref{eq:gp}, we choose $m \equiv 0$ and $k = k_{\nu,1,1}$, a Mat\`ern kernel with variance $\sigma_k^2=1$, correlation length $\lambda=1$ and smoothness parameter $\nu$.
For a given approximation $\mu^{y,N}$ to $\mu^{y}$, we will compute twice the Hellinger distance squared,
{\begin{equation*}
2 d_{\mbox {\tiny{\rm Hell}}}(\mu^{y}, \mu^{y,N})^2 = \int_{[-1,1]^K} \left(\sqrt{\frac{d \mu^{y}}{d \mu_0}}(u) - \sqrt{\frac{d \mu^{y,N}}{d \mu_0}}(u) \right)^2 d \mu_0(u).
\end{equation*}}
The integral over $[-1,1]^K$ is approximated by a randomly shifted lattice rule with product weight parameters $\gamma_j=1/j^2$ \cite{niederreiter}. The generating vector for the rule used is available from Frances Kuo's website (\texttt{http://web.maths.unsw.edu.au/$\sim$fkuo/}) as ``lattice-39102-1024-1048576.3600''. For the marginal and random approximations, the expected value over the Gaussian process is approximated by Monte Carlo sampling, using the MATLAB command \texttt{mvnrnd}.
For the design points $U$, we choose a uniform tensor grid. In $[-1,1]^K$, the uniform tensor grid consisting of $N = N_*^K$ points, for some $N_* \in \mathbb N$, has fill distance $h_U = \sqrt{K} (N_* - 1)^{-1}$. In Table \ref{tbl:conv}, we give the convergence rates in $N$ for $\sup_{u \in X} | f(u) - m_N^f(u)|^2$ and $\| f - m_N^f\|_{L^2(X)}^2$ predicted by Proposition \ref{prop:mean_conv}.
\begin{table} [p]
\begin{center}
\renewcommand{1.25}{1.25}
\begin{tabular}{ |c|| c c c c||c|| c c c c|} \hline
\multicolumn{5}{|c||}{$\sup_{u \in X} | f(u) - m_N^f(u)|^2$} &\multicolumn{5}{|c|}{$\| f - m_N^f\|_{L^2(X)}^2$} \\ \hline
\backslashbox{$\nu$}{$K$}& 1& 2& 3& 4 & \backslashbox{$\nu$}{$K$}& 1& 2& 3& 4\\ \hline
1& 2& 1& 0.67& 0.5 & 1& 3& 2& 1.7& 1.5\\
5& & 5& 3.3& & 5& & 6& 4.3& \\ \hline
\end{tabular}
\end{center}
\caption{Convergence rates in $N$ predicted by Proposition \ref{prop:mean_conv} for uniform tensor grids.}
\label{tbl:conv}
\end{table}
\begin{table} [p]
\begin{center}
\renewcommand{1.25}{1.25}
\begin{tabular}{ |c|| c c ||c|| c c ||c|| c c|} \hline
\multicolumn{3}{|c||}{$\mu^{y,N,\mathcal G}_\mathrm{mean}$} &\multicolumn{3}{|c|}{$\mu^{y,N,\mathcal G}_\mathrm{marginal}$} &\multicolumn{3}{|c|}{$\mu^{y,N,\mathcal G}_\mathrm{sample}$}\\ \hline
\backslashbox{$\nu$}{$K$}& 2& 3 & \backslashbox{$\nu$}{$K$}& 2& 3 & \backslashbox{$\nu$}{$K$}& 2& 3\\ \hline
1& 2.6& 2.4& 1& 2.6 & 2.2& 1& 2.3& 1.7 \\
5& 6.2& 4.5& 5 & 6.2& 4.6 & 5& 6.1& 4.4\\ \hline
\end{tabular}
\end{center}
\caption{Observed convergence rates in $N$ of $d_{\mbox {\tiny{\rm Hell}}}(\mu^{y}, \mu^{y,N,\mathcal G})^2$, as shown in Figures \ref{fig:mean}, \ref{fig:marg} and \ref{fig:rand}.}
\label{tbl:obs_conv_G_1}
\end{table}
\begin{table} [p]
\begin{center}
\renewcommand{1.25}{1.25}
\begin{tabular}{ |c|| c c ||c|| c c ||c|| c c|} \hline
\multicolumn{3}{|c||}{$\mu^{y,N,\Phi}_\mathrm{mean}$} &\multicolumn{3}{|c|}{$\mu^{y,N,\Phi}_\mathrm{marginal}$} &\multicolumn{3}{|c|}{$\mu^{y,N,\Phi}_\mathrm{sample}$}\\ \hline
\backslashbox{$\nu$}{$K$}& 2& 3 & \backslashbox{$\nu$}{$K$}& 2& 3 & \backslashbox{$\nu$}{$K$}& 2& 3\\ \hline
1& 2.5& 2& 1& 1.8 & 1.1& 1& 1.1& 0.76 \\
5& 5.4& 3.8& 5 & 4.9& 3.2 & 5& 4.9& 3.3\\ \hline
\end{tabular}
\end{center}
\caption{Observed convergence rates in $N$ of $d_{\mbox {\tiny{\rm Hell}}}(\mu^{y}, \mu^{y,N,\Phi})^2$, as shown in Figures \ref{fig:mean}, \ref{fig:marg} and \ref{fig:rand}.}
\label{tbl:obs_conv_Phi_1}
\end{table}
\begin{table} [p]
\begin{center}
\renewcommand{1.25}{1.25}
\begin{tabular}{ |c|| c c c c||c|| c c c c|} \hline
\multicolumn{5}{|c||}{$\mu^{y,N,\mathcal G}_\mathrm{mean}$} &\multicolumn{5}{|c|}{$\mu^{y,N,\Phi}_\mathrm{mean}$} \\ \hline
\backslashbox{$\nu$}{$K$}&1 & 2& 3 & 4& \backslashbox{$\nu$}{$K$}&1 & 2& 3 & 4\\ \hline
1& 4.1& 2.7& 2.3& 2.3 & 1& 4& 2.7& 2.1& 1.9 \\ \hline
\end{tabular}
\end{center}
\caption{Observed convergence rates in $N$ of $d_{\mbox {\tiny{\rm Hell}}}(\mu^{y}, \mu^{y,N,\Phi})^2$ and $d_{\mbox {\tiny{\rm Hell}}}(\mu^{y}, \mu^{y,N,\mathcal G})^2$, as shown in Figure \ref{fig:mean_15}.}
\label{tbl:obs_conv_15}
\end{table}
\subsection{Mean-based approximations}
In Figure \ref{fig:mean}, we show $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{mean})^2$ (left) and $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{mean})^2$ (right), for a variety of choices of $K$ and $\nu$, for $J=1$. For each choice of the parameters $K$ and $\nu$, we have as a dotted line added the least squares fit of the form $C_1 N^{-C_2}$, for some $C_1, C_2 > 0$, and indicated the rate $N^{-C_2}$ in the legend. { The observed rates $C_2$ are also summarised in Tables \ref{tbl:obs_conv_G_1} and \ref{tbl:obs_conv_Phi_1}.} By Corollary \ref{cor:rate_mean}, we expect to see the faster convergence rates in the right panel of Table \ref{tbl:conv}. {For convenience, we have added these rates in parentheses in the legends in Figure \ref{fig:mean}}. For $\mu^{y,N,\mathcal G}_\mathrm{mean}$, we observe the rates in Table \ref{tbl:conv}, or slightly faster. For $\mu^{y,N,\Phi}_\mathrm{mean}$, we observe rates slightly faster than predicted for $\nu=1$, and slightly slower than predicted for $\nu=5$. Finally, we remark that though the convergence rates of the error are slightly slower for $\mu^{y,N,\Phi}_\mathrm{mean}$, the actual errors are smaller for $\mu^{y,N,\Phi}_\mathrm{mean}$.
In Figure \ref{fig:mean_15}, we again show $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{mean})^2$ (left) and $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{mean})^2$ (right), for a variety of choices of $K$, with $J=15$ and $\nu=1$. { The observed convergence rates are summarised in Table \ref{tbl:obs_conv_15}.} We again observe convergence rates slightly faster than the rates predicted in the right panel of Table \ref{tbl:conv}. As in Figure \ref{fig:mean}, we observe that the errors in $\mu^{y,N,\Phi}_\mathrm{mean}$ are smaller, though the rates of convergence are slightly faster for $\mu^{y,N,\mathcal G}_\mathrm{mean}$.
\subsection{Marginal approximations}
In Figure \ref{fig:marg}, we show $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{marginal})^2$ (left) and $2 d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{marginal})^2$ (right), for a variety of choices of $K$ and $\nu$, for $J=1$. For each choice of the parameters $K$ and $\nu$, we have again added the least squares fit of the form $C_1 N^{-C_2}$, and indicated the rate $C_2$ in the legend. { The observed rates $C_2$ are also summarised in Tables \ref{tbl:obs_conv_G_1} and \ref{tbl:obs_conv_Phi_1}.} By Corollary \ref{cor:rate_marginal}, we expect the error to be the sum of two contributions, one of which decays at the rate indicated in the left panel of Table \ref{tbl:conv}, and another which decays at the rate indicated by the right panel of Table \ref{tbl:conv}. {For convenience, we have added these rates in parentheses in the legends in Figure \ref{fig:marg}}.For $\mu^{y,N,\mathcal G}_\mathrm{marginal}$, we observe the faster convergence rates in the right panel of Table \ref{tbl:conv}, although a closer inspection indicates that the convergence is slowing down as $N$ increases. For $\mu^{y,N,\mathcal G}_\mathrm{marginal}$, the observed rates are somewhere between the two rates predicted by Table \ref{tbl:conv}.
\subsection{Random approximations}
In Figure \ref{fig:rand}, we show $2 \mathbb{E}_{\nu_N^\mathcal G} (d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\mathcal G}_\mathrm{sample})^2)$ (left) and $2 \mathbb{E}_{\nu_N^\Phi} (d_{\mbox {\tiny{\rm Hell}}}(\mu^y, \mu^{y,N,\Phi}_\mathrm{sample})^2)$ (right), for a variety of choices of $K$ and $\nu$, for $J=1$. For each choice of the parameters $K$ and $\nu$, we have again added the least squares fit of the form $C_1 N^{-C_2}$, and indicated the rate $C_2$ in the legend. { The observed rates $C_2$ are also summarised in Tables \ref{tbl:obs_conv_G_1} and \ref{tbl:obs_conv_Phi_1}.} By Corollary \ref{cor:rate_sample}, we expect the error to be the sum of two contributions, as for the marginal approximations considered in the previous section, {and the corresponding rates from Table \ref{tbl:conv} have been added in parentheses in the legends}. For $\mu^{y,N,\mathcal G}_\mathrm{sample}$, we again observe the faster convergence rates in the right panel of Table \ref{tbl:conv}, but the convergence again seems to be slowing down as $N$ increases. For $\mu^{y,N,\mathcal G}_\mathrm{marginal}$, the observed rates are very close to the slower rates in the left panel of Table \ref{tbl:conv}.
\begin{figure}
\caption{$2 d_{\mbox {\tiny{\rm Hell}
\label{fig:mean}
\end{figure}
\begin{figure}
\caption{$2 d_{\mbox {\tiny{\rm Hell}
\label{fig:mean_15}
\end{figure}
\begin{figure}
\caption{$2 d_{\mbox {\tiny{\rm Hell}
\label{fig:marg}
\end{figure}
\begin{figure}
\caption{$2 \mathbb{E}
\label{fig:rand}
\end{figure}
\section{Conclusions and further work}\label{sec:conc}
Gaussian process emulators are frequently used as surrogate models. In this work, we analysed the error that is introduced in the Bayesian posterior distribution when a Gaussian process emulator is used to approximate the forward model, either in terms of the parameter-to-observation map or the negative log-likelihood. We showed that the error in the posterior distribution, measured in the Hellinger distance, can be bounded in terms of the error in the emulator, measured in a norm dependent on the approximation considered.
An issue that requires further consideration is the efficient emulation of vector-valued functions. A simple solution, employed in this work, is to emulate each entry independently. In many applications, {however, it is natural} to assume that the entries are correlated, and a better emulator could be constructed by including this correlation in the emulator. {Furthermore}, there are still a lot of open questions about how to do this optimally \cite{bzkl13}. {Also the question of scaling the Gaussian process methodology
to high dimensional input spaces remains open. The current error bounds from scattered data approximation employed in this paper feature a strong dependence on the input dimension $K$, yielding poor convergence estimates in high dimensions.}
{Another important issue is the selection of the design points used to construct the Gaussian process emulator, also known as experimental design. In applications where the posterior distribution concentrates with respect to the prior, it might be more efficient to choose design points that are somehow adapted to the posterior measure instead of space-filling designs that have a small fill distance. For example, we could use the sequential designs in \cite{sn16}. It would be interesting to prove suitable error bounds in this case, maybe using ideas from \cite{ws93}.}
{ In practical applications of Gaussian process emulators, such as in \cite{hkccr04}, the derivation of the emulator is often more involved than the simple approach presented in section \ref{sec:gp}. The hyper-parameters in the covariance kernel of the emulator are often unknown, and there is often a discrepancy between the mathematical model of the forward map and the true physical process, known as model error. These are both important issues for which the assumptions in our error bounds have not yet been verified. }
\end{document} | math |
MP: Married man and newly married woman reach temple, commit suicide by consuming poison
Sehore/Budni. Late on Tuesday night near the Narmada bridge, a married young man from Gwadia and a newly married woman reached the Shiva temple situated in the middle of the river and committed suicide by consuming poison. According to the police, Kanhaiya Keer ...
Giving money to beggars outside a temple proves costly for Digvijay; Election Commission sends notice
Aaj Ki Chitthi, Sehore: Former Madhya Pradesh Chief Minister Digvijay Singh had allegedly distributed money to beggars outside a temple in Sehore a few days ago. Over this matter, the Bharatiya Janata Party (BJP) took the issue to the Election ...
'Mama' Shivraj sings a song at the request of his nephews and nieces
Former Madhya Pradesh CM Shivraj Singh Chauhan, wanting to express his affection for children, set aside his own security arrangements and reached Sehore, where he went on foot to the hostel to meet the boys and girls.
Now in Sehore, an altercation between an MLA and the police over the hooter fitted on a BJP MLA's car
Sehore. After a BJP MP's car was fined in Gwalior, a new case has come to light in Sehore, where the Khategaon MLA, travelling to Bhopal in a car fitted with a hooter and a BJP name plate ...
Well, Fringe can be a liiiittle creepy, but it is THE BEST THING SINCE SLICED BREAD. It's about abnormal events happening and the FBI is investigating them. Agent Olivia Dunham is investigating these events, but she needs the help of mad scientist (HE IS SOOOO AWESOME) Dr. Walter Bishop. Buuut, he's in a mental institution, and she needs family for her to be able to visit him. So she needs to find Walter's son, Peter Bishop, who is a nomad living in Iraq.
So, @May_Otterview, and @Lila_Lightcraft, I've started watching Voltron with my siblings. We're somewhere in season three, I think.
awesome girl tell me what you think about it! btw I'm dressing up as a character from Voltron for Halloween!
Cool! @Peregrine_Appleclock inspired me with her Scarlet man outfit, but I don't know if I'll have time to pull one together.
I haven't, actually. But I don't mind spoilers, really.
I think my favorite is Shiro...right now.
Yep, Shiro is still my favorite! His new arm though . . . I'm so glad they didn't just kill off his character so Allura could be a Paladin, I'd really hate it if they did that!
Yes, she is my sister . . . squints suspiciously I have no clue why she thought I changed it.
Quit watching The Flash believe it or not. . .
I'm loving Last Man Standing though. Tim Allen is gold (but also old). Not liking Mandy's actor change.
Hmmm. I've seen a few episodes of that.
It's not a good show!!!!!!
It's crazy! We were watching one of the episodes a few days ago and we thought that CW might be doing better, then they ruined the whole episode. | english |
Need a pre-purchase inspection? Our highly-qualified inspector will make a comprehensive inspection and provide a detailed report containing everything you need to know. The best prices and most qualified inspectors in Perth, book now to buy your new house with confidence!
Property purchase is probably one of the biggest investments you will make and our easy-to-read property condition reports are designed to help you in making the right buying decision. Our property reports are drafted according to Australian Standards and are recognised both by estate agents and lawyers. Our report will definitely give you complete peace of mind and assurance about buying a particular property and it can even help you avoid making a bad deal that could cost you thousands for costly repairs.
The property you are considering may look really great and you may be told that it is in excellent condition. However, in reality, the freshly painted areas may be concealing flaws and rotten frames, which will require costly repairs and maintenance. So, it is very important to get an industry expert to give you a second opinion on the property.
Our objective is to safeguard your investment and, to this end, our property inspector will look out for cover-ups by the homeowner and for defects, to assess the actual level of damage. Our inspectors will bring along specialist equipment, ladders and powerful torches to inspect all the visible areas of the property, and they will also check the insides of the walls for moisture, leaks and termites by making use of thermal and radar equipment.
Several of the other inspection service providers charge you extra for all these services; however, these are all included in our base fixed cost.
Typically, pest inspection involves tapping, drilling and even tearing out portions of the walls. This is not only extremely messy but also very disruptive, and often deeply housed nesting areas are completely missed. However, the state-of-the-art equipment and technology used by our inspectors can detect the presence of pests without causing any damage to your property. They can look through the floors, walls and ceilings without leaving a trace by making use of a combination of moisture, radar and thermal technology.
All our RBI inspectors are trained to handle Termatrac equipment — that is the only device which can detect and pinpoint termite presence accurately — without the need to penetrate the floors, ceilings and walls physically. Our inspector will inspect all the accessible and visible portions of the property for current and likely nesting hotspots.
Our condition reports are drafted in simple English and are prepared to current industry standards. All the areas of concern will be highlighted in the report, and we will also pinpoint all the defects and hazards which may require expensive repairs. Along with the report, we will provide photographs and our suggestions on the work needed to rectify the problems.
The property condition reports drafted by us are according to Australian Standards and are recognised both by estate agents, as well as, solicitors. And, if we discover any problem areas which can compromise your investment, you can make use of our report to better your position to negotiate with the seller or use it as grounds to end your contract. This essentially means that you can make your decision to purchase any property knowing that it has been verified by experts.
Why Choose Rapid Building Inspections in Perth?
Our inspectors follow the Australian best practice in pest inspection and are trained and qualified according to the Australia inspection standards AS4349.1-2010 and AS4349.3. We are authorised to conduct pest inspections in the Perth area.
Our prices are low because we keep the interactions between the property seller and RBI inspector to the minimum. And for this, we require access to the property at the appointed date and time. Coordinating for access to the property can be very time consuming and we cannot have our inspectors calling the sellers and agents to arrange all the details.
We guarantee to provide a property condition report of the highest quality; however, for that, we require you to confirm access to the property at the agreed time. If you are unable to provide access for our inspector at that time, we request that you give us 24 hours' advance notice so that we are able to reschedule your booking.
Mr. Lanning is a member of Hudsonville Protestant Reformed Church and a science teacher at Covenant Christian High School (GR).
Creationism—Vs. the Testimony of "Science"
Mr. Lanning is head of the science department in Covenant Christian High School in Grand Rapids, Michigan. | english |
Sun-dried appassimento grapes give an intense, smooth and richly fruity red wine.
Beautifully presented and widely acclaimed Provençal rosé produced under the guidance of the Perrin family of Château Beaucastel.
A light and lively organic, 'vegan friendly' blend of Sicily's own Catarratto with Pinot Grigio.
90/100, Decanter Asia Wine Awards 2018. A Spanish classic. Fresh, fruity aromas and satisfying, gently spicy drinking.
A rare and delicious Blanc de Blancs from an emperor of Pinot Noir.
or choose 'Collect from Marina' when you buy online and pick up your order when it suits you.
An à la carte menu of BB favourites, available for delivery to your door. | english |
Grand trailer of Baahubali 2: The Conclusion released
By Fundabook Team
The trailer of Baahubali 2: The Conclusion, directed by S. S. Rajamouli, was released on Thursday. The filmmakers had decided that the trailer would be screened from 9 am in around 300 cinemas in Andhra Pradesh and Telangana and then released online after 5 pm, but it appeared on YouTube on Thursday morning itself.
According to news agency IANS, the trailer of the film's Tamil version had leaked on social media a few hours before the official release. Following the leak, the producers have decided to take legal action.
Here is the grand trailer
After the leak, the trailers of the film's Tamil, Malayalam, Hindi and Telugu versions were released online. The 2-minute-24-second trailer is quite striking and shows glimpses of the infighting within the Mahishmati kingdom, the love between Baahubali and Devasena, and Baahubali's death. At one point in the trailer Baahubali tells Kattappa, "As long as you are with me, uncle, no one can kill me." Yet ever since the release of Baahubali: The Beginning two years ago, the big question for audiences has remained: why did Kattappa kill Baahubali?
As soon as the trailer was released, Baahubali 2 began trending on all social media sites, including Twitter. Twitter users have called the film tremendous, and Bollywood stars such as Alia Bhatt and Varun Dhawan have praised it. The Hindi version of the film is produced by Karan Johar's Dharma Productions.
In Baahubali: The Conclusion, Prabhas, Rana Daggubati, Anushka Shetty, Sathyaraj and Ramya Krishnan reprise their roles from the previous film. The film releases on 28 April 2017. Baahubali: The Beginning, released in July 2015, proved to be a huge hit, doing business of Rs 650 crore.
Popular Indian YouTubers: the 5 crorepati Indian YouTubers of the YouTube world
These 5 crorepati (multi-millionaire) Indian YouTubers of the YouTube world, two of whom have 10 million subscribers
Popular Indian YouTubers: In today's internet world, social media has attracted more people than anything else.
Today most people spend 18 of their 24 hours on social media alone. While people stay active on Facebook, Twitter and Insta all day, they have made YouTube their means of entertainment.
YouTube is a platform where you can find every kind of video, from songs to movies and from news to TV programmes. But ever since Jio made 4G available, people have turned YouTube into a platform for showcasing their talent. Countless channels open here every day, and countless channels also get shut down because of copyright.
Popular Indian YouTubers
The idea of making YouTube videos came to him when a reporter asked an insensitive question about the Kashmir floods; he made a video about it and uploaded it to YouTube, where it drew lakhs of views and thousands of comments. After that he made 12 parts of 'Master Ji' and attracted people so strongly that he became a star overnight. Recently his short film 'Plus Minus' released on 14 September.
The YouTube journey of Haryana lad Amit Bhadana began in February 2017, and by crossing 10 million in just one and a half years he proved to everyone that people really enjoy his rhyming dialogue. His content is quite clean and is entirely in the Haryanvi language, a style his fans love.
Before YouTube, Amit had dubbed a video on Facebook, which people liked a lot. His friends advised him to make vines on YouTube, after which he began making and posting videos on YouTube together with his friends. Today he earns crores from YouTube.
So these are the popular Indian YouTubers. In today's age of technology you too can make YouTube your career option. If your talent is unique, then show it off on this platform and become a YouTuber yourself. We hope you liked this article, so don't forget to rate it, and tell us in the comment box below which of these YouTubers is your favourite.
Gallery Oldham opened in 2000, with an innovative way of telling the story of Oldham through five changing exhibition spaces. However, Oldham’s community, although they liked the temporary exhibitions, clung on to the idea of more permanent displays about Oldham’s story. At the same time, the award winning Coliseum Theatre, known for its community-led productions and engagement, was beginning to look for a new home. Julia Holberry Associates together with Levitt Bernstein Associates and Davis Langdon were hired to consider the options of a merger of the two organisations – Museum and Theatre – into a single building.
The project involved stakeholder and public consultation, comparator analysis, visioning and governance workshops, service and site options appraisal, analysing capital and revenue implications, scoring the options and recommending a preferred option. The recommended option was approved in June and a Round 1 Heritage Lottery Fund application was submitted at the beginning of August. | english |
\begin{document}
\title{The $v_1$-Periodic Region in the cohomology of the $\C$-motivic Steenrod algebra}
\author{Ang Li}
\maketitle
\begin{abstract}
We establish a $v_1$-periodicity theorem in $\Ext$ over the $\C$-motivic Steenrod algebra. The element $h_1$ of $\Ext$, which detects the homotopy class $\eta$ in the motivic Adams spectral sequence, is non-nilpotent and therefore generates $h_1$-towers. Our result is that, apart from these $h_1$-towers, $v_1$-periodicity operators give isomorphisms in a range near the top of the Adams chart. This result generalizes well-known classical behavior.
\end{abstract}
\section{Introduction}
\subsection{Background and Motivation}
One of the primary tools for computing stable homotopy groups of spheres is the Adams spectral sequence. The $E_2$-page of the Adams spectral sequence is given by $\Ext_{\cA^{cl}}^{*,*}(\F_2,\F_2)=H^{*,*}(\cA^{cl})$, which we denote by $\Ext_{cl}$, where $\cA^{cl}$ is the classical Steenrod algebra. For $\Ext_{cl}$, Adams \cite{Per} showed that there is a vanishing line of slope $\frac{1}{2}$ and intercept $\frac{3}{2}$, and J. P. May showed there is a periodicity line of slope $\frac{1}{5}$ and intercept $\frac{12}{5}$, where the periodicity operation is defined by the Massey product $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$. This result has not been published by May, but can be found in the thesis of Krause:
\begin{thm}\label{cpt}\cite[Theorem 5.14]{Kra}
For $r\geq 2$, the Massey product operation $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$ is uniquely defined on $\Ext_{cl}^{s,f}=H^{s,f}(\cA^{cl})$ when $s>0$ and $f>\frac{1}{2}s+3-2^r$, where $s$ is the stem, and $f$ is the Adams filtration.
Furthermore, for $f>\frac{1}{5}s+\frac{12}{5}$, the operation
\[P_r\colon H^{s,f}(\cA^{cl})\xrightarrow{\iso} H^{s+2^{r+1},f+2^r}(\cA^{cl})\]
is an isomorphism.
\end{thm}
The purpose of this article is to discuss an analog of the theorem above in the $\C$-motivic context. Motivic homotopy theory, also known as $\A^1$-homotopy theory, is a way to apply the techniques of algebraic topology, specifically homotopy, to algebraic varieties and, more generally, to schemes. The theory was formulated by Morel and Voevodsky \cite{MV}.\\
In this paper we analyze the case where the base field $F$ is the complex numbers $\C$. Let $\M_2$ denote the bigraded motivic cohomology ring of Spec $\C$, with $\F_2=\Z/2$-coefficients. Voevodsky \cite{Voe} proved that $\M_2\iso \F_2[\tau]$. Let $\cA$ be the mod 2 motivic Steenrod algebra over $\C$. The motivic Adams spectral sequence is a trigraded spectral sequence with \[E_2^{*,*,*}=\Ext_\cA^{*,*,*}(\M_2,\M_2),\] where the third grading is the motivic weight. (See Dugger and Isaksen \cite{DI1}). The $\C$-motivic $E_2$-page, which we denote by $\Ext$, has a vanishing line computed by Guillou and Isaksen \cite{GI1}. Quigley has a partial result for the motivic periodicity theorem in the case $r=2$ \cite[Corollary 5.4]{JD}.\\
The multiplication by 2 map $S^{0,0}\xrightarrow{2}S^{0,0}$ is detected by $h_0$, and the Hopf map $S^{1,1}\xrightarrow{\eta}S^{0,0}$ is detected by $h_1$ in $\Ext$. These elements have degrees $(0,1,0)$ and $(1,1,1)$ respectively. By an infinite $h_1$-tower we will mean a non-zero sequence of elements of the form $h_1^kx$ in $\Ext$ with $k\geq 0$, where $x$ is not $h_1$ divisible. We will write $h_1$-towers for infinite $h_1$-towers, and refer to $x$ as the base of the $h_1$-tower $h_1^kx$ ($k\geq 0$). A short discussion on the $h_1$-towers can be found in subsection \ref{future}. Since all $h_1$-towers are $\tau$-torsion, one might guess that the motivic $\Ext$ groups differ from the classical $\Ext^{cl}$ groups by only infinite $h_1$-towers. This is not true, but we may expect the $h_1$-torsion part of $\Ext$ to obtain a pattern similar to $\Ext^{cl}$. Our result pertains solely to this $h_1$-torsion region.
\begin{rmk}
Let $\cA\dual$ denote the dual Steenrod algebra. For $\Ext$, we can work over $\cA\dual$ instead of $\cA$. i.e. \[E_2^{*,*,*}\iso \Ext_{\cA\dual}^{*,*,*}(\M_2,\M_2)\dual.\]
Here we view $\M_2$ as the homology of the motivic sphere instead of the cohomology; this is an $\cA\dual$-comodule.\\
\end{rmk}
The goal of this paper is the following theorem:
\begin{thm}\label{mpt}
For $r\geq 2$, the Massey product operation $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$ is uniquely defined on $\Ext^{s,f,w}=H^{s,f,w}(\cA)$ when $s>0$ and $f>\frac{1}{2}s+3-2^r$.
Furthermore, for $f>\frac{1}{5}s+\frac{12}{5}$, the restriction of $P_r$ to the $h_1$-torsion
\[P_r\colon [H^{s,f,w}(\cA)]_{h_1-\text{torsion}}\to [H^{s+2^{r+1},f+2^r,w+2^r}(\cA)]_{h_1-\text{torsion}}\]
is an isomorphism.
\end{thm}
There are close connections between the classical Adams spectral sequence and the motivic Adams spectral sequence. For instance, by inverting $\tau$ in $\Ext$, we obtain $\Ext^{cl}$. There are also abundant connections between the $\C$-motivic $\Ext$ groups, the $\R$-motivic $\Ext$ groups and the $C_2$-equivariant $\Ext$ groups. The $\rho$-Bockstein spectral sequence \cite{MH} takes the $\C$-motivic $\Ext$ groups as input and computes the $\R$-motivic $\Ext$ groups. The $C_2$-equivariant $\Ext$ groups can then be obtained \cite{GHIR} by calculating $\R$-motivic $\Ext$ groups for a negative cone. Our periodicity results ought to be relevant for future computations in $\R$-motivic and $C_2$-equivariant homotopy theory.\\
\subsection{Further Considerations}\label{future}
We study the $h_1$-torsion part of $Ext$; the $h_1$-periodic part has been entirely computed in \cite{GI2}.
\begin{thm}\cite[Theorem 1.1]{GI2}\label{h1inv}
The $h_1$-inverted algebra $\Ext_\cA[h_1\inv]$ is a polynomial algebra over $\F_2[h_1^{\pm1}]$ on generators $v_1^4$ and $v_n$ for $n\geq 2$, where:
\begin{enumerate}
\item $v_1^4$ is in the $8$-stem and has Adams filtration $4$ and weight $4$.
\item $v_n$ is in the $(2^{n+1}-2)$-stem and has Adams filtration $1$ and weight $2^n-1$.
\end{enumerate}
\end{thm}
It is straightforward that $P_r$ acts injectively on the $h_1$-inverted $\Ext$; that is, $P_r$ sends an $h_1$-tower $h_1^kx$ ($k\geq 0$) to another $h_1$-tower $h_1^ly$ ($l\geq 0$). But the base $x$ might not be sent to the base $y$. As for surjectivity, there are $h_1$-towers not in the image of $P_r$, such as the $h_1$-tower on $c_0$; those are not multiples of $v_1^4$ in the $h_1$-inverted $\Ext$. Partial results about the bases of those $h_1$-towers can be found in \cite{HT}. We expect that the determination of the bases of the $h_1$-towers will lead to a complete understanding of the region in which the $v_1$-periodicity operator acts as an isomorphism on $Ext$.\\
There is another periodicity element $w_1$ in motivic $Ext$, which does not exist classically. Analogously to the Massey product $P_2(-):=\langle h_3, h_0^4,-\rangle$, there is another Massey product $g(-):=\langle h_4,h_1^4,-\rangle$. For many values of $x$, $P_2(x)$ is detected by $Px$, where $P=h_{20}^4$ has degree $(8,4,4)$ in the May spectral sequence. Similarly, for many values of $x$, $g(x)$ will be detected by $h_{21}^4\cdot x$, where $h_{21}^4$ has degree $(20,4,12)$ in the May spectral sequence. The obstruction to studying $w_1$-periodicity is that $g$ has a relatively low slope. Thus the method in this paper is not applicable. In addition, our method relies on a computation involving $\Ext_{\cA(1)\dual}$, but $g$ restricts to zero in that group. Thus a strategy for studying $g$-periodicity would need to begin with $\Ext_{\cA(2)\dual}$, which is
much more complicated \cite{Isa}.
\subsection{Organization}
We follow the approach of \cite{Kra} primarily. In Section 2, we briefly introduce the stable (co)module category, in which we can consider the $h_0$ or $h_1$-torsion part of $\Ext$ by taking sequential colimits. In Section 3, we establish the existence of a homological self-map $\theta$ and use this to show that $P_r(-)$ is uniquely defined. In Section 4, we explicitly show where $\theta$ is an isomorphism over $\cA(1)\dual$, and obtain a region where it is an isomorphism over $\cA\dual$ by moving along the Cartan-Eilenberg spectral sequence. In Section 5, we combine the results of the previous two sections together to get the motivic periodicity theorem \ref{mpt}.
\section{Working environment: the stable (co)module category $Stab(\Gamma)$}
In order to restrict to working with only the $h_1$-torsion (also $h_0$-torsion) part, first we would like to choose a suitable working environment: a category with some nice properties that will serve our purposes. Usually $\Ext$ is defined in the derived category of $\cA\dual$-comodules, which we denote $D(\cA\dual)$. However, the coefficient ring $\M_2$ is not compact in $D(\cA\dual)$, which means that $\M_2$ does not interact well with colimits. The stable comodule category will better serve our purposes. That is a category $\sC$ such that:
\begin{enumerate}
\item If $M$ is a $\cA\dual$-comodule that is free of finite rank over $\M_2$ and $N$ is a $\cA\dual$-comodule, then $\Hom_\sC(M,N)\iso\Ext_{\cA\dual}(M,N)$.
\item If $M$ is a $\cA\dual$-comodule that is free of finite rank over $\M_2$, then $M$ is compact in $\sC$. That is to say, for any sequential colimit in $\sC$ of $\cA\dual$-comodules
\[\underset{i}{\colim} N_i:=\colim (N_0\xrightarrow{f_0}N_1\to \cdots\to N_i\xrightarrow{f_i}\cdots),\] we have $\underset{i}{\colim}\Ext_{\cA\dual}(M,N_i)\iso\Hom_\sC(M,\underset{i}{\colim} N_i)$
\end{enumerate}
The correct choice of $\sC$ is called $Stab(\cA\dual)$. The category can be constructed in various ways (see \cite[Sec. 2.1]{Bel} for details), and has several useful properties for our case. The following proposition summarizes some of the discussion in \cite[Sec. 4]{BHV}:
\begin{prop}
\label{smc}
The category $Stab(\cA\dual)$ satisfies conditions $(1)$ and $(2)$ above.
\end{prop}
Namely, for a Hopf algebra $\Gamma$ and comodule $M$ that is free of finite rank, we have a diagram
\[\xymatrix{
D(\Gamma)\ar@/_1pc/[dr]_-{\Hom_{D(\Gamma)}(iM,-)}& Comod_\Gamma\ar[l]_i\ar@{.>}[r]^j\ar[d]_{\Ext_\Gamma(M,-)}& Stab(\Gamma)\ar@/^1pc/[dl]^-{\Hom_{Stab(\Gamma)}(jM,-)}\\
&\mathbf{grAb}& \\
}
\]
where $i$ is the canonical functor and $j$ is well-defined only for comodules that are free of finite rank over $\M_2$. This diagram commutes. Because the stable comodule category cooperates nicely with taking colimits, we can compute the colimit of a sequence of $\Ext_\Gamma(M,N)$.\\
Here we introduce notation that will be used in future sections.
\begin{notn}\label{notation}
For a spectrum $M$ such that $H_*(M)$ is free of finite rank over $\M_2$, let $M$ also denote the embedded image of the homology of the spectrum $M$ in the stable comodule category (i.e., $M=j(H_*(M))$). We use $[M,N]_{*,*,*}^\Gamma$ to denote $\Hom_{Stab(\Gamma)}(M,N)$, where $M$, $N\in Stab(\Gamma)$. For example, if $M=S^0$, then $H_*(S^0)=\M_2$, which we also denote by $S$. Thus $\Ext_{\cA\dual}^{s,f,w}(\M_2,\M_2) = [S,S]^{\cA\dual}_{s,f,w}$.
When $\Gamma$ is the motivic dual Steenrod algebra, we omit the superscript $\Gamma$. This notation is consistent with \cite{Kra}.\\
We use the grading $(s,f,w)$, where $s$ is the stem, $f$ is the Adams filtration and $w$ is the motivic weight. Notice that $t=s+f$ is the internal degree. Given a self-map $\theta$: $\Sigma^{s_0,f_0,w_0}M\xrightarrow{\theta} M$ in $Stab(\cA\dual)$, we have a cofiber sequence $\Sigma^{s_0,f_0,w_0}M\xrightarrow{\theta} M\to M/\theta$ in $Stab(\cA\dual)$. The associated long exact sequence will be indexed as follows:
\[
\cdots\to [M,N]_{s+s_0+1,f+f_0-1,w+w_0}\to [M/\theta,N]_{s,f,w}\to [M,N]_{s,f,w}\to [M,N]_{s+s_0,f+f_0,w+w_0}\to \cdots
\]
Sometimes we omit indices when there is no risk of confusion.
\end{notn}
\section{Self-maps and Massey products}
In this section, we show that the cofiber $S/h_0^k$ admits a self-map and identify it with the Massey product in Theorem~\ref{mpt}. Self-maps are maps of suspensions of an object to itself. For a dualizable object $Y$, self maps $\Sigma^nY\to Y$ can also be described as elements of $\pi_*(Y\otimes DY)$, with $DY$ the $\otimes$-dual of $Y$. In this paper we mainly deal with homological self-maps in $Stab(\cA\dual)$.\\
When considering the vanishing region and the periodicity region, we only work with the $h_0$-torsion part. (Of course, this is not much of a loss: as classically, the only $h_0$-local elements are in the 0-stem.) We next investigate the $h_1$-torsion part inside the $h_0$-torsion. For this purpose, we introduce the following notion.
\begin{defn}\label{F0}
Let $F_0$ be the fiber of $S\to S[h_0\inv]$, where $S[h_0\inv]:=\colim(S^0\xrightarrow{h_0}S\xrightarrow{h_0}\cdots)$ in $Stab(\cA\dual)$. Similarly, let $F_{01}$ be the fiber of $F_0\to F_0[h_1\inv]$ with $F_0[h_1\inv]$ defined as an analogous colimit.
\end{defn}
The group $[S,F_{01}]$ contains the subset of $[S,S]$ consisting of elements that are both $h_0$- and $h_1$-torsion, as well as the negative parts of those $h_0$ and $h_1$-towers in $F_0[h_1\inv]$. The regions we are considering are unaffected. We display the corresponding $\Ext$ groups in Figure \ref{sf0} and \ref{sf01}.
\\
\begin{figure}
\caption{$[S,F_0]^{\cA^\vee}$}
\label{sf0}
\caption{$[S,F_{01}]^{\cA^\vee}$}
\label{sf01}
\end{figure}
The periodicity operator $P$ corresponds to multiplying by the element $h_{20}^4$ of the May spectral sequence, meaning that for many values of $x$, $h_{20}^4x\in \langle h_3,h_0^4,x\rangle$. However, $h_{20}^4$ does not survive to $\Ext$. As a result, multiplying by $P$ is not a map from $[S,S]$ to $[S,S]$. Luckily, \cite[Figure 2]{GI1} shows that $P$ survives in $[S/h_0,S]$. Similarly, we have the following proposition:
\begin{prop}\label{exist}
The element $h_{20}^{2^r}$ survives the May spectral sequence to $[S/h_0^k,S]$ for $k\leq 2^r$, and thus gives a corresponding element $P^{2^{r-2}}$ in $[S/h_0^k,S/h_0^k]$, i.e. a self-map of $S/h_0^k$.
\end{prop}
If $N$ is an $\cA\dual$-comodule in $Stab(\cA\dual)$, then $[S/h_0^k,S/h_0^k]$ acts on $[S/h_0^k,N]$. The corresponding element $P$ (or some power of $P$) inside $[S/h_0^k,S/h_0^k]$ induces a map from $[S/h_0^k,N]$ to itself. We would like to show that for any $k\leq 2^r$ and $r\geq 2$, multiplying by $P^{2^{r-2}}$ on $[S/h_0^k, S]$ coincides with the Massey product $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$ in a certain region. In other words, we must show that there is zero indeterminacy.
The Massey product is defined on the kernel
of $h_0^{2^r}$ on $[S,S]$, which we will denote $ker(h_0^{2^r})$. It lands in the cokernel of multiplication by $h_{r+1}$:
\[P_r(-): ker(h_0^{2^r})\to [S, S]/h_{r+1.}\]
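For reference, with the usual degree convention for threefold Massey products (the stem is raised by one and the Adams filtration lowered by one relative to the sum of the degrees of the entries, while the weights add), $P_r$ shifts the degree $(s,f,w)$ by
\[(2^{r+1}-1,\,1,\,2^r)+(0,\,2^r,\,0)+(1,\,-1,\,0)=(2^{r+1},\,2^r,\,2^r),\]
in agreement with the shift appearing in Theorem \ref{mpt}.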
\begin{rmk}
Originally one would like to consider the following square and see that it commutes in a certain region
\[\xymatrix{[S/h_0^k,S]\ar[r]^{-\cdot P^{2^{r-2}}}\ar[d]&[S/h_0^k,S]\ar[d]\\
ker(h_0^{2^r})\ar[r]_-{P_r(-)}&[S, S]/h_{r+1}.}
\]
The vertical maps are induced by $S\to S/h_0^k$. However, since we do not have the vanishing region $f>\frac{1}{2}s+\frac{3}{2}$ that is available in the classical setting, the region where the vertical maps are isomorphisms is not satisfactory. We solve this problem by restricting attention to the $h_0$- and $h_1$-torsion.
\end{rmk}
To better fit our purposes, consider the Massey product defined on $[S,F_{01}]$\[P_r(-): ker_{F_{01}}(h_0^{2^r})\to [S, F_{01}]/h_{r+1.}\]
This gives the following squares, over which we have more control:
\begin{equation}
\xymatrix{[S/h_0^k,F_{01}]\ar[r]^{-\cdot P^{2^{r-2}}}\ar[d]&[S/h_0^k,F_{01}]\ar[d]\\
ker_{F_{01}}(h_0^{2^r})\ar[r]_-{P_r(-)}\ar[d]&[S, F_{01}]/h_{r+1}\ar[d]\\
ker_S(h_0^{2^r})\ar[r]_-{P_r(-)}&[S, S]/h_{r+1}}
\label{topsquare}
\end{equation}
The canonical map $F_{01}\to S$ induces a map $[S,F_{01}]\to[S,S]$ given by inclusion on the $h_0-$ and $h_1$-torsion elements and which sends negative towers to zero. The bottom square commutes for $s>0$ and $f>0$ modulo potential indeterminacy. We would like to show that the indeterminacy vanishes under some conditions.
Let $C(\eta)$ denote the cofiber of the first Hopf map \[S^{1,1}\xrightarrow{\eta}S^{0,0}.\] Writing $C_\eta$ for the cohomology $H^{*,*}(C(\eta))$, we have the following result:
\begin{thm}\cite[Theorem 1.1]{GI1}\label{vceta}
The group $\Ext_\cA^{s,f,w}(\M_2,C_\eta)$ vanishes when $s>0$ and $f>\frac{1}{2}s+\frac{3}{2}$.
\end{thm}
Theorem \ref{vceta} gives us that $[S, C_\eta]_{s,f,w}$ vanishes when $s>0$ and $f>\frac{1}{2}s+\frac{3}{2}$. In other words, there are only $h_1$-towers when $s>0$ and $f>\frac{1}{2}s+\frac{3}{2}$ in $[S,S]_{s,f,w}$. Moreover, we have the following fact:
\begin{prop}[Corollary of {\cite[Theorem 1.1]{GI2}}]
For $r\geq 1$, $h_{r+1}$ does not support an $h_1$-tower.
\end{prop}
Therefore the indeterminacy $(h_{r+1}[S,S])_{s,f,w}$ must vanish when $f>\frac{1}{2}s+3-2^r$, because of the following two facts: $h_{r+1}$ lies in stem $s=2^{r+1}-1$, and when $s>0$ and $f>\frac{1}{2}s+\frac{3}{2}$ the groups $[S,S]_{s,f,w}$ consist only of $h_1$-towers, which are $h_{r+1}$-torsion.
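Indeed, if a product $h_{r+1}y$ lies in degree $(s,f,w)$ with $s>0$ and $f>\frac{1}{2}s+3-2^r$, then $y$ lies in degree $(s-2^{r+1}+1,\,f-1,\,w-2^r)$, and
\[f-1>\tfrac{1}{2}s+2-2^r=\tfrac{1}{2}\left(s-2^{r+1}+1\right)+\tfrac{3}{2},\]
so whenever $s-2^{r+1}+1>0$ the class $y$ lies in the region described above, where only $h_1$-towers occur.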
\begin{rmk}\label{ind}
It is easy to see that the indeterminacy $(h_{r+1}[S,F_{01}])_{s,f,w}$ also vanishes when $f>\frac{1}{2}s+3-2^r$.
\end{rmk}
The first row of the top square in \eqref{topsquare} is multiplication by some power of the element $P$. We next determine when the vertical maps are isomorphisms.
\begin{lemma}[Motivic version of {\cite[Lemma 5.2]{Kra}}]\label{lemma 5.2}
Let $M,N\in Stab(\cA\dual)$.
Assume that $[M,N]$ vanishes when $f>as+bw+c$ for some $a,b,c\in \R$, let $\theta:\Sigma^{s_0,f_0,w_0}M\to M$ be a map with $f_0>as_0+bw_0$, and let $M/\theta$ denote the cofiber of $\Sigma^{s_0,f_0,w_0}M\xrightarrow{\theta} M$.
Then \[[M/\theta,N]\to [M,N]\]
is an isomorphism above a vanishing plane parallel with the one in $[M,N]$ but with $f$-intercept given by $c-(f_0-as_0-bw_0)$.
\end{lemma}
\begin{pf}
The result follows from the long exact sequence associated to the cofiber sequence $\Sigma^{s_0,f_0,w_0}M\xrightarrow{\theta} M\to M/\theta$:
\[
\cdots\to [M,N]_{s+s_0+1,f+f_0-1,w+w_0}\to [M/\theta,N]_{s,f,w}\to [M,N]_{s,f,w}\to [M,N]_{s+s_0,f+f_0,w+w_0}\to \cdots
\]
\end{pf}
\begin{rmk}
This approach could also apply to a vanishing region above several planes or even a surface. The vanishing condition of Lemma \ref{lemma 5.2} could be rephrased as the following:
Assume that $[M,N]_{*,*,*}$ vanishes when $f>\varphi(s,w)$ where $\varphi:\R^2\to\R$ is a smooth function. Then the gradient $v(-,-)=(\frac{\bdry\varphi}{\bdry s}(-),\frac{\bdry\varphi}{\bdry w}(-))$ is a vector field. Let $d=\underset{(s_0,w_0)}{max}|v(s_0,w_0)|$, and assume both $\frac{f_0}{s_0}$ and $\frac{f_0}{w_0}$ are greater than $d$. The remaining proof would follow similarly, with the $f$-intercept given by $max \{c-(f_0-ds_0),c-(f_0-dw_0)\}$.
\end{rmk}
We have this as a corollary:
\begin{cor}[Motivic version of {\cite[Lemma 5.9]{Kra}}]\label{ud}
Let $k\geq 1$. For $f>\frac{1}{2}s+\frac{3}{2}-k$, the natural map $[S/h_0^k,F_{01}]_{s,f,w}\to [S,F_{01}]_{s,f,w}$ is an isomorphism.
\end{cor}
\begin{pf}
To determine this, we need to confirm that $[S,F_{01}]_{s,f,w}$ admits a vanishing region of $f>\frac{1}{2}s+\frac{3}{2}$. The fiber sequence $F_{01}\to F_0\hookrightarrow F_0[h_1\inv]$ gives us an exact sequence: \[\cdots\to [S,F_{01}]_{s,f,w}\to [S, F_0]_{s,f,w}\xhookrightarrow{h_1\inv}[S,F_0[h_1\inv]]_{s,f,w}\to[S, \Sigma^{1,-1,0}F_{01}]_{s,f,w}\to \cdots\]
Since $[S,F_0]$ differs from $[S,S]$ only in the first column, there are only $h_1$-towers when $f>\frac{1}{2}s+\frac{3}{2}$. And by Theorem \ref{vceta} again, $[S,C_\eta]_{s,f,w}$ vanishes when $s>0$ and $f>\frac{1}{2}s+\frac{3}{2}$. In other words, above the plane $f=\frac{1}{2}s+\frac{3}{2}$, multiplying by $h_1$, which detects $\eta$, is an isomorphism from $[S, F_0]_{s,f,w}$ to $[S,F_0]_{s+1,f+1,w+1}$.
As a result, inverting $h_1$ would be an isomorphism from $[S,F_0]_{s,f,w}$ to $[S,F_0[h_1\inv]]_{s,f,w}$ when $f>\frac{1}{2}s+\frac{3}{2}$. Therefore, $[S,F_{01}]_{s,f,w}$ vanishes when $f>\frac{1}{2}s+\frac{3}{2}$. Applying Lemma \ref{lemma 5.2} gives the corollary.
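Concretely, the self-map $\theta=h_0^k$ has degree $(s_0,f_0,w_0)=(0,k,0)$, so with $a=\tfrac{1}{2}$, $b=0$ and $c=\tfrac{3}{2}$ the $f$-intercept provided by Lemma \ref{lemma 5.2} is
\[c-(f_0-as_0-bw_0)=\tfrac{3}{2}-k,\]
as claimed.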
\end{pf}
The results in \ref{exist} and \ref{ind} locate the region where both squares commute, thus obtaining the first part of Theorem \ref{mpt}.
\begin{thm}[Motivic version of {\cite[Proposition 5.12]{Kra}}]\label{selfmap}
For $k\leq 2^r$ and $r\geq 2$, the cofiber $S/h_0^k$ admits a self-map $P^{2^{r-2}}$ of degree $(2^{r+1}, 2^r, 2^r)$. Thus, for any $N\in Stab(\cA\dual)$, composition with $P^{2^{r-2}}$ defines a self-map on $[S/h_0^k,N]$.
When $f>\frac{1}{2}s+3-k$, the induced map coincides with the Massey product $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$ with zero indeterminacy.
\end{thm}
\section{Colimits and the Cartan-Eilenberg spectral sequence}
We will obtain a vanishing region for $[S/(h_0,P),F_{01}]_{*,*,*}$ in this section. Consider the colimit \[F_0/h_1^\infty:=\underset{i}{\colim}(\Sigma^{-1,-1,-1}F_0/h_1\xrightarrow{h_1}\cdots\xrightarrow{h_1}\Sigma^{-i,-i,-i}F_0/h_1^i\xrightarrow{h_1}\cdots)\] in $Stab(\cA\dual)$. As we show in the following result, it differs from $F_{01}$ by a suspension in the region we are considering.
\begin{prop}\label{shift}
When $f>\frac{1}{2}s+\frac{3}{2}$,
\[[S,\Sigma^{-1,1,0}F_0/h_1^\infty]_{s,f,w}\iso [S,F_{01}]_{s,f,w}\]
\end{prop}
\begin{pf}
To see this, note that the colimit $F_0/h_1^\infty$ is a union of all the $h_1$-torsion in $F_0$, while the fiber $F_{01}$ detects the $h_1$-torsion together with those negative $h_1$-towers.
\end{pf}
Note that $F_0$ coincides with \[\Sigma^{-1,1,0}S/h_0^\infty:=\Sigma^{-1,1,0} \underset{i}{\colim}(\Sigma^{0,1,0}S/h_0\xrightarrow{h_0}\cdots\xrightarrow{h_0}\Sigma^{0,i,0}S/h_0^i\xrightarrow{h_0}\cdots),\] if we ignore the negative $h_0$-tower. That is, we have $[S,\Sigma^{-1,1,0} S/h_0^\infty]_{s,f,w}\iso [S,F_0]_{s,f,w}$ when $f>0$.
\begin{rmk}
We have shown that the map $[S/h_0^k,F_0/h_1^\infty]_{s,f,w}\to[S,F_0/h_1^\infty]_{s,f,w}$ is an isomorphism when $f>\frac{1}{2}s+3-k$. We consider this colimit because it is better for computational purposes (the fiber $F_{01}$ is harder to deal with than the colimit $F_0/h_1^\infty$).
\end{rmk}
Let $\theta$ be a self-map of $S/h_0^k$, and consider the cofiber sequence $S/h_0^k\xrightarrow{\theta}S/h_0^k\to S/(h_0^k,\theta)$. The vanishing region for $[S/(h_0^k,\theta),F_0/h_1^\infty]_{*,*,*}$ is the region where \[[S/h_0^k,F_0/h_1^\infty]_{s,f,w}\xrightarrow{\theta}[S/h_0^k,F_0/h_1^\infty]_{s+s_0,f+f_0,w+w_0}\] is an isomorphism. The goal of this section is to obtain a vanishing region for $[S/(h_0^k,\theta),F_0/h_1^\infty]_{*,*,*}$ in the case $k=1$ and $\theta=P$.\\
The dual Steenrod algebra is too large to work with, so we would like to start with a smaller one, namely $\cA(1)\dual\iso \M_2[\tau_0,\tau_1,\xi_1]/(\tau_0^2=\tau\xi_1,\tau_1^2,\xi_1^2)$. Then for $\cA\dual$-comodules $M$ and $N$ (thus also $\cA(1)\dual$-comodules), we can recover $[M,N]^{\cA\dual}$ from $[M,N]^{\cA(1)\dual}$ via infinitely many Cartan-Eilenberg spectral sequences along normal extensions of Hopf algebras.
A brief introduction to the Cartan-Eilenberg spectral sequence (see \cite[Ch.XV]{CE} for details) is relevant at this point. Given an extension of Hopf algebras over $\M_2$
\[E\to \Gamma \to C\]
(so in particular $E\iso \Gamma \square_C \M_2$),
the Cartan-Eilenberg spectral sequence computes $Cotor_\Gamma(M,N)$ for a $\Gamma$-comodule $M$ and an $E$-comodule $N$. The spectral sequence arises from the double complex ($\Gamma$-resolution of $M$)$\square_\Gamma$($E$-resolution of $N$), and we have $Cotor_\Gamma(M,N)\iso \Ext_\Gamma(M,N)$ when $M$ and $N$ are $\tau$-free.
The Cartan-Eilenberg spectral sequence has the form
\[E_1^{s,t,*,*}=Cotor_C^{t,*}(M,\Bar{E}^{\otimes s}\otimes N)\Rightarrow Cotor_\Gamma^{s+t,*}(M,N).\]
If $E$ has trivial $C$-coaction, then we have $E_1^{s,t,*,*}\iso Cotor_C^{t,*}(M,N)\otimes \Bar{E}^{\otimes s}$. Taking the cohomology we obtain the $E_2$-page: \[E_2^{s,t,*,*}=Cotor_E^{s,*}(\M_2,Cotor_C^{t,*}(M,N))\iso \Ext_E^{s,*}(\M_2,\M_2)\otimes \Ext_C^{t,*}(M,N).\]
Let $N=F_0/h_1^\infty$. We will compute $[S/h_0,F_0/h_1^\infty]^{\cA(1)\dual}$ as an intermediate step before reaching our goal of $[S/(h_0,P),F_0/h_1^\infty]^{\cA(1)\dual}$. As a starting point, we can compute $[S/h_0,F_0]$ over $\cA(1)\dual$, via the cofiber sequence $S\xrightarrow{h_0}S\to S/h_0$.
\begin{figure}
\caption{$[S/h_0,F_0]^{\cA(1)\dual}$}
\end{figure}
This is periodic, where the periodicity shifts degree by $(8,4,4)$. Since $[S/h_0,F_0/h_1^\infty]^{\cA(1)\dual}$ is a colimit, it is essential to know the maps over which we are taking the colimit. First let us take a look at the maps induced by multiplying by $h_1$ (we abbreviate $\Sigma^{-i,-i,-i}$ to $\Sigma^{-i}$ in this diagram):\\
\[
\scalebox{0.87}{
\xymatrix @R=2em {
\ar[r]^-{h_1}&[S/h_0,\Sigma^{-1}F_0]\ar[r] \ar[d]_{h_1\circ\Sigma^{-1}}& [S/h_0,\Sigma^{-1}F_0/h_1]\ar[r]\ar[d]&\Sigma^{2,0,1} [S/h_0,\Sigma^{-1}F_0]\ar[d]^{id}\ar[r]^-{h_1}& \\
\ar[r]^-{h_1^2}&[S/h_0,\Sigma^{-2}F_0]\ar[d]_{h_1\circ\Sigma^{-1}}\ar[r]&[S/h_0,\Sigma^{-2}F_0/h_1^2]\ar[d]\ar[r]&\Sigma^{3,1,2}[S/h_0,\Sigma^{-2}F_0]\ar[d]\ar[r]^-{h_1^2}&\\
& & & & \\
}
}
\]
\begin{equation}
\label{colim}
\end{equation}
All rows are exact. The colimit of the column on the left is $coker(h_1^k)$, while the colimit of the column on the right is $ker(h_1^k)$. As a result, taking the colimit in the middle would merely be taking the colimits of the cokernel part from the left and the kernel part from the right. There are hidden multiplicative relations between the cokernel and kernel. However, they do not affect the vanishing region, which is our only goal. Here is a more illuminating diagram:\\
\[\scalebox{0.94}{
\xymatrix{
0\ar[r]&coker(h_1)\ar[r] \ar[d]_{h_1\circ\Sigma^{-1}}& [S/h_0,\Sigma^{-1}F_0/h_1]\ar[r]\ar[d]&ker(h_1)\ar[d]^{i}\ar[r]&0\\
0\ar[r]&coker(h_1^2)\ar[d]_{h_1\circ\Sigma^{-1}}\ar[r]&[S/h_0,\Sigma^{-2}F_0/h_1^2]\ar[d]\ar[r]&ker(h_1^2)\ar[d]\ar[r]&0\\
& & & & \\
}}
\]
The maps $i$ on the right column are canonical inclusions, and passing to colimits gives \[\underset{k}{\colim}(coker(h_1^k))\to[S/h_0,F_0/h_1^\infty]\to\underset{k}{\colim}(ker(h_1^k)).\] Working over the dual subalgebra $\cA(1)\dual$ we can calculate $[S/h_0,\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}_{*,*,*}$ directly. Furthermore we have:
\begin{prop}\label{inj}
For any $k\in \Z$, $k\geq 1$, the maps $[S/h_0,\Sigma^{-k}F_0/h_1^k]^{\cA(1)\dual}\to [S/h_0,\Sigma^{-k-1}F_0/h_1^{k+1}]^{\cA(1)\dual}$ are injective.
\end{prop}
The result of the calculation is shown in Figure \ref{period8}. The shift in the figure appears as a result of Proposition \ref{shift}.
\begin{figure}
\caption{$[S/h_0,\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}$}
\label{period8}
\end{figure}
This is periodic, with a periodicity degree shift of $(8,4,4)$, just as with $[S/h_0,F_0]^{\cA(1)\dual}$. Note that $[S/h_0,\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}_{*,*,*}$ differs from the classical $[S/h_0,S]^{\cA_{cl}(1)\dual}_{*,*}$ by two extra negative $h_1$-towers associated to each "lightning flash". The element in degree $(-1,0,-1)$ in the first pattern is generated by $\tau$ with a shift.
Recall the self-map $P$ on $S/h_0$ acts injectively as can be seen in Figure \ref{period8}. Combining this with the long exact sequence:
\[
\xymatrix @R=1.5ex{\cdots\ar[r]& [S/(h_0,P),F_0/h_1^\infty]_{s,f,w}^{\cA(1)\dual}\ar[r]& [S/h_0,F_0/h_1^\infty]_{s,f,w}^{\cA(1)\dual}\ar[r]^-{P}& \\
\ar[r]^-{P} &[S/h_0, F_0/h_1^\infty]_{s+8,f+4,w+4}^{\cA(1)\dual}\ar[r] &[S/(h_0,P),F_0/h_1^\infty]_{s-1,f+1,w}^{\cA(1)\dual}\ar[r]& \cdots\\
}
\]
\noindent gives $[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]_{*,*,*}^{\cA(1)\dual}$ as in Figure \ref{figure2}.
\begin{rmk}\label{also inj}
Analogously to Proposition \ref{inj}, for any $k\in \Z$, $k\geq 1$, the following maps are also injective: \[[S/(h_0,P),\Sigma^{-k}F_0/h_1^k]^{\cA(1)\dual}\to [S/(h_0,P),\Sigma^{-k-1}F_0/h_1^{k+1}]^{\cA(1)\dual}.\]
\end{rmk}
\begin{figure}
\caption{$[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}$}
\label{figure2}
\end{figure}
Next we will use the Cartan-Eilenberg spectral sequence to bootstrap our result from $\cA(1)\dual$-homology to $\cA\dual$-homology. The Cartan-Eilenberg spectral sequence converges when the input is a bounded-below $\cA\dual$-comodule. We will obtain a vanishing region for each finite stage $[S/(h_0,P),\Sigma^{-k}F_0/h_1^k]^{\cA\dual}$ and then deduce the vanishing region for $[S/(h_0,P),F_0/h_1^\infty]^{\cA\dual}$ by passing to the colimit.
\[
\scalebox{0.92}{
\xymatrix{
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA(1)\dual}\ar[r]\ar@{~>}[d]&[S/(h_0,P),\Sigma^{-2}F_0/h_1^2]^{\cA(1)\dual}\ar[r]\ar@{~>}[d]_{CESS}&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(1)\dual}\\
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA\dual}\ar[r]&[S/(h_0,P),\Sigma^{-2}F_0/h_1^2]^{\cA\dual}\ar[r]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA\dual}
}
}\\
\]
Going from $\cA(1)\dual$ to $\cA\dual$ is too big of a step, so we first calculate $[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual}$, where
\[\cA(2)\dual=\M_2[\tau_0,\tau_1,\tau_2,\xi_1,\xi_2]/(\tau_0^2=\tau\xi_1,\tau_1^2=\tau\xi_2,\tau_2^2,\xi_1^4,\xi_2^2).\]
To do this, we will use a sequence of normal maps of Hopf algebras:
\[\cA(2)\dual\to \cA(2)\dual/\xi_1^2\to \cA(2)\dual/(\xi_1^2,\xi_2)\to \cA(2)\dual/(\xi_1^2,\xi_2,\tau_2)=\cA(1)\dual.\]
First we consider the Cartan-Eilenberg spectral sequence corresponding to the extension $$E(\tau_2)\to \cA(2)\dual/(\xi_1^2,\xi_2)\to \cA(1)\dual.$$ The element $\tau_2$, which has degree $(6,1,3)$, corresponds to $h_{30}$ in the May spectral sequence. The $\cA(1)\dual$-coaction on $E(\tau_2)$ is trivial for degree reasons. So we start with the $E_1=E_2$-page, and deduce a vanishing region on $[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual/(\xi_1^2,\xi_2)}$.
\[
\xymatrix{
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA(1)\dual}\otimes\M_2[h_{30}]\ar[r]\ar@{=>}[d]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(1)\dual}\otimes\M_2[h_{30}]\\
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA(2)\dual/(\xi_1^2,\xi_2)}\ar[r]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual/(\xi_1^2,\xi_2)}
}
\]
For the normal extension $E(\beta)\to \Gamma \to C$ of Hopf algebras we state a motivic version of \cite[Lemma 4.10]{Kra}, which gives a relationship between the vanishing region for $[M,N]^\Gamma$ and the vanishing condition of $[M,N]^C$ together with the two "slopes" associated to $\beta$. Note that if $\beta$ has degree $(s_0,f_0,w_0)$, then $\frac{f_0}{s_0}$ and $\frac{f_0}{w_0}$ are the slopes of the projections of $(s_0,f_0,w_0)$ onto the plane $w=0$ and the plane $s=0$.
\begin{thm}\label{KraLem410}
Let $E(\alpha)\to \Gamma \xrightarrow{q} C$ be a normal extension of Hopf algebras and $M$,$N\in Stab(\Gamma)$. Suppose $\beta$ is an element in $[S,S]^E$ of degree $(s_0,f_0,w_0)$ with $s_0,f_0,w_0$ all positive. Its image in $[S,S]^\Gamma$ (which we also call $\beta$) acts on $[M,N]^\Gamma$. Suppose for some $a,b,c,m,c_0\in \R$ with $a,b>0$ and $m\geq\frac{f_0}{s_0}>0$, the group $[q_*(M),q_*(N)]^C$ vanishes when $f>as+bw+c$ and also vanishes when $f>ms+c_0$. Then
\begin{enumerate}
\item if $f_0\leq as_0+bw_0$, or $\beta$ acts nilpotently on $[M,N]^\Gamma$, then $[M,N]^\Gamma$ has a parallel vanishing region. In other words, it vanishes when $f>as+bw+c'$ for some constant $c'$ and also vanishes when $f>ms+c_0$.
\item otherwise, $[M,N]^\Gamma$ vanishes when $f>\frac{mbw_0-f_0(m-a)}{bw_0-s_0(m-a)}s+\frac{bf_0-mbs_0}{bw_0-s_0(m-a)}w+c'$ and vanishes when $f>ms+c_0$.
\end{enumerate}
\end{thm}
\begin{rmk}
The additional vanishing plane $f>ms+c_0$ generalizes the bounded below condition. In the classical setting, we have that $[M,N]^\Gamma$ vanishes when $s<c_0$, but due to the negative $h_1$-towers we do not have a vertical vanishing plane. So we adjust the "$\infty$-slope" plane to be $f=ms+c_0$ to fulfill our purpose. This bound does not affect the periodicity region we study here, so we omit it henceforth.
\end{rmk}
\begin{pf}[Proof of Theorem~\ref{KraLem410}]
If $\beta$ has $f_0\leq as_0+bw_0$, then $\beta$ multiples of classes in $[M,N]^C$ will lie under the existing vanishing planes.
If $f_0> as_0+bw_0$, then every infinite $\beta$-tower will eventually contain classes lying above the plane $f=as+bw+c$. If $\beta$ acts nilpotently, however, then there is a maximum length for all $\beta$-towers, and so we can still get a parallel vanishing plane $f>as+bw+c'$ on $[M,N]^\Gamma$ by adjusting the $f$-intercept.
Now we turn to case $(2)$. If $f_0> as_0+bw_0$ and $\beta$ acts non-nilpotently, then there must exist an element $x\in [M,N]^\Gamma$ for which the classes $\beta^kx$ are not zero on the $E_\infty$ page of the Cartan-Eilenberg spectral sequence for every $k$. Thus no matter how we move up the existing vanishing plane $f>as+bw+c$, some $\beta$ multiples of $x$ will lie above the plane. Instead, we will find a new vanishing plane $f>a's+b'w+c'$ for $a',b',c'\in \R$. The new vanishing region $f>a's+b'w+c'$ must satisfy the condition $f_0\leq a's_0+b'w_0+c'$. This plane is spanned by the direction of $\beta$ and the intersecting line of the two existing vanishing planes. Hence we can solve to obtain $a'=\frac{mbw_0-f_0(m-a)}{bw_0-s_0(m-a)}$ and $b'=\frac{bf_0-mbs_0}{bw_0-s_0(m-a)}$.
\end{pf}
\begin{rmk}\label{handy}
In the relevant cases, the starting vanishing regions will have $b=0$. One can think of these as 2-dimensional cases stated in 3-dimensional language.\\ We rewrite the conditions and the results of Theorem \ref{KraLem410} as the following:
Suppose for some $a,c,m,c_0\in \R$ with $a>0$ and $m\geq\frac{f_0}{s_0}>0$, the group $[q_*(M),q_*(N)]^C$ vanishes when $f>as+c$ and also vanishes when $f>ms+c_0$. Then:
\begin{enumerate}
\item if $f_0\leq as_0$, or $\beta$ acts nilpotently on $[M,N]^\Gamma$, then $[M,N]^\Gamma$ has a parallel vanishing region. That is to say, it vanishes when $f>as+c'$ for some constant $c'$, and also vanishes when $f>ms+c_0$,
\item if otherwise, then $[M,N]^\Gamma$ vanishes when $f>\frac{f_0}{s_0}s+c'$ for some constant $c'$, and vanishes when $f>ms+c_0$.
\end{enumerate}
\end{rmk}
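For instance, in the applications below the starting vanishing region has $a=0$, i.e. it is a horizontal plane $f>c$. If $\beta$ acts non-nilpotently, case $(2)$ produces a vanishing region whose slope is $\frac{f_0}{s_0}$; this gives slope
\[\tfrac{1}{6}\ \text{for}\ \beta=\tau_2\ \text{of degree}\ (6,1,3),\qquad \tfrac{1}{5}\ \text{for}\ \beta=\xi_2\ \text{of degree}\ (5,1,3).\]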
\begin{rmk}
Similarly, we could generalize to the statement that $[q_*(M),q_*(N)]^C$ vanishes when $f>\varphi(s,w)$ where $\varphi:\R^2\to\R$ is a smooth function. Then the gradient $v(-,-)=(\frac{\bdry\varphi}{\bdry s}(-),\frac{\bdry\varphi}{\bdry w}(-))$ is a vector field. Now we would like to consider $g=\underset{(s_0,w_0)}{Min}|v(s_0,w_0)|$ and compare $g$ with $\frac{f_0}{s_0}$ and $\frac{f_0}{w_0}$. The conditions can be rewritten as follows:
\begin{enumerate}
\item if $\frac{f_0}{s_0}\leq g$ or $\frac{f_0}{w_0}\leq g$, or $\beta$ acts nilpotently, then $[M,N]^\Gamma$ has a parallel vanishing region.
\item if both $\frac{f_0}{s_0}$ and $\frac{f_0}{w_0}>g$, and $\beta$ acts non-nilpotently, then we must modify the vanishing region of $[M,N]^\Gamma$. However, it takes some work to write down a precise modification, so we omit it here.
\end{enumerate}
\end{rmk}
\begin{rmk}
From the cofiber sequence $S\xrightarrow{h_0^k} S\to S/h_0^k$ we can take tensor duals to derive the fiber sequence $D(S/h_0^k)\to S\to S$. Since $D(S/h_0^k) \simeq \Sigma^{-1,1-k,0}S/h_0^k$, we have
\[[S/h_0^k, S]_{s,f,w}=[S, D(S/h_0^k)]_{s,f,w}=[S, S/h_0^k]_{s+1,f+k-1,w.}\]
Because $S/h_0^k$ is compact in $Stab(\cA\dual)$, smashing the second slot with some $N\in Stab(\cA\dual)$, we get \[[S/h_0^k,N]_{s,f,w}\iso[S,D(S/h_0^k)\smsh N]_{s,f,w}\iso [S,S/h_0^k\smsh N]_{s+1,f+k-1,w.}\]
As a result $\beta\in [S,S]^\Gamma$ acts on $[M,N]^\Gamma$ for compact $M\in Stab(\cA\dual)$, since $\beta$ acts on $[S,DM\smsh N]^\Gamma$.
\end{rmk}
The group $[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]_{*,*,*}^{\cA(1)\dual}$ has a single "lightning flash" pattern along with two negative $h_1$-towers (see Figure
\ref{figure2}), so the vanishing region to start off with is $f>c$ (we obtain the same vanishing region for $[S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]_{*,*,*}^{\cA(1)\dual}$ for each $k$, since the maps over which we take the colimit are injections by Remark \ref{also inj}). In our case, $[M,N]^\Gamma=[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]_{*,*,*}^{\cA(1)\dual}$, and we will apply Theorem \ref{KraLem410} in the following three cases: (i) $\beta$ is $\tau_2$ of degree $(6,1,3)$; (ii) $\beta$ is $\xi_2$ of degree $(5,1,3)$; (iii) $\beta$ is $\xi_1^2$ of degree $(3,1,2)$.
Recall that we are working with the Cartan-Eilenberg spectral sequence \[[S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]^{\cA(1)\dual}\otimes\M_2[h_{30}]\Rightarrow [S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]^{\cA(2)\dual/(\xi_1^2,\xi_2)}.\]
There cannot be any differentials for degree reasons. By Theorem \ref{KraLem410} the element $h_{30}$ will bring us a vanishing region $f>\frac{1}{6}s+c_1$ for each $k$, where $c_1$ is some constant (we obtain the same constant for all $k$). Passing to the colimit, we conclude that $[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^
\infty]^{\cA(2)\dual/(\xi_1^2,\xi_2)}$ shares the same vanishing region $f>\frac{1}{6}s+c_1$.\\
The second step is to consider the normal extension in which we add $\xi_2$, corresponding to the class $h_{21}$:
\[E(\xi_2)\to \cA(2)\dual/\xi_1^2\to \cA(2)\dual/(\xi_1^2,\xi_2).\]
The $\cA(2)\dual/(\xi_1^2,\xi_2)$-coaction on $E(\xi_2)$ is trivial. We have $E_2$-pages as the first row:
\[\scalebox{0.92}{
\xymatrix{
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA(2)\dual/(\xi_1^2,\xi_2)}\otimes\M_2[h_{21}]\ar[r]\ar@{=>}[d]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual/(\xi_1^2,\xi_2)}\otimes\M_2[h_{21}]\\
[S/(h_0,P),\Sigma^{-1}F_0/h_1]^{\cA(2)\dual/\xi_1^2}\ar[r]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual/\xi_1^2}
}}
\]
The spectral sequence collapses at the $E_2$-page. This is because in the May spectral sequence over $\cA(2)$ or $\cA$, there is a differential $d_1(h_{30})=h_0h_{21}+h_2h_{20}$, but $h_0$ and $h_2$ are zero in the group $[S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]^{\cA(2)\dual/(\xi_1^2,\xi_2)}$. As a result, $h_{21}$ is also non-nilpotent. For some constant $c_2$, the vanishing region of $[S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]^{\cA(2)\dual/\xi_1^2}$ is $f>\frac{1}{5}s+c_2$ for each $k$ according to Theorem \ref{KraLem410}, and the same is true for the colimit $[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(2)\dual/\xi_1^2}$.\\
Next we consider the Cartan-Eilenberg spectral sequence corresponding to the extension: \[E(\xi_1^2)\to \cA(2)\dual\to \cA(2)\dual/\xi_1^2.\]
Here the class $\xi_1^2$ corresponds to the class $h_2$ in the May spectral sequence. The $\cA(2)\dual/\xi_1^2$-coaction on $E(\xi_1^2)$ is trivial as well. We have $E_2$-pages as in the first row:
\[\xymatrix{
[S/(h_0,P),\Sigma\inv F_0/h_1]^{\cA(2)\dual/\xi_1^2}\otimes\M_2[h_2]\ar[r]\ar@{=>}[d]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual/\xi_1^2}\otimes\M_2[h_2]\\
[S/(h_0,P),\Sigma\inv F_0/h_1]^{\cA(2)\dual}\ar[r]&\cdots\ar[r]&[S/(h_0,P),F_0/h_1^\infty]^{\cA(2)\dual}
}
\]
Some non-zero differentials do appear here. In the previous steps, by introducing $[\tau_2]=(6,1,3)$ and $[\xi_2]=(5,1,3)$, which give rise to non-nilpotent elements in $\Ext$, we arrived at a vanishing region of $f>\frac{1}{5}s+c_3$, where $c_3$ is a constant. However, $[\xi_1^2]=(3,1,2)$ is nilpotent since $h_2^4=0$ in $\Ext_{\cA(2)\dual}$ and $\Ext$.
Moving from $\cA(2)\dual$ to $\cA\dual$, we have many more elements to introduce. However, those elements do not satisfy $\frac{f}{s}>\frac{1}{5}$. By Theorem \ref{KraLem410} (or Remark~\ref{handy}), for each $k$, $[S/(h_0,P),\Sigma^{-1,1,0}(\Sigma^{-k}F_0/h_1^k)]^{\cA\dual}$ vanishes above the plane $f=\frac{1}{5}s+c_3$. Since the vanishing plane passes through the point $(-6,0,-1)+3\cdot(3,1,2)=(3,3,5)$, the constant $c_3$ is $\frac{12}{5}$ and the region $f>\frac{1}{5}s+\frac{12}{5}$ would be carried through to $\cA\dual$. We conclude that
\begin{prop}\label{vrk=1}
The group $[S/(h_0,P),\Sigma^{-1,1,0}F_0/h_1^\infty]_{s,f,w}$ has a vanishing region of $f>\frac{1}{5}s+\frac{12}{5}$.
\end{prop}
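Explicitly, the intercept arises from requiring the plane of slope $\frac{1}{5}$ to pass through the point $(s,f)=(3,3)$ found above:
\[c_3=3-\tfrac{1}{5}\cdot 3=\tfrac{12}{5}.\]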
Note that it is possible for many reasons that the vanishing region we have found is not optimal. First, we could consider the "slope" of the motivic weight side $\frac{f}{w}$ instead of $\frac{f}{s}$ under certain bounded below conditions. Second, if other elements were included, more differentials would occur, allowing for a larger vanishing region. More calculation is required to clarify these cases.
\section{The motivic periodicity theorem}
Let $F_0$ and $F_{01}$ still be the same as in Definition \ref{F0}, so that $[S,\Sigma^{-1,1,0}F_0/h_1^\infty]_{s,f,w}\iso [S,F_{01}]_{s,f,w}$ when $f>\frac{1}{2}s+3$. Given a self-map $\theta$ on $S/h_0^k$ let us recall the diagram where the first row is exact:\\
\[
\scalebox{0.78}{
\xymatrix{
[S/(h_0^k,\theta),\Sigma^{-1,1,0}F_0/h_1^\infty]\ar[r]& [S/h_0^k,\Sigma^{-1,1,0}F_0/h_1^\infty]\ar[r]^-{\theta} \ar[d] & [S/h_0^k,\Sigma^{-1,1,0}F_0/h_1^\infty]\ar[d]\ar[r]& \Sigma^{-1,1,0}[S/(h_0^k,\theta),\Sigma^{-1,1,0}F_0/h_1^\infty] \\
& [S,\Sigma^{-1,1,0}F_0/h_1^\infty]\ar[r]^-{P_r(-)} & [S,\Sigma^{-1,1,0}F_0/h_1^\infty] &
}
}
\]\\
The vertical maps are isomorphisms whenever $f>\frac{1}{2}s+\frac{3}{2}-k$ due to Corollary \ref{ud}. We would like to further restrict the condition to $f>\frac{1}{2}s+3-k$ in order to eliminate the indeterminacy. The vanishing condition on $[S/(h_0^k,\theta),\Sigma^{-1,1,0}F_0/h_1^\infty]$, which is the same as the vanishing condition on $[S/(h_0^k,\theta),F_{01}]_{s,f,w}$, tells us whether $\theta$ is an isomorphism.\\
In the previous section, we established the case when $k=1$, given in Proposition \ref{vrk=1}. We show in Figure \ref{periodmore} the $(2^{r+1},2^r,2^r)$-periodic pattern for $[S/h_0^k,\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}$, where $k\leq 2^r$. By an analogous computation, one can see that for a general positive integer $k\leq 2^r$, the groups $[S/(h_0^k,P^{2^{r-2}}),F_{01}]_{s,f,w}$ admit a parallel vanishing region as in the $k=1$ case.
\begin{figure}
\caption{$[S/h_0^k,\Sigma^{-1,1,0}F_0/h_1^\infty]^{\cA(1)\dual}$}
\label{periodmore}
\end{figure}
We have the following lemma for the $f$-intercept:
\begin{lemma}[Corollary of {\cite[Lemma 5.4]{Kra}}]\label{Kra5.4}
Let $M,N\in Stab(\cA\dual)$ with $M$ compact. Let $M_1=M/\theta_1$ be the cofiber of the self-map $\Sigma^{s_1,f_1,w_1}M\xrightarrow{\theta_1}M$, and let $M_2=M/(\theta_1,\theta_2)$ be the cofiber of the self-map $\Sigma^{s_2,f_2,w_2}M/\theta_1\xrightarrow{\theta_2}M/\theta_1$. Define $M_1'$ and $M_2'$ with respect to the self-maps $\Sigma^{s_1',f_1',w_1'}M\xrightarrow{\theta_1'}M$ and $\Sigma^{s_2',f_2',w_2'}M/\theta_1'\xrightarrow{\theta_2'}M/\theta_1'$ in the same way. Suppose $\theta_i$ and $\theta_i'$ are parallel, i.e. $(s_i,f_i,w_i)=\lambda_i(s_i',f_i',w_i')$ where $\lambda_i$ are non-zero real numbers and $i=1,2$.
Further let $a,b\in \R$ and suppose $f_i>as_i+bw_i$ and $f_i'>as_i'+bw_i'$ for $i=1,2$. We make the convention that the $f$-intercept is $\infty$ if there is no such vanishing plane. Then the minimal $f$-intercepts of the vanishing planes parallel to $f=as+bw$ on $[M_2,N]$ and $[M_2',N]$ agree.
\end{lemma}
\begin{pf}[Proof of Lemma \ref{Kra5.4}]
We construct the iterated cofibers $L_1=M/(\theta_1,\theta_1')$ and $L_2=M/(\theta_1,\theta_2,\theta_1',\theta_2')$. Since $f_i>as_i+bw_i$ and $f_i'>as_i'+bw_i'$ for $i=1,2$, the minimal $f$-intercepts for the vanishing planes parallel to $f=as+bw$ agree on $[M_i,N]$, $[M_i',N]$ and $[L_i,N]$ by inductively applying Lemma \ref{lemma 5.2}.
Note that the notation for $L_1$ and $L_2$ is ambiguous. The notation does not indicate that $M/\theta_1$ should admit a $\theta_1'$ self-map or vice versa. Because of the uniqueness of (homological) self-maps that Krause has shown in \cite[Sec. 4]{Kra}, there is a self-map $\theta_1''$ compatible with both $\theta_1$ and $\theta_1'$, which acts on $M$ by a power of $\theta_1$ and also by a power of $\theta_1'$. We will take $L_1$ to be the cofiber of the self-map $\theta_1''$. Similarly, there exists a self-map $\theta_2''$ on $L_1$ that acts on $M_1$ by a power of $\theta_2$, and on $M_1'$ by a power of $\theta_2'$. So we can set $L_2$ as the cofiber of the self-map $\theta_2''$.
\end{pf}
\begin{rmk}
Krause's proof of the uniqueness of self-maps is in the classical setting, yet for the $\C$-motivic case the proof is analogous.
\end{rmk}
\begin{rmk}
The cofiber sequences arising from Verdier's axiom and the $3\times 3$ lemma offer an alternative way to establish the vanishing condition for $[S/(h_0^k,P^{2^{r-2}}),F_{01}]_{s,f,w}$. Let $m,n,l,l'\in \N$ be positive with $m\leq 4l$ and $m+n\leq 4(l+l')$. We have the following cofiber sequences:
\[S/h_0^m\to S/h_0^{m+n}\to S/h_0^n\]
\[S/(h_0^m,P^{l+l'})\to S/(h_0^{m+n},P^{l+l'})\to S/(h_0^n,P^{l+l'})\]
\[S/(h_0^m,P^l)\to S/(h_0^m,P^{l+l'})\to S/(h_0^m,P^{l'}).\]
Passing to the induced long exact sequences in homology, we conclude that for $k\leq 2^r$, the groups $[S/(h_0^k,P^{2^{r-2}}),F_{01}]_{s,f,w}$ admit the same vanishing condition as $[S/(h_0,P),F_{01}]_{s,f,w}$.
\end{rmk}
It follows that for any $k\leq 2^r$ and any self-map $\theta=P^{2^{r-2}}$ of $S/h_0^k$, the corresponding groups $[S/(h_0^k,\theta), F_{01}]$ have a vanishing region of $f>\frac{1}{5}s+\frac{12}{5}$. Combining with Theorem~\ref{selfmap}, we arrive at the motivic version of Theorem~\ref{cpt}:
\begin{thm}[Another way of stating Theorem~\ref{mpt}]
For $r\geq 2$, the Massey product operation $P_r(-):=\langle h_{r+1}, h_0^{2^r},-\rangle$ is uniquely defined on $\Ext^{s,f,w}=H^{s,f,w}(\cA)$ when $s>0$ and $f>\frac{1}{2}s+3-2^r$.
Furthermore, for $f>\frac{1}{5}s+\frac{12}{5}$,
\[P_r\colon [S,F_{01}]_{s,f,w}\longrightarrow[S,F_{01}]_{s+2^{r+1},f+2^r,w+2^r}\]
is an isomorphism between the $h_0$- and $h_1$-torsion groups.
\end{thm}
\end{document} | math |
This caused Prince quite a lot of harm, in this respect as well, but now things have to be kept moving, and the world has to keep moving too. We are making every effort ourselves; we can manage to read and study in comfort, though online it becomes difficult. Still, we are trying our best, and what more can we say. | kashmiri
If you have a pet, then I am pretty sure you have said to yourself, "What are we going to do with Bentley while we are away?" or "I have to work late and I can't leave him in the crate that long." Pet sitters are a great alternative to boarding.
If your answer is yes to any of the following questions then a pet sitter is your answer!
Do you worry if he will be walked or stuck in a cage all day?
Do you worry if he will think you abandoned him?
Will you worry if he will be stressed by being in an unfamiliar environment?
Or worry that he may catch something in a kennel or come home depressed?
Have you found yourself wanting to go away for the long weekend, or suddenly being notified that you have to work late and won't get home for another 6 hours? You would come home to a pooch with his legs and eyes crossed.
There are many reasons you may find yourself needing a canine companion or nanny. So if you work long hours and don't want to keep imposing on family, friends or neighbors, then pet sitters just might benefit you by stopping in for a midday break to let little Bentley out. I'm sure Bentley will appreciate it!
Now how do you choose a pet sitter?
Word of mouth is a great tool in the community. If your neighbor has used someone to watch his or her pets and they did a good job, then of course word spreads. You would not trust your precious pet with just anybody, so do your homework. If you find someone with potential, see if they have any references, such as names and numbers of previous customers of theirs, or perhaps a website with some testimonials; something that builds their credibility and your trust.
If you find a prospect that you are interested in, give them a call and try to arrange a face-to-face meeting, or what I like to call a Meet & Greet. This is a great way to find out if you, little Bentley and everyone else are all in agreement. This is especially important because, even if you are as much of a dog lover as I am, not all people and pooches mesh.
Being a pet sitter myself, I have come across some of the friendliest dogs but also some very territorial dogs. If you as the owner are not comfortable with the way Bentley is acting, do not be afraid to try another reference. They will understand and may be able to guide you in the direction of another reputable company.
There should be questions that you are prepared to ask. I have provided some of the questions that I am frequently asked.
Are you licensed, bonded and insured?
Do you know pet first-aid?
Are you willing to water plants, bring in the mail and garbage cans, and turn the blinds?
Do you charge extra for additional pets?
How many times a day will you visit, and in what approximate time frame?
On the same note, be prepared for questions that you will be asked as well, such as the regular feeding times, the walk schedule and where everything is kept. Also, the sitter will want to know if there is anything other than attending to the pets that you would like done, such as mail, garbage, lights, watering plants or bathing dogs (usually at additional cost).
If you were not referred to a particular person or company and you want to find pet sitters in your area that you can feel confident with and that are trustworthy and reliable, I would suggest a pet sitting agency. An agency checks the credibility of a person or company. They check, or will provide you the tools to check, the background of the pet sitter. They also have the ability to send you not just one or two pet sitters to choose from but a whole list.
A pet sitting agency also provides a lot of other helpful information. They will have lots of tips and advice on pet care for your pooch. They quickly match, screen and select pet sitters for you, and when you find a sitter that you are interested in, you can communicate through their confidential message system. | english
<?php
namespace Season\Form\Hydrator;
use Zend\Stdlib\Hydrator\ClassMethods as Hydrator;
use Zend\Stdlib\Hydrator\HydratorInterface;
use Season\Services\RepositoryService;
/**
* Class ParticipantHydrator
*
* @package Season\Form
*/
class ParticipantHydrator implements HydratorInterface
{
private $repository;
/**
* @param RepositoryService $repositoryService
*/
public function __construct(RepositoryService $repositoryService)
{
$this->repository = $repositoryService;
}
/**
* @param \Season\Entity\Season $season
*
* @return array
*/
public function extract($season)
{
return array(
'number' => $season->getNumber(),
'associationName' => $season->getAssociation()->getName(),
'invitedPlayers' => count($this->getSeasonMapper()->getInvitedUsersBySeason($season->getId())),
);
}
/**
* @param array $data
* @param \Season\Entity\Season $season
*
* @return \Season\Entity\Season
*/
public function hydrate(array $data, $season)
{
$playerList = array();
foreach ($data['addPlayer'] as $userId) {
$playerList[] = $this->getUserById($userId);
}
$season->setAvailablePlayers($playerList);
return $season;
}
/**
* @param int $userId
*
* @return \User\Entity\User
*/
private function getUserById($userId)
{
return $this->getSeasonMapper()->getEntityManager()->getReference('User\Entity\User', $userId);
}
/**
* @return \Season\Services\RepositoryService
*/
public function getRepository()
{
return $this->repository;
}
/**
* @return \Season\Mapper\SeasonMapper
*/
private function getSeasonMapper()
{
return $this->getRepository()->getMapper(RepositoryService::SEASON_MAPPER);
}
}
| code |
On the morning of Sun 27th May the Gibraltar Cycling Association will be holding a cycling event (Time Trial) and requires 24 volunteer Race Marshals to help stage the event. This race will be used as a Test Event for the Gibraltar 2019 NatWest International Island Games.
The 24 Marshals will be positioned at specific key locations and will be given instructions in respect of their duties and responsibilities.
Your help in holding this Test Event is needed. If you are interested in volunteering for the 27th May Test Event please email the Volunteer Manager, Gibraltar 2019 Organising Committee on volunteer@gibraltar2019.com by Thurs 17th May. | english |
Manmohan's visit historic: America - Hindi News, Hindustan
Describing Prime Minister Manmohan Singh's visit as historic, the Obama administration has said that a number of select goals of both countries were achieved through it.
Speaking to reporters, State Department spokesman Ian Kelly said it was truly a historic visit, in which we had some specific goals. Kelly was answering questions related to Singh's tour, during which several memoranda of understanding were signed, although according to experts it lacked any announcement related to the nuclear deal.
Kelly said, I think we reached some important agreements, and we signed several agreements here at the State Department. Experts, however, believe that the visit did not produce big news of the kind that came with the nuclear deal in 2005 during the previous Bush administration.
But Kelly said, I think the main point is that it underlined America's growing relationship with the world's largest democracy, a country whose importance is rising not only at the regional level but globally.
Kelly said that we began cooperation and discussion in new areas, especially in the fields of energy and climate change. So I think it was a historic visit in which we achieved select goals. He added that, in the long-term context, the important thing is that the two sides agreed on the need for a strategic partnership on regional and global issues.
Web title: Manmohan's visit historic: America | hindi
What Modi said in his Independence Day speech | Urjanchal Tiger
New Delhi. The Prime Minister addressed the nation today from the ramparts of the Red Fort on the occasion of the 71st Independence Day. He remembered the great women and men who worked hard for India's independence, and said that the people of India have stood shoulder to shoulder with those affected by natural disasters and with the victims of the tragedy in Gorakhpur.
The Prime Minister noted that this year holds special significance, since we are marking the 75th anniversary of the Quit India Movement, the centenary of the Champaran Satyagraha, and the 125th anniversary of the public Ganesh Utsav started by Bal Gangadhar Tilak.
The Prime Minister said that between 1942 and 1947 the nation demonstrated its collective strength for India's independence, and that we must show the same collective determination and resolve to build a New India by 2022. He stressed that everyone in our country is equal and that we can bring about qualitative change.
The Prime Minister called for an end to the 'chalta hai' (anything-goes) attitude, to be replaced by a 'badal sakta hai' (it can change) outlook for positive change. Shri Narendra Modi said that India's security is our priority and that the surgical strikes have underlined this. He also said that India's standing in the world is touching new heights and that many countries are cooperating with India in fighting the evil of terrorism. On the issue of demonetisation, he said that those who have looted the country and the poor will not be able to sleep in peace, and that today honesty is being celebrated. He affirmed that the fight against black money will continue and that technology will help bring transparency, and he urged people to promote digital transactions. The Prime Minister described the implementation of GST as a key to cooperative federalism, and said that through financial inclusion poor people are joining the mainstream.
He emphasised that good governance is simply speed and simplicity in processes. On the issue of Jammu and Kashmir, the Prime Minister stressed that the problem will be solved neither by abuse nor by bullets, but by embracing the people there ('na gaali se, na goli se, parivartan hoga gale lagaane se': not by abuse, not by bullets; change will come through embrace).
Explaining his vision for a New India, the Prime Minister said that it is not the system that will drive the people but the people who will drive the system; in carrying out this task, the public itself will be the force that keeps it moving. The Prime Minister praised farmers and agricultural scientists for this year's record crop production, and said that compared with last year the government has purchased far more pulses this year, some 16 lakh tonnes.
The Prime Minister said that employment requires a variety of skills and that the nature of technology is changing. He also said that young people are being trained to create jobs, not to ask for them. Referring to women who have suffered as a result of triple talaq, he said that he applauds the women who have shown courage against this malpractice, and that the whole country stands with them in this struggle. The Prime Minister said that India means peace, unity and harmony. He said that casteism and communalism are of no help to us. He condemned the use of violence in the name of faith, saying it will not be accepted in India. He said that the Quit India movement was a campaign of 'Bharat Chhodo' (Quit India), but today the call is 'Bharat Jodo' (Unite India).
The Prime Minister said that adequate attention is being given to the development of eastern and north-eastern India. He said that India has chosen new paths of development without any slackening. Citing a quotation, the Prime Minister said that if we do not take the right steps in the right direction, we will not get the desired results. He said that this is the right time for Team India to take the pledge of a New India.
He called for a New India where the poor will have homes, water and electricity; where farmers will be free from worry and earn double what they earn today; where the young and women will have ample opportunities to realise their dreams; an India free from terrorism, communalism, casteism, corruption and nepotism; and an India that is clean and healthy.
The Prime Minister launched a website in honour of gallantry award winners.
Urjanchal Tiger 19:34 | hindi
\begin{document}
\title{Reconstruction-free quantum sensing of arbitrary waveforms}
\author{J. Zopes and C. L. Degen$^1$}
\affiliation{$^1$Department of Physics, ETH Zurich, Otto Stern Weg 1, 8093 Zurich, Switzerland.}
\email{degenc@ethz.ch}
\begin{abstract}
We present a protocol for directly detecting time-dependent magnetic field waveforms with a quantum two-level system. Our method is based on a differential refocusing of segments of the waveform using spin echoes. The sequence can be repeated to increase the sensitivity to small signals. The frequency bandwidth is intrinsically limited by the duration of the refocusing pulses. We demonstrate detection of arbitrary waveforms with $\sim 20\unit{ns}$ time resolution and $\sim 4\unit{\,\mu{\rm T}/\sqrt{\mr{Hz}}}$ field sensitivity using the electronic spin of a single nitrogen-vacancy center in diamond.
\end{abstract}
\date{\today}
\maketitle
Well-controlled two-level quantum systems with long coherence times have proven useful for precision sensing \cite{budker07,degen17} of various physical quantities including temperature \cite{kucsko13}, pressure \cite{doherty13}, or electric \cite{dolde11} and magnetic fields \cite{loretz13,zopes17}. By devising suitable coherent control sequences, such as dynamical decoupling \cite{delange10}, quantum sensing has been extended to time-varying signals. In particular, coherent control schemes have allowed the recording of frequency spectra \cite{bylander11,schmitt17,boss17} and lock-in measurements of harmonic test signals \cite{kotler11}.
A more general task is the recording of arbitrary waveform signals, in analogy to the oscilloscope in electronic test and measurement. In this case, conventional dynamical decoupling sequences are no longer the method of choice as the sensor output is non-trivially connected to the input waveform signal, requiring alternative sensing approaches.
For slowly varying signals, the transition frequency of the sensor can be tracked in real time \cite{schoenfeld11}, permitting detection of arbitrary waveforms in a single shot. By using a large ensemble of quantum sensors detection bandwidths of up to $\sim 1\unit{MHz}$ have been demonstrated \cite{dezanche08,shin12},
with applications in MRI tomograph stabilization \cite{dezanche08}, neural signaling \cite{jensen16,barry16}, or magnetoencephalography \cite{xia06}.
For rapidly changing signals the waveform can no longer be tracked, and a general waveform cannot be recorded in a single shot. However, if a waveform is repetitive or can be re-triggered, multiple passages of the waveform can be combined to reconstruct the full waveform signal. This method, known as equivalent-time sampling, is routinely implemented in digital oscilloscopes to capture signals at effective sampling rates that are much higher than the rate of analog-to-digital conversion.
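For readers less familiar with equivalent-time sampling, the following Python sketch illustrates the principle in its simplest form: one sample is acquired per waveform passage, at a trigger delay that is stepped between passages. The test signal, step size, and number of passages are illustrative assumptions, not the experimental parameters.
\begin{verbatim}
# Illustrative sketch of equivalent-time sampling (assumed test signal,
# not the experimental waveform): one sample per passage, taken at a
# trigger delay that is stepped by t_s between passages.
import numpy as np

def waveform(t):                      # hypothetical repetitive signal
    return np.sin(2*np.pi*3e6*t) + 0.3*np.sin(2*np.pi*11e6*t)

t_s = 4e-9                            # equivalent-time step (4 ns)
n_passages = 250
delays = np.arange(n_passages) * t_s  # one delay value per trigger
samples = waveform(delays)            # one sample per waveform passage
# (delays, samples) trace the waveform at an effective rate 1/t_s,
# far above the one-sample-per-passage acquisition rate.
\end{verbatim}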
In quantum sensing, one possibility is to record a series of time-resolved spectra that cover the duration of the waveform \cite{zopes18prl}. This method, however, is limited to strong signals because the spectral resolution scales inversely with the time resolution. Other approaches include pulsed Ramsey detection \cite{balasubramanian09}, Walsh dynamical decoupling \cite{magesan13,cooper14}, and Haar wavelet sampling \cite{Xu16}, discussed below. These methods use coherent control of the sensor to achieve competitive sensitivities, but require some form of waveform reconstruction.
In this Letter we experimentally demonstrate a simple quantum sensing sequence for directly recording time-dependent magnetic fields with no need for signal reconstruction. Our method uses a spin echo to differentially detect short segments of the waveform, and achieves simultaneous high magnetic field sensitivity and high time resolution. The only constraints are that the waveform can be triggered twice within the coherence time of the sensor, and that the signal amplitude remains within the excitation bandwidth of qubit control pulses.
Possible applications include the \textit{in situ} calibration of miniature radio-frequency transmitters \cite{sasaki18,zopes18prl}, activity mapping in integrated circuits \cite{nowodzinski15}, detection of pulsed photocurrents \cite{zhou19}, and magnetic switching in thin films \cite{baumgartner17}.
\begin{figure}
\caption{\normalfont
\textbf{Schemes for equivalent-time waveform sampling by a quantum sensor.}}
\label{fig:fig1}
\end{figure}
To motivate our measurement protocol we first inspect the interferometric Ramsey method, which was a standard method in early quantum sensing of waveforms \cite{balasubramanian09}. In a Ramsey experiment a superposition state, prepared by a first $\pi/2$ pulse, evolves during a sensing time $t$ and acquires a phase $\phi(t)$ that is proportional to the transition frequency $\omega_0$ between ground and excited states (see Fig. \ref{fig:fig1}(b)). For a spin sensor, where $\omega_0$ is proportional to the component of the magnetic field along the spin's quantization axis, the acquired phase is
\begin{align}
\phi(t) = \int_{0}^{t} \gamma_\mr{e} B(t') dt'.
\label{eq:ramsey}
\end{align}
Here, $B(t)$ is the time-dependent magnetic field that we aim to measure and $\gamma_\mr{e}$ is the gyromagnetic ratio of the spin. To extract the phase, $\phi(t)$ is typically converted into a population difference $p(t)$ by a second $\pi/2$ pulse,
\begin{align}
p(t) = \frac{1}{2} (1+\sin(\phi(t))) \overset{\phi \ll 1}{\approx} \frac{1}{2} (1+\phi(t)),
\label{eq:linsensor}
\end{align}
followed by a projective readout of the sensor and signal averaging \cite{degen17}. By measuring $p(t)$ as a function of $t$, one thus effectively measures the integral of the magnetic field in the interval $[0,t]$. Using a numerical derivative the magnetic field can subsequently be reconstructed \cite{balasubramanian09}. However, this reconstruction greatly increases noise due to the derivative \cite{knowles14} and often requires phase unwrapping.
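To make the noise-amplification argument concrete, the following Python sketch integrates Eq. (\ref{eq:ramsey}) for an assumed test field and then recovers $B(t)$ by the numerical derivative described above; the test field and the phase-noise level are assumptions chosen only for illustration.
\begin{verbatim}
# Sketch of Ramsey-based reconstruction: the sensor records the integral
# of B(t) (Eq. 1), so B(t) must be recovered by a numerical derivative,
# which amplifies the phase (readout) noise.  Test field and noise level
# are assumptions for illustration only.
import numpy as np

gamma_e = 2*np.pi*28.0e9                    # rad s^-1 T^-1
dt = 4e-9                                   # step of the Ramsey time t
t = np.arange(0, 1e-6, dt)
B = 2e-6*np.sin(2*np.pi*5e6*t)              # hypothetical 2 uT field

phi = np.cumsum(gamma_e*B)*dt               # accumulated phase, Eq. (1)
phi += np.random.normal(0, 0.02, t.size)    # phase readout noise (assumed)

B_rec = np.gradient(phi, dt)/gamma_e        # numerical derivative
# np.std(B_rec - B) is tens of uT here, i.e. the derivative has blown
# the 0.02 rad phase noise up far beyond the 2 uT signal amplitude.
\end{verbatim}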
A more direct method that avoids numerical processing is the sampling of the waveform in small intervals $t_\mr{int}$ and to build up the waveform by stepping $t$. The simplest approach is to use a Ramsey sequence with a very short integration time $t_\mr{int}$ (Fig. \ref{fig:fig1}(c)). In this case the sensor phase $\phi(t)$ encodes the field in the time interval $[t,t+t_\mr{int}]$,
\begin{align}
\phi(t) = \int_{t}^{t+t_\mr{int}} \gamma_\mr{e} B(t') dt' \approx \gamma_\mr{e} B(t) t_\mr{int} \ ,
\end{align}
without the need for numerical post-processing. Thanks to the short $t_\mr{int}$ one can often take advantage of the linear approximation ($\sin\phi\approx \phi$) in Eq. (\ref{eq:linsensor}). The short $t_\mr{int}$, however, impairs sensitivity because $\phi \propto t_\mr{int}$.
To maintain adequate sensitivity even for short $t_\mr{int}$ we introduce a detection protocol that accumulates phase from several consecutive waveform passages. Our scheme requires that the repetition time is short, $t_\mr{rep}\ll T_2$, where $T_2$ is the sensor's coherence time, which is often the case for fast waveform signals. Our protocol is shown in Fig. \ref{fig:fig1}(d): By inserting two $\pi$ pulses at times $t$ and $t+t_\mr{int}$ relative to two consecutive waveform triggers, we selectively acquire phase from the time interval $[t,t+t_\mr{int}]$ while canceling all other phase accumulation.
A similar scheme of partial phase cancellation has been implemented with digital Walsh filters \cite{cooper14} and Haar functions \cite{Xu16} via a sequence of $\pi$ rotations.
The linear recombination of sensor outputs in such waveform sampling, however, is prone to introducing errors, especially for rapidly varying signals whose detection requires many $\pi$ pulses \cite{magesan13}.
In our scheme, the $\pi$ rotations effectively act as an \textit{in situ} derivative of the phase integral (Eq. \ref{eq:ramsey}), bypassing the need for a later numerical differentiation or reconstruction. To further amplify the signal, the basic two-$\pi$-pulse block can be repeated $k$ times to accumulate phase from $2k$ waveform passages, up to a limit set by $2kt_\mr{rep} \leq T_2$. The amplified signal is (in the linear approximation)
\begin{align}
p(t) \approx 0.5 + 2 k\gamma_\mr{e} B(t) t_\mr{int} \ ,
\label{eq:phik}
\end{align}
and when converted to units of magnetic field,
\begin{align}
B(t) \approx \frac{p(t)-0.5}{ 2 k \gamma_\mr{e} t_\mr{int}} \ .
\end{align}
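The sign-toggling action of the two $\pi$ pulses, and the linear build-up of the segment phase over $2k$ passages, can be checked with the short numerical sketch below, written against Eq. (\ref{eq:phik}); the test field, repetition time and timing values are assumptions and carry no experimental meaning.
\begin{verbatim}
# Numerical check of the differential block of Fig. 1(d): the pi pulses
# at t and t+t_int toggle the sign of the phase accumulation, so only
# the segment [t, t+t_int] survives and adds up over 2k passages.
# Field, t_rep and timing below are assumptions for illustration.
import numpy as np

gamma_e = 2*np.pi*28.0e9                      # rad s^-1 T^-1

def accumulated_phase(B, t, t_int, t_rep, k, dt=1e-10):
    phi, sign = 0.0, +1.0
    for passage in range(2*k):
        t_pi = t if passage % 2 == 0 else t + t_int   # pulse position
        tt = np.arange(0.0, t_rep, dt)
        s = np.where(tt < t_pi, sign, -sign)          # flip at the pulse
        phi += gamma_e*np.trapz(s*B(tt), dx=dt)
        sign = -sign                                  # carried over
    return phi

B_test = lambda tt: 1e-6*((tt >= 100e-9) & (tt < 120e-9))   # 1 uT segment
phi = accumulated_phase(B_test, t=100e-9, t_int=20e-9, t_rep=1e-6, k=4)
# |phi| matches 2*k*gamma_e*(1e-6)*(20e-9) up to discretization error,
# i.e. the amplified-signal expression above, while field outside
# [t, t+t_int] cancels between the two passages of each block.
\end{verbatim}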
\begin{figure}
\caption{\normalfont
\textbf{Pulse detection and time resolution.}}
\label{fig:fig2}
\end{figure}
\begin{figure*}
\caption{\normalfont
\textbf{Increased sensitivity by integrating $2k$ waveform passages.}}
\label{fig:fig3}
\end{figure*}
We experimentally demonstrate arbitrary waveform sampling using the electronic spin of a single nitrogen-vacancy (NV) center in a diamond single crystal. The NV spin is initialized and read out using $\sim 2\unit{\,\mu{\rm s}}$ green laser pulses and a single-photon-counting module \cite{loretz13}. Microwave control pulses are generated by an arbitrary waveform generator (AWG), amplified to reach Rabi frequencies of $\sim 25\unit{MHz}$, and applied to the NV center via a coplanar waveguide (CPW) structure \cite{zopes17}.
Test magnetic waveforms are generated by a second function generator operated in burst mode and triggered by the AWG. The test signals are delivered to the NV center either by injecting them into the common CPW using a bias-T \cite{rosskopf17} or by an auxiliary nearby microcoil \cite{zopes18prl,zopes18ncomms}.
The setup is operated in a magnetic bias field of $43\unit{mT}$ (aligned with the N-V crystal direction) to isolate the $\{m_s=0,m_s=-1\}$ manifold of the $S=1$ NV spin, and to achieve preferential alignment of the intrinsic nitrogen nuclear spin (here the spin 1/2 of the \NN isotope) \cite{jacques09}. The latter is not required for our scheme, but helps reduce microwave pulse errors.
We begin our study by recording a simple, 270-ns-long square waveform (Fig. \ref{fig:fig2}). We record the waveform both using the standard integrative Ramsey scheme [Fig. \ref{fig:fig1}(b)] and our differential sampling technique [Fig. \ref{fig:fig1}(d)].
For the Ramsey scheme, we reconstruct the magnetic waveform by a numerical differentiation of the raw signal (black data in Fig. \ref{fig:fig2}(a)) via the central difference quotient of the smoothed signal \cite{jordan17}. The reconstructed waveform is shown in blue.
For our differential detection scheme, we directly plot the signal output without any further data processing (Fig. \ref{fig:fig2}(b)). Clearly, the differential sampling method is able to faithfully reproduce the square pulse and is not affected by the noise amplification of the Ramsey scheme.
To characterize the time resolution of the method, we record the rising edge of the pulse with fine sampling $t_s = 4\unit{ns}$ (Fig. \ref{fig:fig2}(c)). We find a 10-90\% step response time of $\tau \sim 20\unit{ns}$. The response time is approximately given by $\tau \approx \max(t_\pi,t_\mr{int})$, since the finite pulse duration and the integration time both act as moving average filters. While $t_\mr{int}$ can be deliberately adjusted, $t_\pi$ is determined by the Rabi frequency of the system and sets a hard limit to the response time.
In Fig. \ref{fig:fig2}(d) we show the corresponding frequency transfer function $G(\omega)$ of the sensor, \ie, the Fourier transform of the unit impulse response obtained from the step response. In our experiments, where $t_\mr{int} = t_\pi$, the unit impulse response of the sensor is approximately given by a Hann function with characteristic length $2t_\pi$ \cite{supplementary}. The Bode plot indicates a -3dB sensor bandwidth $f_{-3\mathrm{dB}} \approx 25\unit{MHz}$, with good agreement between theory and experiments.
This bandwidth could be slightly increased, up to $\sim 40\unit{MHz}$ \cite{supplementary}, by choosing shorter integration times $t_\mr{int}\ll t_\pi$; however, the short integration time comes with the penalty of vanishing sensitivity.
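As a cross-check of the quoted bandwidth, the following sketch evaluates the $-3\,$dB point of a unit-area Hann impulse response of support $2t_\pi$; this reading of the "characteristic length", and the normalization, are assumptions standing in for the exact definition given in the supplementary material.
\begin{verbatim}
# Transfer function of an assumed Hann-shaped unit impulse response of
# support 2*t_pi; the exact normalization/definition follows the
# supplementary material and is an assumption here.
import numpy as np

t_pi = 20e-9
T = 2*t_pi                                   # impulse-response support
dt = 1e-11
tt = np.arange(-T/2, T/2, dt)
h = np.cos(np.pi*tt/T)**2                    # Hann impulse response
h /= h.sum()*dt                              # unit area -> G(0) = 1

G = np.abs(np.fft.rfft(h, 2**18))*dt         # |G(f)| on a padded grid
f = np.fft.rfftfreq(2**18, dt)
f_3dB = f[np.argmax(G < 1/np.sqrt(2))]       # first -3 dB crossing
# For these assumptions f_3dB comes out near 18 MHz, i.e. the same
# few-tens-of-MHz scale as the ~25 MHz quoted from the measured data.
\end{verbatim}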
In a next step, we investigate the signal gain possible by accumulating phase from $2k$ consecutive waveform passages. Fig. \ref{fig:fig3}(a) plots the sensor response from a weak sinusoidal test signal recorded with $k=1,2,4$ and $8$. Clearly, a much larger oscilloscope response results for higher $k$ values.
To estimate the signal gain, we plot the peak sensor signal $\Delta p_\mr{max}$ (indicated in (a)) as a function of $k$, see Fig. \ref{fig:fig3}(b). At small $k$ values the increase of $\Delta p_\mr{max}$ is proportional to $k$, as expected, while at larger $k$ decoherence of the sensor attenuates the signal. By correcting for sensor decoherence, we can recover the almost exact linear scaling of the signal phase $\Delta\phi_\mr{max}$ with $k$ (dashed line in (b)).
To quantify the overall sensitivity in the presence of decoherence and sensor readout overhead, we calculate a minimum detectable field $B_\mr{min}$, defined as the input field that gives unity signal-to-noise ratio for a one-second integration time. $B_\mr{min}$ is given by \cite{degen17},
\begin{align}
B_\mr{min} = \frac{\sqrt{t_\mr{m}+2 k t_\mr{rep}} e^{\frac{2 k t_\mr{rep}}{T_2}}}{2 \gamma_\mr{e} k C t_\mr{int}} \ ,
\label{eq:Bmin}
\end{align}
where $t_\mr{m} = 3\unit{\,\mu{\rm s}}$ is the arm/readout duration (see Fig. \ref{fig:fig1}(c)), $T_2\sim 14\unit{\,\mu{\rm s}}$ is the coherence time, and $C\sim 0.02$ is a dimensionless number that quantifies the quantum readout efficiency \cite{degen17}.
In Fig. \ref{fig:fig3}(c) we plot $B_\mr{min}$ as a function of $k$. We find that $B_\mr{min} \propto k^{-1}$ for short durations $kt_\mr{rep} < t_\mr{m}$, that is, the benefit of repeating the sequence is largest for small $k$ and high repetition rates (dotted curve). Once $kt_\mr{rep} > t_\mr{m}$ the scaling reduces to $B_\mr{min} \propto k^{-0.5}$ because the linear phase accumulation now competes with standard signal averaging (dashed curve). For large $kt_\mr{rep}$ that exceed the sensor coherence time $T_2$ the efficiency of the method rapidly deteriorates (dash-dotted curve).
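A direct numerical evaluation of Eq. (\ref{eq:Bmin}) with the parameters quoted above reproduces these regimes; in the sketch below the repetition time $t_\mr{rep}$ is set to $1\unit{\,\mu{\rm s}}$ purely for illustration, since its actual value depends on the waveform under study.
\begin{verbatim}
# Evaluation of Eq. (6) with the quoted parameters (t_m = 3 us,
# T2 = 14 us, C = 0.02, t_int = 20 ns); t_rep = 1 us is an assumption.
import numpy as np

gamma_e = 2*np.pi*28.0e9                     # rad s^-1 T^-1
t_m, T2, C = 3e-6, 14e-6, 0.02
t_int, t_rep = 20e-9, 1e-6

def B_min(k):
    return (np.sqrt(t_m + 2*k*t_rep)*np.exp(2*k*t_rep/T2)
            / (2*gamma_e*k*C*t_int))

k = np.arange(1, 41)
b = B_min(k)
k_opt = k[np.argmin(b)]                      # ~4 for these assumptions
# B_min(k_opt) is ~10 uT/sqrt(Hz) here; shorter repetition times move
# the optimum to larger k and push B_min toward the ~4 uT/sqrt(Hz)
# quoted in the abstract.
\end{verbatim}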
\begin{figure}
\caption{\normalfont \textbf{Example of arbitrary waveform detection.}}
\label{fig:fig4}
\end{figure}
We complete our study by demonstrating detection of a complex test waveform (Fig. \ref{fig:fig4}). The waveform contains the sum of several Fourier components with the analytical expression for $B(t)$ given in the figure caption. In Fig. \ref{fig:fig4}(a) we show the experimentally measured waveform (light blue points) together with the input waveform (dashed black line) in the same plot. The experimental waveform consists of $N=280$ data points sampled at $t_s = 4\unit{ns}$ horizontal resolution. Clearly, the experimental waveform agrees very well with the applied input. The experimental data are plotted without any data processing, demonstrating that our differential sampling method directly reproduces the waveform signal. Fig. \ref{fig:fig4} (b) further presents the corresponding power spectra of the input waveform (black dashed line) and the recorded sensor output (light blue points). Although the signal lies within the analog bandwidth of the sensor ($\sim 25\unit{MHz}$), some attenuation is observed at higher frequencies. If desired, inverse filtering techniques could be applied to compensate the high-frequency roll-off of the sensor.
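The inverse-filtering step mentioned above could be realized, for instance, with a regularized (Wiener-type) deconvolution against the modeled transfer function; the sketch below uses the Hann response model and a synthetic two-tone waveform, both of which are assumptions chosen only to illustrate the idea.
\begin{verbatim}
# Regularized inverse filtering against the modeled sensor response.
# The two-tone "measured" trace, the Hann response model and the
# regularization constant are illustrative assumptions.
import numpy as np

t_s, t_pi, N = 4e-9, 20e-9, 280
tt = np.arange(N)*t_s
b_true = 1e-6*(np.sin(2*np.pi*4e6*tt) + 0.5*np.sin(2*np.pi*18e6*tt))

T = 2*t_pi                                    # modeled Hann response
th = np.arange(-T/2, T/2 + t_s/2, t_s)
h = np.cos(np.pi*th/T)**2
h /= h.sum()                                  # unit DC gain
b_meas = np.convolve(b_true, h, mode='same')  # high tone attenuated

f = np.fft.rfftfreq(N, t_s)
G = np.abs(np.array([np.sum(h*np.exp(-2j*np.pi*fi*th)) for fi in f]))
eps = 0.05                                    # assumed regularization
b_corr = np.fft.irfft(np.fft.rfft(b_meas)*G/(G**2 + eps), n=N)
# b_corr restores most of the 18 MHz component that the modeled response
# attenuated in b_meas; with real data, eps limits noise amplification.
\end{verbatim}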
Before concluding, we point out a few limitations and possible remedies of the differential waveform sampling technique.
First, our scheme is only applicable to waveforms that can be triggered twice within the sensor's $T_2$ time. While $T_2$ could be extended to some extent by adding dynamical decoupling $\pi$ pulses to our protocol, very long repetition times cannot be covered, and will require resorting to, \eg, the inefficient small-interval Ramsey technique (Fig. \ref{fig:fig1}(c)).
Second, the maximum peak-to-peak signal amplitude is limited by the excitation bandwidth of $\pi$ pulses to $(\gamma_\mr{e}t_\pi)^{-1}$, here $\sim 2\unit{mT}$. Only relatively weak fields can therefore be detected with our method. To cover strong signals, time-resolved spectroscopy techniques are available \cite{zopes18prl}.
Third, when accumulating signal over many passages $k$, the phase may exceed the sensor's linear range (see Eq. \ref{eq:ramsey}). In this situation, the relative phase of the second $\pi/2$ pulse could be cycled \cite{knowles16} to recover a linear response.
In summary, we have presented a quantum sensing method for direct detection of arbitrary waveforms in the time domain using equivalent time sampling. Our method does not require any waveform reconstruction, allowing, for example, to sample arbitrary segments from a longer waveform. In addition, our protocol can be repeated to coherently accumulate phase from many waveform cycles to improve sensitivity. The analog bandwidth of our scheme is fundamentally limited by the Rabi frequency of the sensor, which sets the minimum $\pi$ pulse duration $t_\pi$. In the present work, we demonstrate a time resolution of $t_\pi \sim 20\unit{ns}$ using a Rabi frequency of $\sim 25\unit{MHz}$. To achieve better time resolution, the Rabi frequency could be increased by more than one order of magnitude by miniaturizing the coplanar waveguide \cite{fuchs09,kong18}. The highest demonstrated Rabi frequencies are $200-500\unit{MHz}$ for NV centers, corresponding to $t_\pi = 1-2.5\unit{ns}$ \cite{fuchs09,kong18}. At this time resolution it may become feasible to study the photoresponse in materials \cite{zhou19} or the switching in thin film magnetic memory devices \cite{baumgartner17}.
We thank Pol Welter, Martin W\"ornle and Konstantin Herb for helpful discussions.
This work has been supported by Swiss National Science Foundation (SNSF) Project Grant No. 200020\_175600, the National Center of Competence in Research in Quantum Science and Technology (NCCR QSIT), and the Advancing Science and TEchnology thRough dIamond Quantum Sensing (ASTERQIS) program, Grant No. 820394, of the European Commission.
\input{"references.bbl"}
\end{document} | math |
These trees standing in the orchard also began to fly through the air toward the mansion of the prince's father.
We’re pleased to introduce District 4 candidate Libby Schaaf. She provided responses to questions Today in Montclair posed in April 2010, below.
Born and raised in District 4, I’ve centered my life and career on building community, solving problems, and leading change in my hometown. I’m running for City Council because I love this community. I want to use my experience, creativity and persistence to help us fulfill our amazing potential – to make change you can see and feel.
I’ll work hard every day to reduce crime, build a thriving local economy, support our schools, and make our government more responsive and responsible. As a life-long Oakland resident and mother of young children, my decisions will be driven by this community’s long-term interests.
Oakland is suffering from the most severe economic crisis of our time. I believe my extensive knowledge of government and effectiveness as a community-organizer is needed now more than ever.
Oakland faces painful decisions. I am the only candidate with a deep understanding of the City’s complex services and budget, along with a network of community resources. I won’t waste time getting up to speed or learning the ropes — I know what’s broken and have concrete plans on how to fix it. From day one, I’ll focus my time and energy on community concerns in the District.
Public service is in my DNA. I spent my youth earning Girl Scout service badges, interning at the Oakland Zoo, playing characters at Children’s Fairyland and volunteering as a Ranger’s Aide at Joaquin Miller Park. I graduated from Skyline High, earned a political science degree from Rollins College and law degree from Loyola.
As an attorney in my late 20s, my mom and I founded the non-profit Oakland Cares, which organized and implemented hundreds of volunteer community improvement projects across Oakland. I found my calling was in public service, so left a lucrative career at Oakland’s largest law firm to go create the first centralized volunteer program for Oakland public schools at the Marcus Foster Educational Institute. There, I placed more than 4,000 volunteers into Oakland classrooms and led the most successful NetDay Technology Volunteer effort in the country.
I was later recruited by Council President Ignacio De La Fuente to serve as his Chief of Staff and then by Mayor Jerry Brown. As Brown’s point-person on Violence Prevention, I successfully led the community input process on the Measure Y Violence Prevention Plan and served on the Project Choice Re-entry Steering Committee, whose juvenile parolees had an 83% lower recidivism rate.
Later as Public Affairs Director for the Port of Oakland, I helped secure millions in state and federal funds for community-driven pollution reduction programs. I most recently served as Senior Policy Advisor for Economic Development to the Oakland City Council.
Over my 14 years in local government, some my favorite accomplishments include: reclaiming a waterfront brown-field to build Union Point Park; championing Oakland’s first transit-oriented development at Fruitvale Village; building and supporting some of the highest-performing new public schools in Oakland, revitalizing the Park Blvd. median strip; authoring legislation to protect neighborhood commercial areas from big box superstores; streamlining bureaucracy; and promoting Oakland as the greatest place to live, work, play and do business!
Even during this challenging career, I’ve managed to serve as a Board Director or Advisor to twelve Oakland non-profits and have been appointed to three City of Oakland Commissions — all fueled by my passion for the arts, education, civic engagement and social justice.
Although I’m now enjoying the added responsibility of raising two small children, I recently revived the Bridgeview Neighborhood Watch in Oakmore and continue to volunteer for Make Oakland Better Now!, the Oakland Schools Foundation, the Museum of Children’s Art (MOCHA) and the League of Women Voters of Oakland.
People should vote for me because I bring the most extensive understanding of local government, the best track-record of implementing community-driven projects and legislation, and – most importantly — a life-time of service and passion for this community.
My top priorities are to reduce crime, build a thriving local economy, and support our public schools. These issues are inextricably connected.
In my first months, I would introduce legislation providing Chief Batts with the managerial flexibility he needs to expand community policing and support other progressive initiatives. I would champion better integration of prevention and enforcement efforts and use of civilians in the police department. I will continue supporting community organizing by creating new Neighborhood Watch groups and helping activate existing ones.
I would forge a partnership with District 4’s school board member, explore cost-saving operational collaborations and lend resources to truancy prevention efforts.
I would create a campaign to attract new tenants to vacant storefronts based on community preference. I would author an initiative requiring more city processes to be available on-line, as well as an Express Building Permit for residents making minor home improvements.
I’d work my network of private and public funding sources to bring new resources for implementing the many community-driven plans that District 4 residents have developed over recent years – including the school-to-village path, Fruitvale Alive, Montclair Rail Road Trail, Shepard Canyon and Joaquin Miller Park Master Plans, Dimond Tot Lot, Laurel Access to Mills, Maxwell Park & Seminary (LAMMPS) project and more!
First/next phase implementation of neighborhood plans in at least 5 different neighborhoods (including the new Tot Lot in Dimond Park!), including at least one new source of outside funding.
I would focus on the top priorities for each neighborhood and monitor discretionary spending to ensure even distribution of resources. I would keep an eye on my staff’s time and effort and make sure I have strong community liaisons in each neighborhood. I would work to nurture and develop community leadership to ensure every neighborhood has strong, active voices.
District 4 is connected to other districts in every way. Most District 4 residents work, shop and play throughout Oakland and the region. We hold so many of this city’s greatest assets – our high-performing schools, municipal and East Bay Regional Parks, Woodminster Amphitheater, Chabot Space and Science Center and so much more. We also are interconnected in our challenges. District 4 residents I’ve talked with recognize that violent crime in other parts of the City hurts this community as well. Our neighborhoods, our city, our region, our state, our nation, and our planet – we are all interconnected.
I would support transparent funding mechanisms that ensure a fair-share distribution of resources according to objective criteria. Rather than fighting “others” for limited resources, I would use my knowledge, relationships, and experience to build cooperative relationships that leverage and stretch resources to their fullest potential.
I have a track record of bringing new money to Oakland, and I would do so to improve the District and the City as a whole. I’ve successfully attracted hundreds of millions of private, state and federal funds for projects like Port of Oakland pollution reduction; I-880 operational improvements (including new ped/bike-friendly overpasses at 23rd and 29th Avenues) and Union Point Park. While I’m a tireless advocate known for my persistence, my first focus will always be on collaboration and creativity.
Actually, I plan to continue much of Jean Quan’s work. Like Jean, I have lots of energy and am a hard worker. I believe in spending as much time as possible out of City Hall and in the District.
I share Jean’s commitment to social justice. I’ll continue Jean’s leadership in supporting youth and schools – including combating the sexual exploitation of minors (an issue Jean acknowledges I helped get her involved with). I’ll continue her tradition of organizing neighborhoods through CORE, Neighborhood Watch, NCPCs and community events.
I’ll continue her comprehensive electronic newsletter and build on it with more opportunities for interactive, two-way communications. I’ll continue and expand her Local Heroes recognition program and other means of nurturing and building community leadership and volunteerism.
Like Jean, I’ll take on challenges and deliver changes you can see and feel – like demolishing that seedy motel at Lincoln and MacArthur and replacing it with the new Lincoln Courts senior housing. My years of experience and knowledge of the City will allow me to provide the responsiveness and effectiveness that District 4 residents are accustomed to from my first day on the job!
As a new Councilmember, I will bring more experience with economic development, including specific knowledge of Port-related industries and Oakland’s arts sector. I will bring a track record of converting community plans into completed projects. I will be less likely to support new taxes or set-asides.
I will bring the Council a new generational perspective. If elected, I’d be the Council’s only Oakland native and only parent of young children. I hope my infectious optimism for Oakland and positive spirit will be a welcome addition to the Council.
Kevin Alston, Ken Betts, Claudia Jimenez Burgos, Kevin Cardenas, Vanessa Coleman, Jose Corona, Tom & Sue Davies, Joe DeCredico, Dennis Donnegan, Faith Du Bois, Margo Dunlap, Mike Ferro, Ed Gerber, Corrine Jan, Conway & Leslie Jones, Robert Kidd, Jonathan Klein, Richard & Alice Kulka, Terry Kulka, Glen & Jean Lambertson, Lynette Lee, Lindy Lowe, Daphne Markham, Bernard & Anne Metais, Joyce Meyers, Jim Mittelberger, Annie Mudge, Helen Nicholas, Cameron Polmanteer, Gary & Kathleen Rogers, Lisa Ruhland, Joan Story, Alva Svoboda, Rebecca Lasky Thomas, Anne Campbell Washington, and Gene J. Zahas.
To see my complete list of endorsers and learn more about me, please visit www.libbyforoakland.com or become a fan of “Libby Schaaf for Oakland City Council” on Facebook. I can be reached at (510) 479-7196 or libbyforoakland@gmail.com – I look forward to meeting you soon and hope to have your support! | english |
Biokinesis and attraction 1/6 - your love becomes my pain
How to become more attractive with the help of biokinesis:
If you already know that you are attractive, if you know that your breasts, your washboard abs, your butt, your face and your character are attractive, then you are, first of all, in the wrong place here.
Second: do you already put energy into these things?
That means you know your chest is beautiful and firm. Yours is nicer than other people's.
You put energy into it. The bosom, you, it radiates, and everyone finds you and your bosom attractive.
But what if you have something you don't like so much?
How can you be attractive with glasses? With a bald head? With short hair?
How can you be attractive with a belly? With cellulite? With a bad car? With a shabby apartment? With a bad job?
What can you do about that?
You will now find out here in short bullet points. If you understand the basic principle, you can make anything attractive.
What makes men attractive?
You could write a lot about this. You know, there are men who beat their wives, and yet there are women who find these men attractive.
There are men who look dead ugly and still have women at their side. Someone finds them attractive.
So he has something that is attractive to this woman.
Maybe she does not care about good looks at all. But he is such a loving person, with such a caring character, that she likes him. She finds him attractive.
What makes women attractive?
You know women with fat hanging out everywhere. Nevertheless, they have men who find them attractive, because they have something special.
I do not want to discriminate against anyone here. I do not want to put anyone down.
Nor is this about my own particular preferences, what I like and what I would rather not express; it is simply to show you, with a few examples, the basic principle of how to make something attractive:
Girls find boys who are shy interesting.
Because these boys send energy outward. They do not want anything to do with the girl.
They do not dare. They do not know how to behave.
Because of these thoughts, energy flows outward. They actually push the girl or girls away. The girls receive these energies and then find the boy attractive.
Once again, it is all about energy.
The attractive man has the qualities the woman likes most.
If a woman is into washboard abs and washboard abs matter to her more than anything else, then the man with washboard abs is the best.
As a man, you can also reject all women. You do not want any woman at your side.
You want nothing to do with women. You make yourself look good. You take care of yourself. You have a good job.
But you want nothing to do with women. Then you become attractive to women. They want to win you over. They want to find out who you are, what you have, what you do.
You simply feel comfortable with yourself. You do not want her. You feel at ease in your own space.
The energy that comes from you, in this case rejection, makes the women around you feel good, look at you, admire you and want to do something with you.
We all know men like that.
It is exactly the same with women.
There are women who want nothing to do with men, because men are all bad or wicked, or for some other reason, simply because men are not attractive to them.
These women radiate energy in all directions, and no matter in which direction such a woman looks at a man, he finds her attractive.
The first one just wants to sleep with her. The second wants to be with her. The third wants to convert her.
Whatever the case, she is attractive because she rejects every man.
Hairstyles that radiate energy are attractive.
Have you got a new hairstyle and you want everyone to like it, regardless of whether you like it yourself or not?
Then nobody will notice your hairstyle. Nobody will like it.
If your wish is, "I want all my hairstyles to look great," you immediately pull energy from people's minds, and nobody notices that you have a new hairstyle, and certainly nobody ever notices that it is great.
Compare that with the thought: "I have a great hairstyle, everyone notices it."
As a result, everyone does notice it. But now be careful!
The moment you get a botched job, for example hair cut far too short, badly cut, done with curlers, in the wrong colour, and you think, "Oh God, I hope nobody notices this hair job,"
that is the moment everyone looks at it first.
The trailer of R. Balki's film 'Ki & Ka' has been released, and we know that after seeing this poster you will say the same thing: this Kapoor + Kapoor pairing is quite different. 'Ki & Ka' releases on 1 April. After a long time Kareena Kapoor will be seen in a strong role, and Arjun Kapoor too appears in a very different avatar. [Another side of Ajay Devgn: when he pulled Kajol's ear!] The film is built on a woman-versus-man theme: Arjun plays a house husband, while Kareena is a working woman who carries the household's entire financial responsibility. The chemistry between Kareena Kapoor and Arjun Kapoor is really worth watching, and it is already being appreciated in the trailer. Filmfare chose Arjun Kapoor and Kareena Kapoor for the cover photo of its April issue, where their chemistry shines as well. Take a look at the photoshoot pictures and learn about the film. Ki & Ka: the introduction of both characters is terrific. Ki, the girl Kia, works in a corporate job and wants to become a CEO within a few years, while Kabir wants to be like his mother, that is, a housewife. Cover photo: Kareena and Arjun shot this for the April issue; it is the magazine's cover page. Great chemistry: their chemistry has become a talking point, and the film even has a kissing scene. Arjun as house husband: in the film Arjun Kapoor plays a house husband who asks his wife Kareena for everything from household expenses to the restaurant bill. Arjun with a mangalsutra: during the wedding scene Kareena actually puts a mangalsutra on Arjun. Beautiful Kareena: Kareena looks gorgeous throughout the photoshoot, and her look in the film is drawing praise. Perfect Kareena: after a long time Kareena has a film full of comedy and emotion; since it is an R. Balki film, he will naturally try to make his point in a light-hearted way. Saif got angry: there is also talk that Saif does not much like the Arjun-Kareena chemistry; understandably, the two look so good together that no husband would enjoy it. In the photoshoot and the trailer their chemistry looks great; the rest will be known once the film releases. English summary: R. Balki has given Bollywood yet another hot on-screen jodi, Kareena Kapoor Khan and Arjun Kapoor. The duo, who have teamed up for the upcoming film Ki & Ka, have now graced the cover of Filmfare magazine for the April issue.
Bhupesh says journalists need accreditation in larger numbers | CG News | Chhattisgarh News
Bilaspur, 17 June. Chhattisgarh Chief Minister Bhupesh Baghel, while assuring that housing would be provided at concessional rates, said that journalists in the state should receive accreditation in the largest possible numbers.
Addressing the oath-taking ceremony of the new executive of the Bilaspur Press Club here today, Mr Baghel said that many parts of the state are Naxal-affected and that many journalists are working faithfully in rural areas as well. Whenever disease spreads anywhere in rural areas or a fire breaks out in the forest, we get the information first from the media. Yet the journalists who bring this information forward do not have accreditation, and when they face trouble they have no credentials to protect themselves. We believe that more journalists should receive accreditation.
On the demands raised by the Press Club, Mr Baghel said that the rules for concessional housing would be examined. The matter would be sent to the finance department for consideration, cabinet approval would be obtained if necessary, and journalists would certainly be given the benefit.
Outlining the government's decisions, Mr Baghel said that under the journalist honour scheme, retired journalists would now receive Rs 10,000 per month as a pension-like honorarium instead of Rs 5,000. Earlier this provision lasted five years; it will now be provided for life. In addition, under the journalists' welfare fund, the maximum sanction for medical assistance, previously Rs 50,000, has been raised to Rs 2 lakh.
Presiding over the programme, Leader of the Opposition in the Assembly Dharam Lal Kaushik said that besides the three pillars of the Constitution, the judiciary, the executive and the legislature, the media is the fourth pillar. If we all work together, we will take Chhattisgarh to new heights. Lormi MLA Dharamjeet Singh Thakur also addressed the programme.
IRCTC shares listed at Rs 644 on the BSE and Rs 626 on the NSE.
There has been another spectacular listing today. IRCTC shares listed at Rs 644 on the BSE and Rs 626 on the NSE, a premium of roughly 96 per cent.
IRCTC has made a blockbuster entry into the stock market. This is the second-biggest listing of the past 10 years. The issue price of the IRCTC IPO was Rs 320 per share. The issue was subscribed 112 times; the retail portion was subscribed 14 times, the QIB portion 109 times and the HNI portion 354 times.
Key facts about IRCTC
IRCTC provides catering services on the railways. It also handles online ticket booking and sells packaged drinking water. IRCTC runs one of the busiest websites in Asia-Pacific: around 2.5-2.8 crore tickets are sold through it every month, and the site gets 7 crore logins daily. The company charges a fee of Rs 10-30 per ticket. After the IPO, the government's stake in the company will be 87 per cent. The government has raised Rs 620 crore through this IPO. The company spends 40 per cent of its profit on dividends.
I want to bring money with me.
Health: Boost your brain power
Boost your brain power
The days of exam preparation have arrived. Remembering the lessons you have learned is a big challenge; the brain's capacities have to be stretched to their limit. This is not a task that requires the brain of a Nobel-winning scientist. Many simple sums can be worked out in the head, and with constant practice the mind becomes so sharp that every task starts to seem easy. Memorising poems or couplets in any language is a similar mental exercise that sharpens the intellect.
In fact, if the body stays healthy, the brain also works normally. Yoga asanas and meditation contribute as much to overall physical health as regular exercise does. The only difference is that in a gymnasium you can develop muscles and get the shape you want, but this has little effect on the health of the brain. It often happens that in the examination hall, even though students know the answers to all the questions, their mind goes blank; despite knowing the answers, they cannot write. This is proof of a weak mind. The memory tips in this issue have been given with this in mind. Everything from regularly solving crossword puzzles to reciting multiplication tables can help overcome exam fear. Like the body, your brain also needs to stay organised to work better. Some tips are given below; with them, not only will your brain start working faster, it will also become easier to remember any lesson for the exam.
Prepare yourself for mental exercise. Mental exercise is different from physical exercise. Chess was invented in our country precisely because it is the most difficult and vigorous exercise for the brain. Of course, not everyone plays chess, but almost everyone enjoys crossword puzzles or the computer game Solitaire; you can start with these. If you don't want to do even that, an easy alternative is simple multiplication and division or addition and subtraction.
Try to memorise a poem or a joke once a week. This will keep your brain in shape and increase its power. Always think of doing something new and let new ideas come forward; for this, it is enough to think like a child. Children think with positive energy, wonder and curiosity. Allow yourself to daydream; this will sharpen the mind and increase its power. Don't let yourself be only one person. Create many personalities within one person, and think in as many ways as you can.
Many students start feeling anxious at the mere thought of exams. Many thoughts circle in the mind: "Will I be able to answer all the questions?", "It would have been better if I had studied a little more", and so on. These thoughts trouble almost every student. A little pressure helps performance, because it releases the hormone adrenaline, which keeps a person alert and focused. Mild tension is natural, but too much nervousness becomes a problem: it creates a negative circle around the person, who then cannot think clearly or concentrate. This hurts performance, because the student can neither focus on the questions nor give accurate answers. There are several ways to remove the fear of exams so that students can give their best performance:
Finish your syllabus in good time and complete your revision at least a day in advance; studying right up to the last moment increases stress. There are different ways to steady the mind and calm yourself: some find comfort in listening to music, some in exercising, and a lukewarm bath can also help. Choose a way to relax that works for you.
Such measures prove very beneficial on the day of the exam and the day before. They help you remember what you have studied and increase confidence. Not knowing the way to the exam centre can also cause anxiety, so find out in advance and, if possible, visit the place once yourself; this will save you from last-minute panic. Read the exam rules carefully, and get a full night's sleep the night before the exam.
"I don't know anything." This thought can trouble you if you haven't studied, but having such thoughts even after studying well is a sign of nervousness. Because of stress, students cannot concentrate; many cannot even read the questions properly. To avoid this, you can take these steps:
Reach the examination hall on time.
On reaching the hall, take long, deep breaths in and out. In nervousness, people often do not breathe properly. While taking a deep breath, straighten your back completely.
Try to concentrate by looking at a stationary, inanimate object in front of you (a wall, a picture, etc.). Repeat something positive in your mind, such as "I am going to pass this exam." Keep repeating this for 1-2 minutes and then breathe normally; you will feel calm. Read the questions carefully. If you start feeling nervous again during the exam, repeat the concentration exercises. Decide on a strategy for the paper, for example which questions to answer first, and start writing without wasting time.
Memory tips...
To remember anything, the brain assesses it on the basis of its meaning, value and relevance, and the brain's priorities work in that order. The first step to remembering is knowing the meaning, so before memorising anything, be sure to understand it; if you haven't understood the meaning, there is no point in rote learning. So first understand the meaning of the thing or lesson you want to remember, then its importance and value, and then what relevance it has in your life. Your attitude towards something is directly related to remembering it: if you keep a positive attitude while memorising, you will remember it the very first time.
Understanding anything new depends on the knowledge you have already acquired, because you can then remember the new material by testing it against and connecting it to that foundation. The more you build your basic knowledge, the easier it becomes to understand new knowledge, and the same applies to remembering. (Dr Anil Sharma, Sehat, Nai Dunia, January 2012, second issue)
Posted by Kumar Radharaman on Monday, January 23, 2012
It was good to read; it will be very useful.
Useful tips not only for children
but for adults too!
A very nice article.
Rekha, January 23, 2012
Very useful suggestions... thanks.
Good tips for young and old alike. Life itself is an exam; there is a new test every day, and it is necessary to keep learning, doing and sharing something new.
A very useful and informative article. Thanks.
Excellent post....