text (string, lengths 1–7.94M) | lang (string, 5 classes)
---|---
بادشاہَن رٔٹ پنٕنۍ ہۄنگٕنۍ دوٚچھنِس اَتھس منٛز تہٕ نظر تھٲوٕن قٲلٟنَس پؠٹھ
|
kashmiri
|
The Best Time for your Maui Wedding: Morning or Sunset?
Our clients almost always ask us what the perfect time of day is for their Maui wedding. There's no right or wrong answer to this question; there are only your plans for the day and the kind of imagery you want to see from your Maui wedding photographer. The late afternoon, usually an hour before sunset, is the most popular time for our Maui wedding celebrations. The light tends to soften, the temperatures cool a bit, and the beachgoers leave for their hotels and dinner. Of course, there's also that famous Maui sunset, which is spectacular most of the time.
A morning ceremony is the other popular choice. We can capture those deep blues and blacks of the lava rocks, ocean, and sky without worrying about other visitors scuttling back and forth behind the couple. The sand is still cool and the temperatures haven't peaked yet, although they will get close to 85 by 9:30 AM. We have special morning packages perfect for couples who wish to elope, but we can also provide our full wedding planning packages for your morning wedding on Maui. Just call to talk story: 1-808-242-1100! Mahalo!
|
english
|
/**
* Copyright 2016-2018 Dell Inc. or its subsidiaries. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0.txt
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.emc.ecs.nfsclient.rpc;
import java.util.HashMap;
import java.util.Map;
/**
* @author seibed
*
*/
public class ReplyStatus extends RpcStatus {

    /**
     * Accepted - reply status specified by RFC 1831
     * (https://tools.ietf.org/html/rfc1831).
     */
    public static final ReplyStatus MSG_ACCEPTED = new ReplyStatus(0);

    /**
     * Denied - reply status specified by RFC 1831
     * (https://tools.ietf.org/html/rfc1831).
     */
    public static final ReplyStatus MSG_DENIED = new ReplyStatus(1);

    /**
     * Preset values.
     */
    private static final Map<Integer, ReplyStatus> VALUES = new HashMap<Integer, ReplyStatus>();

    static {
        addValues(new ReplyStatus[] { MSG_ACCEPTED, MSG_DENIED });
    }

    /**
     * Convenience function to get the instance from the int status value.
     *
     * @param value
     *            The int status value.
     * @return The instance.
     */
    public static ReplyStatus fromValue(int value) {
        ReplyStatus status = VALUES.get(value);
        if (status == null) {
            status = new ReplyStatus(value);
            VALUES.put(value, status);
        }
        return status;
    }

    /**
     * @param values
     *            Instances to add.
     */
    private static void addValues(ReplyStatus[] values) {
        for (ReplyStatus value : values) {
            VALUES.put(value.getValue(), value);
        }
    }

    /**
     * Create the instance from the int status value.
     *
     * @param value
     *            The int status value.
     */
    private ReplyStatus(int value) {
        super(value);
    }
}
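The `fromValue` method above is a small interning/registry pattern: preset codes map to shared instances, and unknown codes are created once and cached for reuse. A minimal sketch of the same idea in Python (the names mirror the Java class but are illustrative only, not part of the library):

```python
class ReplyStatus:
    """Interned status objects keyed by their integer value."""

    _values = {}  # registry: int -> ReplyStatus

    def __init__(self, value):
        self.value = value

    @classmethod
    def from_value(cls, value):
        # Return the cached instance, creating and caching it on first use.
        status = cls._values.get(value)
        if status is None:
            status = cls(value)
            cls._values[value] = status
        return status

# Preset values, registered up front like the Java static initializer.
MSG_ACCEPTED = ReplyStatus.from_value(0)
MSG_DENIED = ReplyStatus.from_value(1)
```

Because lookups return cached instances, callers can compare statuses by identity instead of unwrapping the integer value each time.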
|
code
|
Insurance: An appraisal for your insurance company which values pieces at their replacement value.
Donation: Planning to donate property to a private organization, charity or museum? You may need to submit an appraisal to the IRS if you are claiming a tax deduction of more than $5000.
Fair market: A probate or equitable distribution appraisal reflects what an item could sell for on the open market given a willing buyer. Fair market appraisals are used in estate planning or by probate courts to settle estate distribution.
Appraisals generally start at $250 an hour with a two-hour minimum. Turnaround time will vary and depends on the size and complexity of the appraisal requested. After an initial review of the property, your appraiser will provide you with an approximate time frame and estimate of cost.
If we have done an appraisal for you in the past, we would be more than happy to update your appraisal at a reduced rate.
If you are simply looking to find out what your personal property may be worth at auction, please fill out our consignment form for a free auction valuation.
|
english
|
---
layout: page
weight: 0
title: Zend
seo:
title: Send Email with Zend & SendGrid
description: View instructions on how to easily send email with Zend using SendGrid, by setting up Zend's mail module.
navigation:
show: true
---
You can directly integrate Zend's mail module with SendGrid to use our SMTP servers for outgoing messages.
{% codeblock lang:php %}
<?php
// These paths assume the Zend Framework 1 library is on your include_path;
// adjust them to your installation.
require_once 'Zend/Mail.php';
require_once 'Zend/Mail/Transport/Smtp.php';

$smtpServer = 'smtp.sendgrid.net';
$username = 'your_sendgrid_username'; // placeholder: your SendGrid credentials
$password = 'your_sendgrid_password';

$config = array('ssl' => 'tls',
                'port' => '587',
                'auth' => 'login',
                'username' => $username,
                'password' => $password);

$transport = new Zend_Mail_Transport_Smtp($smtpServer, $config);

$mail = new Zend_Mail();
$mail->setFrom('sender@example.com', 'Some Sender');
$mail->addTo('email@example.com', 'Some Recipient');
$mail->setSubject('Test Subject');
$mail->setBodyText('This is the text of the mail using Zend.');
$mail->send($transport);
?>
{% endcodeblock %}
If you prefer a modular installation, then [check out Jurian Sluiman's SlmMail project at GitHub](https://github.com/juriansluiman/SlmMail.git).
|
code
|
var mongoose = require('mongoose');

var PostSchema = new mongoose.Schema({
  title: String,
  link: String,
  upvotes: {type: Number, default: 0},
  comments: [{type: mongoose.Schema.Types.ObjectId, ref: 'Comment'}]
});

// Increment the vote count and persist, invoking cb(err, post) when done.
PostSchema.methods.upvote = function (cb) {
  this.upvotes += 1;
  this.save(cb);
};

// Decrement the vote count and persist.
PostSchema.methods.downvote = function (cb) {
  this.upvotes -= 1;
  this.save(cb);
};

mongoose.model('Post', PostSchema);
|
code
|
require 'compass/import-once/activate'
http_path = "/assets/"
sass_dir = "scss"
css_dir = "css"
images_dir = "images"
javascripts_dir = "js"
fonts_dir = "css/fonts"
relative_assets = true
output_style = (environment == :production) ? :compressed : :expanded
|
code
|
हरदोई(आशीष द्विवेदी): संगठन महामंत्री और जानें मानें आरएसएस प्रचारक संजय जोशी ने प्रधानमंत्री की तारीफ की है। इसके साथ ही राम मंदिर निर्माण को लेकर भाजपा सांसद विनय कटियार के बयान से किनारा करते हुए कोर्ट के न्याय पर भरोसा जताया है। दरअसल संजय जोशी रविवार हिन्दू नव वर्ष को लेकर आरएसएस द्वारा आयोजित कार्यक्रम में शामिल होने आए। हालांकि राजनीतिक हलकों में इसके कई सियासी मायने निकाले जा रहे हैं।
पीएम मोदी की तारीफ में बांधें पुल
हरदोई के भारतीय नव संवत्सर २०७५ हिन्दू नव वर्ष पर गांधी मैदान में आयोजित कार्यक्रम में शरीक होने आए मुख्य अतिथि संजय जोशी ने भरे मन से प्रधानमंत्री की तारीफ की। उन्होंने कहा कि हम आज आजादी के ७ दशक की यात्रा के बाद अपने हिंदुस्तान का लेखा जोखा देखें तो इस देश ने तरक्की के अच्छे सोपान हासिल किए हैं। आज स्वतंत्रता के बाद भी हिंदुस्तान का दबदबा है आज अपने प्रधानमंत्री नरेंद्र मोदी विदेशों में जाते हैं तो गर्मजोशी से उनका स्वागत होता है। उन्होंने बताया कि कुछ दिन पहले अंतरराष्ट्रीय सोलर एनर्जी पर एक गठबंधन बना तो १६० देशों के प्रतिनिधि हिंदुस्तान आए।
महात्मा गांधी की मूर्ति बढ़ती हुई शक्ति का परिचायक
संजय जोशी ने कहा कि आपको याद होगा कि ३ साल पहले २०१५ में जनवरी में इंग्लैंड की पार्लियामेंट के सामने महात्मा गांधी की पूर्ण आकृति प्रतिमा का अनावरण हुआ। उस प्रतिमा का अनावरण करने अपने अर्थमंत्री अरुण जेटली और अमिताभ बच्चन गए थे। यह विचार करने वाला है कि जिस महात्मा गांधी ने अंग्रेजो को हटाने के लिए अपनी सारी जिंदगी लगा दी। उस इंग्लैंड ने बदली हुई परिस्थिति में उस महात्मा गांधी को पार्लियामेंट में सामने स्थान देकर उनके पूर्ण आकृति पुतले का अनावरण किया। यानी हिंदुस्तान की बढ़ती हुई ताकत बढ़ती हुई शक्ति का यह परिचायक है।
बीजेपी भव्य राम मंदिर के लिए संकल्पकृत
भाजपा सांसद विनय कटियार के राम मंदिर निर्माण को लेकर हिंदुओं को तैयार रहना चाहिए के इस बयान के सवाल पर संजय जोशी ने किनारा करते हुए कहा कि कोर्ट में केस चल रहा है कोर्ट का सकारात्मक निर्णय आएगा। करोड़ों हिंदुओं की का श्रद्धा विषय है भारतीय जनता पार्टी भव्य राम मंदिर के लिए संकल्पकृत है।
|
hindi
|
Carolers sang on the Newburgh waterfront to the ferry riders as they disembarked on Wednesday, December 16, 2015. Hudson Valley Press/CHUCK STEWART, JR.
|
english
|
व्यायाम अर्थात् कसरत ! अंग्रेजी में इसके लिए 'एक्सरसाइज' शब्द का प्रयोग किया जाता है। इसके महत्व से हर आदमी परिचित है। इसके लाभ क्या हैं, सामान्य रूप से इस बात को सभी जानते हैं। फिर भी यह बात दुख के साथ कहनी और स्वीकार करनी पड़ती है कि आज इस महत्त्वपूर्ण कार्य की ओर आम तौर पर ध्यान नहीं दिया जाता, या बहुत कम दिया जाता है। यही कारण है कि आज का जीवन व्यक्ति और समाज दोनों स्तरों पर अनेक प्रकार के भयानक रोगों का, महामारियों और अव्यवस्थाओं का शिकार होता जा रहा है।
एक कहावत है-स्वस्थ शरीर में ही स्वस्थ मन तथा आत्मा का निवास हुआ करता है। यह भी कहा जाता है कि बुद्धिमत्तापूर्ण कार्य और सफलता के लिए परिश्रम भी स्वस्थ शरीर द्वारा ही संभव हुआ करते हैं। इस प्रकार स्वस्थ मन, स्वस्थ बुद्धि, स्वस्थ आत्मा आदि के लिए शरीर को स्वस्थ रखना बहुत ज़रूरी है। जीवन जीने के लिए, जीवन की हर प्रकार को छोटी-बड़ी आवश्यकता पूरी करने के लिए मनुष्य को निरन्तर परिश्रम करना पड़ता है। परिश्रम भी स्वस्थ व्यक्ति ही कर सकता है, निरन्तर अस्वस्थ और रोगी रहने वाला नहीं। इन सभी बातों से स्पष्ट पता चल जाता है कि नियमित व्यायाम करना क्यों आवश्यक है। उसका महत्त्व और लाभ क्या है? स्वस्थ मन-मस्तिष्क वाला व्यक्ति ही जीवन में हर बात पर ठीक से विचार कर सकता है। उसके हानि-लाभ से परिचित हो सकता है। अपने जीवन को उन्नत तथा विकसित बनाने के लिए कोशिश कर सकता है। ऐसा करके वह बाहरी जीवन के सभी सुख तो पा ही सकता है, जिसे आत्मिक सुख और आत्मा का आनन्द कहा जाता है, उसका अधिकारी भी बन जाता है। स्पष्ट है कि इस प्रकार के सभी अधिकार पाने के लिए शरीर का स्वस्थ, सुन्दर और निरोगी होना परम आवश्यक है।
शरीर की स्वस्थता और सुन्दरता नियमपूर्वक व्यायाम करके ही कायम रखी और प्राप्त की जा सकती है। मन, बुद्धि और आत्मा भी तभी स्वस्थ-सुन्दर होंगे जब शरीर स्वस्थ-सुन्दर होगा। अत: व्यायाम का लाभ और महत्त्व स्पष्ट है। आलस्य को मनुष्य का सबसे बड़ा शत्रु माना गया है। वह आदमी को कामचोर तो बना ही देता है, धीरे-धीरे उसके तन-मन को जर्जर और रोगी भी बना दिया करता है। मनुष्य की सारी शक्तियाँ समाप्त कर उसे कहीं का नहीं रहने देता। इस आलस्य रूपी महाशत्रु से छुटकारा पाने के लिए आवश्यक है कि प्रतिदिन व्यायाम करने का नियम बनाएँ । व्यायाम करने का यह एक नियम ही जीवन की धारा को एकदम बदल सकता है। यह आदमी को स्वस्थ-सुन्दर तो बनायेगा ही उसे परिश्रमपूर्वक अपना काम करने की प्रेरणा भी देगा। हर प्रकार से चुस्त और दुरुस्त राखेगा। हमेशा चुस्त और दुरुस्त रहने वाला व्यक्ति जीवन में सहज ही सब-कुछ पाने का अधिकारी बन जाया करता है। इसके लिए कठिन और असंभव कुछ भी नहीं रह जाता। जीवन की सारी खुशियाँ, सारे सुख उसके लिए हाथ पर रखे आमले के समान सहज सुलभ हो जाया करते हैं!
स्वस्थ्य-सुन्दर, चुस्त-दुरुस्त, क्रियाशील और गतिशील रहने के लिए व्यायाम आवश्यक है, ऊपर के विवेचन से यह बात स्पष्ट हो जाती है। व्यायाम के कई रूप और आकार हैं। अर्थात स्वस्थ रहने के लिए हम अनेक प्रकार के व्यायामों में से अपनी आवश्यकता और सुविधा के अनुसार किसी भी एक रूप का चुनाव कर सकते हैं। प्रातकाल उठकर दो-चार किलोमीटर तक खुली हवा और वातावरण में भ्रमण करना सबसे सरल, सुविधाजनक, पर सबसे बढ़कर लाभदायक व्यायाम है। इसी प्रकार सुबह के वातावरण में दौड़ लगाना भी व्यायाम का एक अच्छा और सस्ता रूप माना जाता है। बैठकें लगाना, डण्ड पेलना, कुश्ती लड़ना, कबड़ी खेलना आदि सस्ते, व्यक्तिगत और सामूहिक देशी ढंग के व्यायाम हैं। कोई भी व्यक्ति अकेला या कुछ के साथ मिलकर सरलता से इनका अभ्यास कर आदी बन सकता है। यदि किसी व्यक्ति की रुचि ललित कलाओं में है, तो वह नृत्य के अभ्यास को भी एक अच्छा और उन्नत व्यायाम मानकर चल सकता है। ध्यान रहे, व्यायाम में शरीर के भीतर-बाहर के सभी अंगों का हिलता-डुलना, साँसों का उतार-चढाव आदि आवश्यक है। अतः कैरम या ताश खेलना जैसे खेलों को व्यायाम करना नहीं कहा जा सकता। हाँ, योगासन करना बड़ा ही अच्छा और उन्नत किस्म का व्यायाम माना जाता है। योगाभ्यास तन, मन और आत्मा सभी को शुद्ध करके आत्मा को भी उन्नत और बलवान बनाता है।
आज तरह-तरह के खेल खेले जाते हैं। हॉकी, फुटबॉल, वालीबॉल, बास्किट बॉल, टेनिस, क्रिकेट आदि सभी खेल सामूहिक स्तर पर खेले जाते हैं। इनसे शरीर के प्रायः सभी अंगो का व्यायाम तो होता ही है, मिल-जुलकर रहने और काम करने की प्रवृत्ति को भी बल मिलता है। सामूहिकता और सामाजिकता की भावनाओं का अभ्यास भी होता है, इनके विकास का भी अच्छा अवसर प्राप्त हो जाता है। इस प्रकार की बातों को हम व्यायाम से प्राप्त होने वाले अतिरिक्त लाभ कह सकते हैं। यह भी कहा जा सकता है कि ये सब बात सामूहिक या फिर सामाजिक स्वास्थ्य की रक्षा के लिए आवश्यक हुआ करती हैं, सो खेलों, का व्यायाम शरीर को स्वस्थ रखने के साथ-साथ सारे जीवन और समाज को स्वस्थ-सुन्दर रखने में महत्त्वपूर्ण योगदान कर सकता है।
पहले छोटे-बड़े, प्रायः सभी आयु वर्ग के लोग अपनी-अपनी ज़रूरत और सुविधा के अनुसार किसी-न-किसी प्रकार का व्यायाम अवश्य किया करते थे किन्तु आज का जीवन कुछ इस प्रकार का हो गया है कि वह अभ्यास छूटता जा रहा है, लगभग छूट ही चुका है। एक तो लोगों में पहले जैसा उत्साह ही नहीं रह गया, दूसरे पहले के समान सुविधाएँ भी नहीं मिल पातीं। अभाव और मारामारी दूसरों को गिराकर भी खुद आगे बढ़ जाने की अच्छी दौड़ ने आज जीवन को इतना अस्त-व्यस्त बना दिया है कि अपने स्वास्थ्य तक की और उचित ध्यान दे पाने का समय हमारे पास नहीं रह गया । उस पर हम लोग प्रदूषित वातावरण में रहने को विवश हैं। परिणाम हमारे सामने है। आज तरह-तरह की नयी बीमारियों ने हमारे तन-मन को ग्रस्त कर लिया है। खाना तक ठीक पचा नहीं पाते। बीमारियाँ बढ़ने के साथ-साथ डाक्टरों, अस्पतालों की भरमार हो गयी है। ज़रा-ज़रा-सी बात के लिए डॉक्टरों, डॉक्टरी सलाह और दवाइयों के मुहताज होकर रह जाना वास्तव में सामूहिक अस्वस्थता का लक्षण ही कहा जा सकता है।
स्वाभाविक प्रश्न उठता है कि इस सामूहिक अस्वस्थता से उबरने का आखिर उपाय क्या है ? उत्तर और उपाय एक ही है-व्यायाम ! किसी भी प्रकार के वैयक्तिक या सामूहिक व्यायाम करने के आदी बनकर ही इस प्रकार की विषम परिस्थितियों से छटकारा प्राप्त किया। जा सकता है। नियमपूर्वक व्यायाम करना, साफ-सुथरे वातावरण में रहना वह रामबाण औषधि है, कि जिससे हर व्यक्ति और पूरे समाज का कल्याण हो सकता है। यदि व्यायाम और उससे प्राप्त लाभ की तरफ ध्यान न दिया गया, तो व्यक्ति और समाज सभी अस्वस्थ हो जायेंगे, दुर्बल हो जायेंगे और दुर्बल को जीने का अधिकार नहीं हुआ करता. यह प्रकृति का नियम है। सो अपने और समाज के विनाश से बचने के लिए हमें व्यायाम के लाभदायक मार्ग पर आज से ही चलना शुरू कर देना चाहिए।
|
hindi
|
Here we have a rather intricate set of interviews I've conducted with a few students on my university's campus. I talk to them about what kind of music they listened to back in 2009, when most of us were starting our "Emo Phase".
So if you’re interested, take a gander and listen to our phase that isn’t a phase.
Here you’ll be finding a minor if not small collection of stuff that I’ll be talking about. Be it music, video games, photography, toys, and cosplays. So Sit back relax and read up my friends.
|
english
|
پی اِٟ نِسبَتُک حِساب لگاونُک بیٚیہِ جانٕکٲری باپتھ وٕچھِوۍ پِی اِٟ نِسبَتُک حِساب لگانُک طٕرٟقہٕ
|
kashmiri
|
\begin{document}
\title{The Hamilton-Waterloo problem for Hamilton cycles and $C_{4k}$-factors}
\begin{minipage}{100mm}
{\small\bf Abstract}
\hbox{} \hskip 6mm In this paper we give a complete solution to the Hamilton-Waterloo
problem for the case of Hamilton cycles and $C_{4k}$-factors for all
positive integers $k$. \\
{Keywords: 2-factorization; Hamilton-Waterloo problem; Hamilton cycle; cycle decompositions}
\end{minipage}\\
\topskip 1cm \textheight 8in
\section{\large\bf{Introduction}}
The Hamilton-Waterloo problem is a generalization of the
well known Oberwolfach problem, which asks for a 2-factorization of
the complete graph $K_n$ in which $r$ of its 2-factors are
isomorphic to a given 2-factor $R$ and $s$ of its 2-factors are
isomorphic to a given 2-factor $S$ with $2(r+s)=n-1$. The most
interesting case of the Hamilton-Waterloo problem is that $R$ consists of cycles of length $m$ and
$S$ consists of cycles of length $k$; such a 2-factorization of
$K_n$ is called uniform and denoted by $HW(n;r,s;m,k)$. The
corresponding Hamilton-Waterloo problem is the problem for the
existence of an $HW(n;r,s;m,k)$.
There exists no 2-factorization of $K_n$ when $n$ is even, since the degree of each vertex is odd. In this case, we consider 2-factorizations of $K_n-I_n$ (where $I_n$ is a 1-factor of $K_n$) instead. The corresponding 2-factorization is also denoted by $HW(n;r,s;m,k)$. Obviously $2(r+s)=n-2$.
It is easy to see that the following conditions are necessary for
the existence of an $HW(n;r,s;m,k)$:
{\bf Lemma 1.1.}
If there exists an $HW(n;r,s;m,k)$, then
\ \ $n\equiv 0\pmod{m}$ when $s=0$;
\ \ $n\equiv 0\pmod{k}$ when $r=0$;
\ \ $n\equiv 0\pmod{m}$ and $n\equiv 0\pmod{k}$ when $r\neq 0$ and
$s\neq 0$.
The Hamilton-Waterloo problem attracts much attention and progress
has been made by several authors. Adams, Billington, Bryant and
El-Zanati \cite{1} deal with the case $(m,k)\in
\{(3,5),(3,15),(5,15)\}$. Danziger, Quattrocchi and Stevens \cite{3}
give an almost complete solution for the case $(m,k)=(3,4)$, which
is stated below:
{\bf Theorem 1.2.} \cite{3} An $HW(n;r,s;3,4)$ exists if and only
if $n\equiv 0 \pmod {12}$ and $(n,s)\neq (12,0)$, with the
following possible exceptions:
\ \ $n=24$ and $s=2,4,6$;
\ \ $n=48$ and $s=6,8,10,14,16,18$.
The case $(m,k)=(n,3)$, i.e. Hamilton cycles and triangle-factors,
is studied by Horak, Nedela and Rosa \cite{8}, Dinitz and
Ling \cite{4,5}, and the following partial result was
obtained:
{\bf Theorem 1.3.}
\cite{4,5,8}
(a) If $n\equiv 3 \pmod {18}$, then an $HW(n;r,s;n,3)$ exists except
possibly when $n=93,111,129,183,201$ and $r=1$;
(b) If $n\equiv 9 \pmod
{18}$, then an $HW(n;r,s;n,3)$ exists except $n=9$ and $r=1$, except possibly when $n=153,207$ and $r=1$;
(c) If $n\equiv 15 \pmod {18}$ and $r\in \{1,\frac{(n+3)}{6},\frac{(n+3)}{6}+2,\frac{(n+3)}{6}+3,\ldots,\frac{(n-1)}{2}\}$, then an $HW(n;r,s;n,3)$ exists except possibly when $n=123,141,159,177,213,249$ and
$r=1$.
For $n\equiv 0\pmod{6}$, the problem for the existence of an
$HW(n;r,s;n,3)$ is still open.
The cases $(m,k)\in \{(t,2t)|t>4\}$ and $(m,k)\in\{(4,2t)|t>3\}$
have been completely solved by Fu and Huang \cite{6}.
{\bf Theorem 1.4.}\cite{6}
(a) For $t\geq 4$, an $HW(n;r,s;t,2t)$ exists if and only if $n\equiv
0 \pmod {2t}$.
(b) For an integer $t\geq 3$, an $HW(n;r,s;4,2t)$ exists if and only if $n\equiv
0 \pmod {4}$ and $n\equiv 0\pmod {2t}$.
For $r=0$ or $s=0$, the Hamilton-Waterloo problem is in fact the
problem for the existence of resolvable cycle decompositions of the
complete graph, which has been completely solved by Govzdjak
\cite{7}.
{\bf Theorem 1.5.}\cite{7} There exists a resolvable $m$-cycle
decomposition of $K_n$ (or $K_n-I$ when $n$ is even) if and only if
$n\equiv0\pmod{m}$, $(n,m)\neq(6,3)$ and $(n,m)\neq(12,3)$.
The purpose of this paper is to give a complete solution to the
Hamilton-Waterloo problem for the case of Hamilton cycles and
$C_{4k}$-factors which is stated in the following theorem.
{\bf Theorem 1.6.} For given positive integer $k$, an
$HW(n;r,s;n,4k)$ exists if and only if $r+s=[\frac{n-1}{2}]$ and
$n\equiv 0\pmod{4k}$ if $s>0$ or $n\geq 3$ if $s=0$.
\section{\large\bf Preliminaries}
In this section, we provide some basic constructions.
For convenience, we introduce the following notations first. A
$C_m$-factor of $K_n$ is a spanning subgraph of $K_n$ in which each
component is a cycle of length $m$. Let $r+s=[\frac{n-1}{2}]$ and
$$HW^*(n;m,k)=\{r \mid \text{an}\ HW(n;r,s;m,k)\ \text{exists}\}.$$
We use HC as shorthand for Hamilton cycle.
By Lemma 1.1, the necessary condition for the existence of an
$HW(n;r,s;n,4k)$ with $s>0$ is $n\equiv 0\pmod{4k}$, so we assume
$n=4kt$ and take the vertex set of $K_n$ to be $Z_{2t}\times Z_{2k}$. We
write $V_i=\{i\}\times Z_{2k}=\{i_0,i_1,\ldots,i_{2k-1}\}$ for $i\in
Z_{2t}$. Let $K_{V_i,V_j}$ be the complete bipartite graph defined on the two partite sets $V_i$ and $V_j$, and $K_{V_i}$ be the complete graph of
order $2k$ defined on the vertex set $V_i$. Obviously,
\[E(K_{4kt})=\bigcup\limits_{i=0}^{2t-1}{E(K_{V_i})}\cup
\bigcup\limits_{i\neq j}^{}E(K_{V_i,V_j}).\]
Further for $d\in Z_{2k}$, we define sets of edges $(i,j)_d=\{(i_lj_{l+d})|l\in Z_{2k}\}$ for $i,j\in Z_{2t}$. Clearly, $(i,j)_d$ is a perfect matching in $K_{V_i,V_j}$. In fact, $$E(K_{V_i,V_j})=\bigcup\limits_{d=0}^{2k-1}{(i,j)_d}.$$
The following lemmas are useful in our constructions.
{\bf Lemma 2.1.}
\cite{6} Let $I_{2n}=\{(v_0v_n)\}\cup\{(v_iv_{2n-i})|1\leq
i\leq n-1\}$. Then $K_{2n}-I_{2n}$ can be
decomposed into $n-1$ HCs, and each HC can be decomposed into two 1-factors. Moreover, by reordering the vertices of $K_{2n}$ if necessary, we may assume one of the HCs is $(v_0,v_1,\ldots,
v_{2n-1})$.
The following lemma is a generalization of Lemma 1 in \cite{8}.
{\bf Lemma 2.2.} Let $\pi$ be a permutation of $Z_{2t}$,
$d_0,d_1,\ldots,d_{2t-1}$ be nonnegative integers. Then the set of
edges \[(\pi(0),\pi(1))_{d_0}\cup(\pi(1),\pi(2))_{d_1}\cup\cdots
\cup (\pi(2t-1),\pi(0))_{d_{2t-1}}\] forms an HC of $K_n$ if
$d_0+d_1+\cdots +d_{2t-1}$ and $2k$ are relatively prime.
{\bf Proof.}
Set $d=d_0+d_1+\cdots +d_{2t-1}$, then arrange the edges as
\[H=(\pi(0)_0,\pi(1)_{d_0},\pi(2)_{d_0+d_1},\cdots,
\pi(0)_{d},\pi(1)_{d+d_0},\cdots,\pi(2t-1)_{2kd-d_{2t-1}}).\]
Since $(d,2k)=1$, the vertices
\[\pi(i)_{d_0+d_1+\cdots+d_{i-1}},\pi(i)_{d+d_0+d_1+\cdots+d_{i-1}},\ldots,\pi(i)_{(2k-1)d+d_0+d_1+\cdots+d_{i-1}}\] are mutually distinct for $i\in Z_{2t}$. Thus all vertices in $H$ are mutually distinct, so $H$ is an HC.
$\Box$
{\bf Lemma 2.3.}
Let $d_1,d_2$ be nonnegative integers. If $d_1-d_2$ and $2k$ are
relatively prime, then the set of edges $(i,j)_{d_1}\cup (i,j)_{d_2}$
forms a cycle of length $4k$ on the vertex set $V_i\cup V_j$.
{\bf Proof.}
It is a direct consequence of Lemma 2.2. Arranging the edges as the cycle
$(i_0,j_{d_1},i_{d_1-d_2},j_{2d_1-d_2},\cdots,j_{2kd_1-(2k-1)d_2})$
completes the proof.$\Box$
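Lemmas 2.2 and 2.3 are easy to sanity-check computationally. The sketch below (not part of the paper; the function names are ours) builds the edge set $(i,j)_{d_1}\cup(i,j)_{d_2}$ on $V_i\cup V_j$ and checks that it forms a single cycle of length $4k$ when $\gcd(d_1-d_2,2k)=1$:

```python
def matching_edges(i, j, d, two_k):
    # (i,j)_d = {(i_l, j_{l+d}) : l in Z_{2k}}, a perfect matching of K_{V_i,V_j}.
    return [((i, l), (j, (l + d) % two_k)) for l in range(two_k)]

def is_single_cycle(edges, vertices):
    """Check that `edges` forms one cycle through all of `vertices`."""
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    # Walk the 2-regular graph from an arbitrary start; a single cycle
    # visits every vertex before returning to the start.
    start = next(iter(vertices))
    prev, cur, seen = None, start, 0
    while True:
        seen += 1
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur = cur, nxt
        if cur == start:
            return seen == len(vertices)

def lemma_2_3_holds(k, d1, d2):
    # Vertex set V_0 ∪ V_1 with |V_i| = 2k, edge set (0,1)_{d1} ∪ (0,1)_{d2}.
    two_k = 2 * k
    vertices = {(0, l) for l in range(two_k)} | {(1, l) for l in range(two_k)}
    edges = matching_edges(0, 1, d1, two_k) + matching_edges(0, 1, d2, two_k)
    return is_single_cycle(edges, vertices)
```

For example, with $k=2$ the difference $d_1-d_2=1$ is coprime to $2k=4$ and yields one 8-cycle, while $d_1-d_2=2$ splits the edges into two 4-cycles.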
\section{\large\bf{Proof of the main theorem}}
With the above preparations, now we are ready to prove our main theorem.
Let $\widetilde{G}$ be a complete graph defined on $\{V_0,V_1,\ldots,V_{2t-1}\}$. By Lemma 2.1, $\widetilde{G}$ can be decomposed into $2t-1$
1-factors, denoted by $\widetilde{F}_1,\widetilde{F}_2,\ldots,\widetilde{F}_{2t-1}$, and $\widetilde{F}_{2i-1}\cup
\widetilde{F}_{2i}$ forms an HC for $i=1,2,\ldots,t-1$. By reordering the
vertices if necessary, we may assume
\[\widetilde{F}_1=\{V_0V_1,V_2V_3,\ldots,V_{2t-2}V_{2t-1}\},\]
\[\widetilde{F}_2=\{V_1V_2,V_3V_4,\ldots,V_{2t-1}V_0\},\]
\[\widetilde{F}_{2t-1}=\{V_0V_t\}\cup\{V_iV_{2t-i}|i=1,2,\ldots,t-1\}.\]
Let
\[F_x=\bigcup\limits_{{V_i}{V_j} \in E({{\widetilde F}_x})} {E({K_{{V_i},{V_j}}})}\quad \text{for}\ x\in Z_{2t}\backslash\{0\}\]
and
\[H_l=(0,1)_l\cup (1,2)_{2k-l}\cup (2,3)_l\cup \cdots\cup (2t-1,0)_{2k-l} \quad \text{for}\ l\in Z_{2k}.\]
Then $F_1\cup F_2=H_0\cup
H_1\cup\cdots\cup H_{2k-1}$.
{\bf Lemma 3.1.}
$F_{2i-1}\cup F_{2i}$ $(i=1,2,\ldots,t-1)$ can be decomposed into
$r_i\in \{0,2,\ldots,2k\}$ HCs and $2k-r_i$ $C_{4k}$-factors of
$K_{n}$.
{\bf Proof.}
We only give the proof for the case $i=1$, i.e. $F_1\cup F_2$; the
remaining cases are similar.
For $l=0,1,\ldots,k-1$, $H_{2l}\cup H_{2l+1}$ can be decomposed into
two edge sets:
\[\bigcup\limits_{j = 0}^{t - 1} {( {{(2j,2j + 1)}_{2l}}\bigcup {{{(2j,2j + 1)}_{2l + 1}}} ) ,} \]
\[\bigcup\limits_{j = 0}^{t - 1} {( {{(2j + 1,2j + 2)}_{2k - 2l}}\bigcup {{{(2j + 1,2j + 2)}_{2k - 2l - 1}}} ) }, \]
by Lemma 2.3, each forms a $C_{4k}$-factor of $K_n$.
Similarly, $H_{2l}\cup H_{2l+1}$ can be decomposed into another two
edge sets:
\[(H_{2l}-(2t-1,0)_{2k-2l})\cup (2t-1,0)_{2k-2l-1},\]
\[(H_{2l+1}-(2t-1,0)_{2k-2l-1})\cup (2t-1,0)_{2k-2l},\]
by Lemma 2.2,
each forms an HC of $K_{n}$.
Finally, by decomposing $H_{2l}\cup H_{2l+1}$
into two HCs when $l\in\{0,1,\ldots,\frac{r_i}{2}-1\}$ or into two $C_{4k}$-factors when $l\in\{\frac{r_i}{2},\frac{r_i}{2}+1,\ldots,k-1\}$, we have the proof.$\Box$
{\bf Lemma 3.2.}
For each $i\in Z_{2t}\backslash\{0\}$, $F_{i}\cup(\bigcup\limits_{j\in Z_{2t}}^{}{K_{V_j}})$ can be decomposed into $2k-1$ $C_{4k}$-factors and a 1-factor of $K_{n}$.
{\bf Proof.}
Notice that
$F_{i}\cup(\bigcup\limits_{j\in Z_{2t}}^{}{K_{V_j}})=tK_{4k}$ and that these
complete graphs of order $4k$ are edge-disjoint. By Lemma 2.1,
each can be decomposed into $2k-1$ HCs and one 1-factor of $K_{4k}$.
Hence, these HCs and 1-factors form $2k-1$ $C_{4k}$-factors and a
1-factor of $K_{n}$. This concludes the proof.
$\Box$
For convenience in presentation, we use ${\rm X}$ to denote $\bigcup\limits_{i\in Z_{2t}}{K_{V_i}}$ in what follows.
{\bf Proposition 3.3.}
$\{0,2,4,\ldots,\frac{n}{2}-2k\}\subseteq
HW^*(n;n,4k)$ for all positive integers $n\equiv 0 \pmod{4k}$.
{\bf Proof.}
Since $K_{n}=F_1\cup F_2\cup \cdots \cup
F_{2t-1}\cup {\rm X}$, applying Lemma
3.2 to $F_{2t-1}\cup{\rm X}$ and
Lemma 3.1 to $F_{2i-1}\cup F_{2i}$ $(1\leq i\leq t-1)$ completes the
proof.
$\Box$
{\bf Proposition 3.4.}
$\{1,3,5,\ldots,\frac{n}{2}-4k+1\}\subseteq HW^*(n;n,4k)$ for all
positive integers $n\equiv 0 \pmod{4k}$.
{\bf Proof.}
First, by Lemma 3.2, we decompose
$F_{2}\cup{\rm X}$ into $2k-1$
$C_{4k}$-factors and a 1-factor. Without loss of generality,
assume the 1-factor is $I_n^{'}=(1,2)_0\cup (3,4)_0\cup \cdots\cup
(2t-1,0)_0$.
Since $E({F_1}) = \bigcup\limits_{i = 0}^{2k - 1}((0,1)_i\cup (2,3)_i\cup\cdots\cup (2t - 2,2t - 1)_i)$,
we decompose $E({F_1})\cup I_n^{'}$ into $k-1$ $C_{4k}$-factors, an HC
and a 1-factor:
\[C_i=((0,1)_{2i-1}\cup(0,1)_{2i})\cup((2,3)_{2i-1}\cup(2,3)_{2i})\cup\cdots\cup ((2t-2,2t-1)_{2i-1}\cup\]
\[(2t-2,2t-1)_{2i}),\ \
i=1,2,\ldots,k-1,\]
\[HC_1=(0,1)_{2k-1}\cup(1,2)_0\cup(2,3)_0\cup\cdots\cup(2t-2,2t-1)_0,\]
\[I_n=(0,1)_0\cup (2,3)_{2k-1}\cup (4,5)_{2k-1}\cup\cdots\cup (2t-2,2t-1)_{2k-1}.\]
It is straightforward to verify that $C_i$ is a $C_{4k}$-factor,
$HC_1$ is an HC, $I_n$ is a 1-factor and they are edge-disjoint.
Finally, applying Lemma 3.1 to $F_{2i-1}\cup F_{2i}(2\leq i\leq
t-1)$ gives $\{1,3,5,\ldots,\frac{n}{2}-4k+1\}\subseteq
HW^*(n;n,4k)$.
$\Box$
{\bf Lemma 3.5.}
If $r_1\in\{2k,2k+1,2k+2,\ldots,4k-1\}$, then $F_{1}\cup F_{2}\cup
F_{2t-1}\cup{\rm X}$ can be
decomposed into $r_1$ HCs, $4k-1-r_1$ $C_{4k}$-factors and a
1-factor of $K_{n}$.
{\bf Proof.}
It is well known that every complete graph with even order can be
decomposed into Hamilton paths \cite{2}. Notice that
\[F_{2t-1}\cup{\rm X}=\{K_{V_0\cup
V_t}\}\cup \{K_{V_i\cup V_{2t-i}}|i=1,2,\ldots,t-1\}=tK_{4k}\] and that these
complete graphs of order $4k$ have no common vertex. Let
$P_{i,j}[u\ldots v]$ be the Hamilton path of $K_{V_{i}\cup V_{j}}$
with $u$ and $v$ as its end vertices. We may decompose
$F_{2t-1}\cup{\rm X}$ into
$\{P_0,P_1,\ldots,P_{2k-1}\}$ where
\[P_j=\{P_{0,t}[0_j,\ldots,t_j]\}\cup \{P_{i,2t-i}[i_j,\ldots,(2t-i)_j]|i=1,2,\ldots,t-1\}.\]
For each $j$, connecting the Hamilton paths of $P_j$ with the $t$ edges $(0_j1_j), (2_j3_j),\ldots, ((2t-2)_j(2t-1)_j)\in (0,1)_0\cup(2,3)_0\cup\cdots\cup(2t-2,2t-1)_0\subseteq H_0$ gives an HC. Then we have $2k$ Hamilton cycles $HC_j$, $j\in Z_{2k}$: when $t$ is odd,
\[\begin{array}{*{20}{c}}
{H{C_j} = }
& {({0_j},{1_j},{P_{1,2t - 1}}[{1_j}, \ldots ,{{(2t - 1)}_j}],{{(2t - 1)}_j},{{(2t - 2)}_j},}
\\
{}
& {{P_{2t - 2,2}}[{{(2t - 2)}_j}, \ldots ,{2_j}], \ldots ,{{(t - 1)}_j},{t_j},{P_{t,0}}[{t_j}, \ldots ,{0_j}]);}
\\
\end{array}\]
when $t$ is even,
\[\begin{array}{*{20}{c}}
{H{C_j} = }
& {({0_j},{1_j},{P_{1,2t - 1}}[{1_j}, \ldots ,{{(2t - 1)}_j}],{{(2t - 1)}_j},{{(2t - 2)}_j},}
\\
{}
& {{P_{2t - 2,2}}[{{(2t - 2)}_j}, \ldots ,{2_j}], \ldots ,{{(t + 1)}_j},{t_j},{P_{t,0}}[{t_j}, \ldots ,{0_j}]).}
\\
\end{array}\]
Then we can decompose $H_1\cup
(H_0-(0,1)_0\cup(2,3)_0\cup\cdots\cup(2t-2,2t-1)_0)$ into an HC and
a 1-factor, or a $C_{4k}$-factor and a 1-factor. In the first
case, let
\[HC_{2k}=H_1\cup(2t-1,0)_{0}-(2t-1,0)_{2k-1},\]
\[I_n=(1,2)_0\cup (3,4)_0\cup\cdots\cup(2t-3,2t-2)_0\cup(2t-1,0)_{2k-1}.\] By Lemma
2.2, $HC_{2k}$ forms an HC. $I_n$ is a 1-factor. In the second case,
let
\[C=\bigcup\limits_{j = 0}^{t - 1} {\{ {{(2j + 1,2j + 2)}_0}\bigcup {{{(2j + 1,2j + 2)}_{2k - 1}}} \} },\]
\[I_n^{'}=(0,1)_1\cup (2,3)_1\cup\cdots\cup(2t-2,2t-1)_{1}.\]
By Lemma 2.3, $C$ is a $C_{4k}$-factor and $I_n^{'}$ is a 1-factor.
Finally, as in Lemma 3.1, for each $r_1\in\{2k,2k+2,2k+4,\ldots,4k-2\}$, we decompose each $H_{2l}\cup
H_{2l+1}$ into two HCs for $l\in\{1,2,\ldots,\frac{r_1}{2}\}$ or two $C_{4k}$-factors for $l\in\{\frac{r_1}{2}+1,\frac{r_1}{2}+2,\ldots,k-1\}$. Then we have the proof.
$\Box$
{\bf Proposition 3.6.}
$\{2k,2k+1,2k+2,\ldots,\frac{n-2}{2}\}\subseteq HW^*(n;n,4k)$ for
all positive integers $n\equiv 0 \pmod{4k}$.
{\bf Proof.}
Let $r=p\cdot 2k+q,$ where $0\leq q<2k$. If $2k\leq r\leq 2kt-2k$ and $q$ is even, by Lemma 3.5, we may decompose $F_{1}\cup F_{2}\cup F_{2t-1}\cup{\rm X}$ into $2k$ HCs, $2k-1$ $C_{4k}$-factors and a 1-factor. By Lemma 3.1, we may decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p$, $F_{2p+1}\cup F_{2p+2}$ into $q$ HCs and $2k-q$ $C_{4k}$-factors, and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+2\leq j\leq t-1$. Then we have \[\{2k,2k+2,\ldots,2kt-2k\}\subseteq HW^*(n;n,4k).\]
If $2k\leq r\leq 2kt-2k$ and $q$ is odd, by Lemma 3.5, we may decompose $F_{1}\cup F_{2}\cup
F_{2t-1}\cup{\rm X}$ into $2k+1$ HCs, $2k-2$ $C_{4k}$-factors and a 1-factor. By Lemma 3.1, we may decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p$, $F_{2p+1}\cup F_{2p+2}$ into $q-1$ HCs and $2k-q+1$ $C_{4k}$-factors, and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+2\leq j\leq t-1$. Then we have \[\{2k+1,2k+3,\ldots,2kt-2k-1\}\subseteq HW^*(n;n,4k).\]
If $2kt-2k<r\leq \frac{n-2}{2}$ and $q$ is even, by Lemma 3.5, we may decompose $F_{1}\cup F_{2}\cup F_{2t-1}\cup{\rm X}$ into $4k-2$ HCs, a $C_{4k}$-factor and a 1-factor. When $q+2<2k$, by Lemma 3.1, we may decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p-1$, $F_{2p-1}\cup F_{2p}$ into $q+2$ HCs and $2k-q-2$ $C_{4k}$-factors, and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+1\leq j\leq t-1$; when $q+2=2k$, we decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p$ and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+1\leq j\leq t-1$. Then we have \[\{2kt-2k+2,2kt-2k+4,\ldots,2kt-2\}\subseteq HW^*(n;n,4k).\]
If $2kt-2k<r\leq \frac{n-2}{2}$ and $q$ is odd, by Lemma 3.5, we may decompose $F_{1}\cup F_{2}\cup F_{2t-1}\cup{\rm X}$ into $4k-1$ HCs and a 1-factor. When $q+1=2k$, by Lemma 3.1, we may decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p$ and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+1\leq j\leq t-1$; when $q+1\neq 2k$, we decompose $F_{2i-1}\cup F_{2i}$ into $2k$ HCs for each $2\leq i\leq p-1$, $F_{2p-1}\cup F_{2p}$ into $q+1$ HCs and $2k-q-1$ $C_{4k}$-factors, and $F_{2j-1}\cup F_{2j}$ into $2k$ $C_{4k}$-factors for each $p+1\leq j\leq t-1$. Then we have \[\{2kt-2k+1,2kt-2k+3,\ldots,2kt-1\}\subseteq HW^*(n;n,4k).\ \Box\]
Combining Proposition 3.3, Proposition 3.4 and Proposition 3.6, we have the main
result of this paper.
{\bf Theorem 3.7.}
$\{0,1,2,\ldots,\frac{n-2}{2}\}=HW^*(n;n,4k)$ for all positive
integers $n\equiv 0 \pmod{4k}$.
{\bf Proof.} For $n=4k$, the theorem is obvious by Theorem 1.5. For
$n=8k$, the result is also correct by Theorem 1.4. When $n>8k$, we
have $\frac{n}{2}-2k>2k$ and $\frac{n}{2}-4k+1\geq 2k+1$, then
combining with Proposition 3.3, Proposition 3.4 and Proposition 3.6
completes the proof. $\Box$
\section{\large\bf{Concluding remarks}}
It would be interesting to determine the necessary and sufficient
conditions for the existence of an $HW(n;r,s;n,k)$ for any even
integer $k$. As a first step, we proved in this paper that for any
integer $k\equiv 0\pmod 4$ the necessary condition for the existence
of $HW(n;r,s;n,k)$ is $n\equiv 0\pmod{k}$, and the necessary
condition is also sufficient. The next step is the case
$k\equiv 2\pmod 4$; we conjecture that for $k\equiv 2\pmod 4$ and
$s>0$ there exists an $HW(n;r,s;n,k)$ if and only if $n\equiv 0\pmod
k$.
\end{document}
|
math
|
\begin{document}
\title{On the interior regularity criterion and the number of singular points to the Navier-Stokes equations}
\author{Wendong Wang \,and\, Zhifei Zhang\\[2mm]
{\small School of Mathematical Sciences and BICMR, Peking University, Beijing 100871, P.R. China}\\[1mm]
{\small E-mail: wendong@math.pku.edu.cn and
zfzhang@math.pku.edu.cn}}
\date{January 3, 2012}
\maketitle
\begin{abstract}
We establish some interior regularity criteria for suitable weak
solutions of the 3-D Navier-Stokes equations, which allow the vertical part of the velocity to be large
under the local scaling invariant norm. As an application, we improve Ladyzhenskaya-Prodi-Serrin's criterion and
Escauriaza-Seregin-\v{S}ver\'{a}k's criterion. We also show that if a weak solution $u$ satisfies
\begin{eqnarray*}
\|u(\cdot,t)\|_{L^p}\le C(-t)^{\f {3-p}{2p}}
\end{eqnarray*}
for some $3<p<\infty$, then the number of singular points is finite.
\end{abstract}
\setcounter{equation}{0}
\section{Introduction}
We consider the three dimensional incompressible Navier-Stokes equations
\begin{equation}\label{eq:NS}
\left\{\begin{array}{l}
\partial_t u-\Delta u+u\cdot \nabla u+\nabla \pi=0,\\
{\rm div } u=0,
\end{array}\right.
\end{equation}
where $u(x,t)=(u_1(x,t),u_2(x,t),u_3(x,t))$ denotes the unknown
velocity of the fluid, and the scalar function $\pi(x,t)$ denotes the
unknown pressure.
In a seminal paper \cite{Leray}, Leray proved the global existence of weak solutions with finite energy.
It is well known that weak solutions are unique and regular in two spatial dimensions.
In three dimensions, however, the question of regularity and uniqueness of weak solutions is an
outstanding open problem in mathematical fluid mechanics.
In a fundamental paper \cite{CKN},
Caffarelli-Kohn-Nirenberg proved that the one-dimensional Hausdorff measure
of the set of possible singular points of a suitable weak solution $u$ is zero (see also \cite{Lin, Tian, LS, Vasseur}).
The proof is based on the following $\varepsilon$-regularity criterion: there exists an $\varepsilon>0$ such that if $u$ satisfies
\begin{eqnarray}\label{eq:CKN}
\limsup_{r\rightarrow 0}r^{-1}\int_{Q_r(z_0)}|\nabla u(y,s)|^2dyds\le \varepsilon,
\end{eqnarray}
then $u$ is regular at $z_0$. The same result remains true if (\ref{eq:CKN}) is replaced by
\begin{eqnarray}\label{eq:CKN1}
\limsup_{r\rightarrow 0}r^{-2}\int_{Q_r(z_0)}|u(y,s)|^3dyds\leq \varepsilon.
\end{eqnarray}
The quantities on the left hand side of (\ref{eq:CKN}) and (\ref{eq:CKN1}) are scaling invariant.
More general interior regularity criteria were obtained by Gustafson-Kang-Tsai \cite{GKT}
in terms of scaling invariant quantities (see Proposition \ref{prop:small regularity-GKT}).
In the first part of this paper, we will establish some interior regularity criteria, which allow the vertical part of the velocity to be large
under the local scaling invariant norm.
The proof is based on the blow-up argument and an observation that if the horizontal part of the velocity is small, then
the blow-up limit satisfies $u_h=0$, hence $\partial_3u_3=0$ and
\begin{eqnarray*}
\partial_t u_3-\Delta u_3+\partial_3\pi=0,\quad \Delta \pi=0.
\end{eqnarray*}
Using the new interior regularity criteria, we improve the Ladyzhenskaya-Prodi-Serrin regularity criteria,
which state that if the weak solution $u$ satisfies
\begin{eqnarray*}
u\in L^q(0,T;L^p(\Bbb R^3))\quad\textrm{ with} \quad \f 2 q+\f 3p\le 1,\, p\ge 3,
\end{eqnarray*}
then it is regular in $(0,T)\times \Bbb R^3$, see \cite{Serrin, Giga, Struwe, ESS}. It should be pointed out
that the regularity in the class $L^\infty(0,T;L^3(\Bbb R^3))$ is highly nontrivial,
since it does not fall in the framework of small energy regularity.
This case was solved by Escauriaza-Seregin-\v{S}ver\'{a}k \cite{ESS} by using blow-up analysis and the
backward uniqueness for the parabolic equation.
In Leray's paper \cite{Leray}, he also proved that if $[-T,0)$ is the maximal existence interval of the smooth solution, then for $p>3$, there exists
$c_p>0$ such that
\begin{eqnarray*}
\|u(\cdot,t)\|_{L^p}\ge c_p(-t)^{\f {3-p}{2p}}.
\end{eqnarray*}
In general, if $u$ satisfies
\begin{eqnarray}\label{eq:blow-up}
\|u(\cdot,t)\|_{L^p}\le C(-t)^{\f {3-p}{2p}},
\end{eqnarray}
the regularity of the solution at $t=0$ remains unknown except for $p=3$.
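Let us remark that the rate in (\ref{eq:blow-up}) is the natural one from the point of view of scaling: writing $u^{\lambda}(x,t)=\lambda u(\lambda x,\lambda^2 t)$, a change of variables gives
\begin{eqnarray*}
\|u^{\lambda}(\cdot,t)\|_{L^p}=\lambda^{1-\frac 3p}\|u(\cdot,\lambda^2 t)\|_{L^p}\le C\lambda^{1-\frac 3p}(-\lambda^2 t)^{\frac {3-p}{2p}}=C(-t)^{\frac {3-p}{2p}},
\end{eqnarray*}
so the condition (\ref{eq:blow-up}) is invariant under the natural scaling of the equations.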
Recently, for the axisymmetric Navier-Stokes equations, important progress has been made by
Chen-Strain-Yau-Tsai \cite{chen2, chen} and Koch-Nadirashvili-Seregin-\v{S}ver\'{a}k \cite{KNSS},
where they showed that the solution does not develop a Type I singularity (i.e., $\|u(\cdot,t)\|_{L^\infty}\le C(-t)^{-\f12}$)
by using De-Giorgi-Nash method and Liouville theorem respectively.
However, the case without the axisymmetric assumption is still open.
The second part of this paper will be devoted to showing that
the number of singular points is finite if the solution satisfies (\ref{eq:blow-up}) for $3<p<\infty$.
The proof is based on an improved $\varepsilon$-regularity criterion: if the suitable weak solution $(u,\pi)$
satisfies
\begin{eqnarray*}
&&\sup_{t\in [-1+t_0,t_0]}\int_{B_1(x_0)}|u(x,t)|^2dx+\int_{-1+t_0}^{t_0}\big(\int_{B_1(x_0)}|u(x,t)|^4dx\big)^{\f12}dt\\
&&\quad+\int_{-1+t_0}^{t_0}\big(\int_{B_1(x_0)}|\pi(x,t)|^2dx\big)^{\f12}dt \leq \varepsilon_6,
\end{eqnarray*}
then $u$ is regular in $Q_{\f1 2}(z_0)$, see Proposition \ref{prop:small regularity-new}.
This paper is organized as follows. In section 2, we introduce some definitions and notations.
In section 3, we establish some new interior regularity criteria for suitable weak solutions.
In section 4, we apply them to improve Ladyzhenskaya-Prodi-Serrin's criterion and Escauriaza-Seregin-\v{S}ver\'{a}k's criterion.
Section 5 is devoted to the proof of the finiteness of the number of singular points under the condition (\ref{eq:blow-up}).
In the appendix, we present the estimates of the pressure and some scaling invariant quantities.
\section{Definitions and notations}
Let us first introduce the definition of weak solution.
\begin{Definition} Let $\Om\subset \Bbb R^3$ and $T>0$. We say that $u$ is a Leray-Hopf weak solution
of (\ref{eq:NS}) in $\Om_T=\Om\times (-T,0)$ if
\begin{enumerate}
\item $u\in L^{\infty}(-T,0;L^2(\Om))\cap L^2(-T,0;H^1(\Om))$;
\item $u$ satisfies (\ref{eq:NS}) in the sense of distribution;
\item $u$ satisfies the energy inequality: for a.e. $t\in
[-T,0]$,
\begin{eqnarray*} \int_{\Om}|u(x,t)|^2dx+2\int_{-T}^t\int_{\Om}|\nabla
u|^2 dxds\leq \int_{\Om}|u(x,-T)|^2dx.
\end{eqnarray*}
\end{enumerate}
Furthermore, the pair $(u,\pi)$ is called a suitable weak solution if $\pi\in L^{3/2}(\Om_T)$
and the energy inequality is replaced by the following local energy inequality:
for any nonnegative $\phi\in C_c^\infty(\Bbb R^3\times\Bbb R)$
vanishing in a neighborhood of the parabolic boundary of $\Om_T$,
\begin{eqnarray*}
&&\int_{\Om}|u(x,t)|^2\phi dx+2\int_{-T}^t\int_{\Om}|\nabla u|^2\phi dxds\\
&&\quad\leq
\int_{-T}^t\int_{\Om}|u|^2(\partial_s\phi+\triangle\phi)+u\cdot\nabla\phi(|u|^2+2\pi)dxds,\quad \textrm{for a.e. } t\in [-T,0].
\end{eqnarray*}
\end{Definition}
\begin{Remark}\label{rem:weak solution}
In general, we don't know whether a Leray-Hopf weak solution is a
suitable weak solution. However, if $u$ is a Leray-Hopf weak
solution and $u\in L^4(\Om_T)$, then it is also a suitable weak
solution, which can be verified by using a standard mollification procedure.
\end{Remark}
Let $(u,\pi)$ be a solution of (\ref{eq:NS}) and introduce the following scaling
\begin{eqnarray}\label{eq:scaling}
u^{\lambda}(x, t)={\lambda}u(\lambda x,\lambda^2 t),\quad \pi^{\lambda}(x, t)={\lambda}^2\pi(\lambda x,\lambda^2 t),
\end{eqnarray}
for any $\lambda> 0$. Then the family $(u^{\lambda}, \pi^{\lambda})$ is also a solution of (\ref{eq:NS}).
We introduce some invariant quantities under the scaling (\ref{eq:scaling}):
\begin{eqnarray*}
&&A(u,r,z_0)=\sup_{-r^2+t_0\leq t<t_0}r^{-1}\int_{B_r(x_0)}|u(y,t)|^2dy,\\
&&C(u,r,z_0)=r^{-2}\int_{Q_r(z_0)}|u(y,s)|^3dyds,\\
&&E(u,r,z_0)=r^{-1}\int_{Q_r(z_0)}|\nabla u(y,s)|^2dyds,\\
&&D(\pi,r,z_0)=r^{-2}\int_{Q_r(z_0)}|\pi(y,s)|^{\f32}dyds,
\end{eqnarray*}
where $z_0=(x_0,t_0), Q_r(z_0)=(-r^2+t_0,t_0)\times B_r(x_0)$,
and $B_r(x_0)$ is a ball of radius $r$ centered at $x_0$. We
write $Q_r$ for $Q_r(0)$ and $B_r$ for $B_r(0)$. We also denote
\begin{eqnarray*}
&&G(f,p,q;r,z_0)=r^{1-\frac3p-\frac2q}\|f\|_{L^{p,q}(Q_r(z_0))},\\
&&H(f,p,q;r,z_0)=r^{2-\frac3p-\frac2q}\|f\|_{L^{p,q}(Q_r(z_0))},\\
&&\widetilde{G}(f,p,q;r,z_0)=r^{1-\frac3p-\frac2q}\|f-(f)_{B_r(x_0)}\|_{L^{p,q}(Q_r(z_0))},\\
&&\widetilde{H}(f,p,q;r,z_0)=r^{2-\frac3p-\frac2q}\|f-(f)_{B_r(x_0)}\|_{L^{p,q}(Q_r(z_0))},
\end{eqnarray*}
where the mixed space-time norm $\|\cdot\|_{L^{p,q}(Q_r(z_0))}$ is defined by
\begin{eqnarray*}
\|f\|_{L^{p,q}(Q_r(z_0))}\buildrel\hbox{\footnotesize def}\over = \Big(\int_{t_0-r^2}^{t_0}\Big(\int_{B_r(x_0)}|f(x,t)|^pdx\Big)^{\f
q p}dt\Big)^\f 1q,
\end{eqnarray*}
and $(f)_{B_r(x_0)}$ is the average of $f$ in the ball $B_r(x_0)$. For simplicity of notation, we denote
$$A(u,r,(0,0))=A(u,r),\quad \tilde{C}(u,r)=C(u-(u)_{B_r},r),\quad G(f,p,q;r,(0,0))=G(f,p,q;r)$$
and so on. These scaling invariant quantities will play an important role in the interior regularity theory.
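For instance, the invariance of $C$ under (\ref{eq:scaling}) can be checked directly: for $z_0=(0,0)$, the change of variables $y'=\lambda y$, $s'=\lambda^2 s$ gives
\begin{eqnarray*}
C(u^{\lambda},r)=r^{-2}\int_{Q_r}\lambda^3|u(\lambda y,\lambda^2 s)|^3dyds=(\lambda r)^{-2}\int_{Q_{\lambda r}}|u(y',s')|^3dy'ds'=C(u,\lambda r),
\end{eqnarray*}
and the same computation shows $A(u^{\lambda},r)=A(u,\lambda r)$, $E(u^{\lambda},r)=E(u,\lambda r)$ and $D(\pi^{\lambda},r)=D(\pi,\lambda r)$.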
Now we recall the definitions of Lorentz space and BMO space \cite{Graf}.
\begin{Definition} Let $\Omega\subset\Bbb R^n$ and $1\leq p,\ell \leq \infty$.
We say that a measurable function $f\in L^{p,\ell}(\Omega)$ if $\|f\|_{L^{p,\ell}(\Omega)}<+\infty$, where
\begin{eqnarray*}
\|f\|_{L^{p,\ell}(\Omega)}\buildrel\hbox{\footnotesize def}\over =
\left\{\begin{array}{l}
\Big(\int_0^{\infty}\sigma^{\ell-1}|\{x\in \Omega;|f|>\sigma\}|^{\frac{\ell}{p}}d\sigma\Big)^{\f 1{\ell}}\quad
\textrm{for } \ell<+\infty,\\
\displaystyle\sup_{\sigma>0}\sigma|\{x\in \Omega;|f|>\sigma\}|^{\frac{1}{p}}\quad
\textrm{for } \ell=+\infty.
\end{array}\right.
\end{eqnarray*}
Moreover, $f(x,t)\in L^{q,s}(-T,0; L^{p,\ell}(\Omega))$ if $\|f(\cdot,t)\|_{L^{p,\ell}(\Omega)}\in L^{q,s}(-T,0)$.
\end{Definition}
The following facts will be used frequently: for any $R>0$,
\begin{eqnarray}\label{eq:lorentz-inc}
&&\|f\|_{L^{p,\ell_1}}\le \|f\|_{L^{p,\ell_2}},\quad\textrm{ if }\quad \ell_1\ge \ell_2;\\
&&\|f\|_{L^{p_1}(\Om)}^{p_1}\le C\big(R^{p_1}|\Om|+R^{p_1-p}\|f\|_{L^{p,\infty}(\Om)}^p\big),\quad\textrm{ if }p>p_1.\label{eq:lorentz}
\end{eqnarray}
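The second fact (\ref{eq:lorentz}) follows from the layer cake representation by splitting at the level $R$: for $p>p_1$,
\begin{eqnarray*}
\|f\|_{L^{p_1}(\Om)}^{p_1}&=&p_1\int_0^{\infty}\sigma^{p_1-1}|\{x\in \Om;|f|>\sigma\}|d\sigma\\
&\leq& p_1\int_0^{R}\sigma^{p_1-1}|\Om|d\sigma+p_1\int_R^{\infty}\sigma^{p_1-1-p}\|f\|_{L^{p,\infty}(\Om)}^pd\sigma\\
&=&R^{p_1}|\Om|+\frac{p_1}{p-p_1}R^{p_1-p}\|f\|_{L^{p,\infty}(\Om)}^p,
\end{eqnarray*}
where we used $|\{x\in \Om;|f|>\sigma\}|\le \sigma^{-p}\|f\|_{L^{p,\infty}(\Om)}^p$. The same splitting will be used repeatedly in the proof of Lemma \ref{lem:local bound}.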
Recall that a locally integrable function $f\in BMO(\Bbb R^n)$ if it satisfies
\begin{eqnarray*}
\sup_{R>0,x_0\in\Bbb R^n}\frac{1}{|B_R(x_0)|}\int_{B_R(x_0)}|f(x)-f_{B_R(x_0)}|dx<\infty.
\end{eqnarray*}
Moreover, $f(x)\in VMO(\Bbb R^n)$ if $f(x)\in BMO(\Bbb R^n)$ and for any $x_0\in \Bbb R^n$,
$$
\limsup_{R\downarrow 0}\frac{1}{|B_R(x_0)|}\int_{B_R(x_0)}|f(x)-f_{B_R(x_0)}|dx=0.
$$
We say that a function $u\in BMO^{-1}(\Bbb R^n)$ if there exist $U_j\in
BMO(\Bbb R^n)$ such that $u=\sum_{j=1}^n\partial_jU_j$. $VMO^{-1}(\Bbb R^n)$ is
defined similarly. A remarkable property of $BMO$ functions is that
\begin{eqnarray*}
\sup_{R>0,x_0\in\Bbb R^n}\frac{1}{|B_R(x_0)|}\int_{B_R(x_0)}|f(x)-f_{B_R(x_0)}|^qdx<\infty
\end{eqnarray*}
for any $1\le q<\infty$.
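For instance, $f(x)=\log|x|$ belongs to $BMO(\Bbb R^n)\setminus L^{\infty}(\Bbb R^n)$. The higher integrability above is a consequence of the John-Nirenberg inequality: there exist constants $c_1,c_2>0$, with $c_2$ depending on $\|f\|_{BMO}$, such that
\begin{eqnarray*}
|\{x\in B_R(x_0); |f(x)-f_{B_R(x_0)}|>\sigma\}|\leq c_1e^{-c_2\sigma}|B_R(x_0)|\quad\textrm{for all }\sigma>0.
\end{eqnarray*}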
Let us conclude this section by recalling the following $\varepsilon$-regularity results.
Here and in what follows, we say that a solution $u$ is regular at $z_0=(x_0,t_0)$ if $u\in L^\infty(Q_r(z_0))$ for some $r>0$.
\begin{Proposition}\label{prop:small regularity-CKN}\cite{CKN, LS}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1(z_0)$. There exists an
$\varepsilon_0>0$ such that if
\begin{eqnarray*}
\int_{Q_1(z_0)}|u(x,t)|^3+|\pi(x,t)|^{3/2}dxdt\leq \varepsilon_0,
\end{eqnarray*}
then $u$ is regular in $Q_{\f 12}(z_0)$. Moreover, $\pi$ can be replaced by
$\pi-(\pi)_{B_r}$ in the integral.
\end{Proposition}
\begin{Proposition}\label{prop:small regularity-GKT}\cite{GKT}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1(z_0)$ and $w=\nabla\times u$.
There exists an $\varepsilon_1>0$ such that if one of the following two conditions holds,
\begin{enumerate}
\item $G(u,p,q;r,z_0)\leq \varepsilon_1$ for any $0<r<\f12$, where $1\leq \frac3p+\frac2q\leq 2$;
\item $H(w,p,q;r,z_0)\leq \varepsilon_1$ for any $0<r<\f12$, where $2\leq \frac3{p}+\frac2{q}\leq 3$ and $(p,q)\neq(1,\infty)$;
\end{enumerate}
then $u$ is regular at $z_0$.
\end{Proposition}
\section{Interior regularity criteria for suitable weak solutions}
The purpose of this section is to establish some interior regularity criteria,
which allow the vertical part of the velocity to be large
under the local scaling invariant norm.
These results improve some classical results and Gustafson-Kang-Tsai's result (Proposition \ref{prop:small regularity-GKT}).
Set $u=(u_h,u_3)$. Let us state our main results.
\begin{Theorem}\label{thm:interior-uball}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$ and satisfy
\begin{eqnarray*}
C(u,1)+D(\pi,1)\leq M.
\end{eqnarray*}
Then there exists a positive constant $\varepsilon_2$ depending on $M$ such that if
\begin{eqnarray*}
C(u_h,1)\leq \varepsilon_2,
\end{eqnarray*}
then $u$ is regular at $(0,0)$.
\end{Theorem}
\begin{Theorem}\label{thm:interior-u}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$ and satisfy
\begin{eqnarray*}
G(u,p,q;r)\leq M\quad\textrm{ for any }0<r<1,
\end{eqnarray*}
where $1\le \frac3p+\frac2q<2$, $1<q\leq \infty$.
There exists a positive constant $\varepsilon_3$ depending on $p, q, M$
such that $(0,0)$ is a regular point if
\begin{eqnarray*}
G(u_h,p,q;r^*)\leq \varepsilon_3
\end{eqnarray*}
for some $r^*$ with $0<r^*<\min\{\frac1 2, (C(u,1)+D(\pi,1))^{-2}\}$.
\end{Theorem}
\begin{Theorem}\label{thm:interior-gradient}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$ and satisfy
\begin{eqnarray*}
H(\nabla u,p,q;r)\leq M\quad \textrm{for any } 0<r<1,
\end{eqnarray*}
where $2\le \frac3{p}+\frac2{q}<3, 1<p\le \infty$.
There exists a positive constant $\varepsilon_4$ depending on $p, q, M$ such that $(0,0)$ is a regular point if
\begin{eqnarray}\label{eq:3.6}
H(\nabla u_h,p,q;r^*)\leq \varepsilon_4
\end{eqnarray}
for some $r^*$ with $0<r^*<\min\{\frac12, (C(u,1)+D(\pi,1))^{-2}\}$.
\end{Theorem}
\begin{Remark}
As a special case of Theorem \ref{thm:interior-u}, it follows that
$u$ is regular if
$$|u_3|\le \frac{M}{\sqrt{T-t}},\quad |u_h|\leq\frac{\varepsilon_3}{\sqrt{T-t}},$$
which improves Leray's result \cite{Leray}.
And from Theorem \ref{thm:interior-gradient}, it follows that
$u$ is regular at $(0,0)$ if for any $0<r<1$,
\begin{eqnarray*}
r^{-1}\int_{Q_r}|\nabla u_3|^2dxdt\leq M^2,\quad r^{-1}\int_{Q_r}|\nabla u_h|^2dxdt\leq \varepsilon_4^2,
\end{eqnarray*}
which improves Caffarelli-Kohn-Nirenberg's result \cite{CKN}.
\end{Remark}
The proof of Theorem \ref{thm:interior-uball} is based on a compactness argument and the following lemma.
\begin{Lemma}\label{lem:c-d}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$ and $\widetilde{D}(\pi,1)\leq M$.
Then $u$ is regular at $(0,0)$ if
\begin{eqnarray*}
C(u,r_0)\leq c\varepsilon_0^{9/5}r_0^{8/5}\quad\textrm{ for some }0<r_0\leq 1.
\end{eqnarray*}
Here $c$ is a small constant depending on $M$.
\end{Lemma}
\noindent{\textbf{Proof.}} By (\ref{eq:pressure}) and H\"{o}lder's inequality, for $0<r<{r_0}/4$ we have
\begin{eqnarray*}
C(u,r)+\widetilde{D}(\pi,r)&\leq&\frac{r_0^2}{r^2}C(u,r_0)+C(\frac{r}{r_0})^{5/2} \widetilde{D}(\pi,r_0)+C\frac{r_0^2}{r^2}C(u,r_0)\\
&\leq& CM\frac{r^{5/2}}{r_0^{9/2}}+C\frac{r_0^2}{r^2}C(u,r_0).
\end{eqnarray*}
Choosing $r=(\frac{\varepsilon_0}{2CM})^{2/5}r_0^{9/5}$
and by assumption, we infer that
\begin{eqnarray*}
C(u,r)+\widetilde{D}(\pi,r)<\varepsilon_0,
\end{eqnarray*}
which implies that $(0,0)$ is a regular point by Proposition \ref{prop:small regularity-CKN}. \hphantom{MM}
\llap{$\square$}\goodbreak
Now let us turn to the proof of Theorem \ref{thm:interior-uball}.
\noindent{\textbf{Proof of Theorem \ref{thm:interior-uball}}}.
Assume that the statement of the theorem is false. Then there exist a constant $M$ and a sequence $(u^k,\pi^k)$
of suitable weak solutions of (\ref{eq:NS}) in $Q_1$, singular at $(0,0)$, which satisfy
\begin{eqnarray*}
C(u^k,1)+D(\pi^k,1)\leq M,\quad C(u_h^k,1)\leq \frac 1k.
\end{eqnarray*}
Then by the local energy inequality, it is easy to get
\begin{eqnarray*}
A(u^k,3/4)+E(u^k,3/4)\leq C(M),
\end{eqnarray*}
hence by using the Lions-Aubin lemma, there exists a suitable weak solution $(v, \pi')$ of (\ref{eq:NS}) such that (up to a subsequence),
\begin{eqnarray*}
u^k\rightarrow v,\quad u_h^k\rightarrow 0 \quad {\rm in} \quad L^3(Q_\f12),\quad
\pi^k\rightharpoonup \pi'\quad {\rm in} \quad L^\f {3}2(Q_\f12),
\end{eqnarray*}
as $k\rightarrow+\infty$. That is, $v_h=0$, which gives $\partial_3v_3=0$ by $\nabla\cdot v=0$, and hence,
\begin{eqnarray*}
&\partial_tv_3+\partial_3 \pi'-\triangle v_3=0,\quad -\triangle\pi'=0,\quad \textrm{or}\\
&\partial_tv+\nabla \pi'-\triangle v=0,
\end{eqnarray*}
which implies that $|v|\leq C(M)$ in $Q_{1/4}$ by the classical theory of the linear Stokes equations (see \cite{chen2} for example).
However, $(0,0)$ is a singular point of $u^k$, hence by Lemma \ref{lem:c-d}, for any $0<r<1/4$,
\begin{eqnarray*}
c\varepsilon_0^{9/5}r^{8/5}&\leq& \lim_{k\rightarrow\infty}r^{-2}\int_{Q_r}|u^k|^3dxdt\\
&=& C(v,r)
\leq C(M)r^{3},
\end{eqnarray*}
which is a contradiction by letting $r\rightarrow 0$.\hphantom{MM}
\llap{$\square$}\goodbreak
The proof of Theorem \ref{thm:interior-u} is motivated by \cite{Se3} and based on the blow-up argument.
\noindent{\textbf{Proof of Theorem \ref{thm:interior-u}}}.
Assume that the statement of the theorem is false. Then there exist constants $p, q, M$ and a sequence $(u^k,\pi^k)$
of suitable weak solutions of (\ref{eq:NS}) in $Q_1$, singular at $(0,0)$, which satisfy
\begin{eqnarray*}
&&G(u^k,p,q;r)\leq M \quad {\rm for \,\,all}\quad 0<r<1,\\
&&G(u_h^k,p,q;r_k)\leq \frac1k,
\end{eqnarray*}
where $0<r_k<\min\{\frac12, (C(u^k,1)+D(\pi^k,1))^{-2}\}$. Then it follows from Lemma \ref{lem:invariant} that
\begin{eqnarray*}
A(u^k,r)+E(u^k,r)+D(\pi^k,r)\leq C(M,p,q)
\end{eqnarray*}
for any $0<r<r_k.$
Set $v^k(x,t)=r_ku^k(r_kx,r_k^2t), q^k(x,t)=r_k^2\pi^k(r_kx,r_k^2t)$. Then
\begin{eqnarray*}
A(v^k,r)+E(v^k,r)+D(q^k,r)\leq C(M,p,q)
\end{eqnarray*}
for any $0<r<1$. The Lions-Aubin lemma ensures that there exists a suitable weak solution $(\bar{v}, \bar{q})$ of (\ref{eq:NS})
such that (up to a subsequence),
\begin{eqnarray*}
&&v^k\rightarrow \bar{v}\quad {\rm in} \quad L^3(Q_\f12),\quad q^k\rightharpoonup \bar{q}\quad {\rm in} \quad L^\f {3}2(Q_\f12),\\
&& v_h^k\rightharpoonup 0\quad {\rm in} \quad L^q((-\f14,0); L^p(B_\f12)),
\end{eqnarray*}
as $k\rightarrow+\infty$.
Then we have $\bar{v}_h=0$ and
$$\partial_t\bar{v}_3+\partial_3 \bar{q}-\triangle \bar{v}_3=0,$$
which implies that $|\bar{v}_3|\leq C(M)$ in $Q_{\f14}$.
However, $(0,0)$ is a singular point of $v^k$,
hence by Proposition \ref{prop:small regularity-CKN} and (\ref{eq:pressure1}), for any $0<r<1/4$,
\begin{eqnarray*}
\varepsilon_0 &\leq&\liminf_{k\rightarrow\infty}r^{-2}\int_{Q_r}|v^k|^3+|q^k|^{3/2}dxdt\\
&\leq& C\liminf_{k\rightarrow\infty}\Big(C(\bar{v},r)+\frac{r}{\rho}D(q^k,\rho)+(\frac{\rho}{r})^2C(v^k,\rho)\Big)\\
&\leq& C\Big(C(\bar{v},r)+\frac{r}{\rho}+(\frac{\rho}{r})^2C(\bar{v},\rho)\Big)\\
&\leq& Cr^{1/2}\quad (\textrm{by choosing}\quad \rho=r^{\f12}),
\end{eqnarray*}
which is a contradiction if we take $r$ small enough.\hphantom{MM}
\llap{$\square$}\goodbreak
\noindent{\bf Proof of Theorem \ref{thm:interior-gradient}}. Without loss of generality, let us assume that
\begin{eqnarray*}
\f 83<\f 3p+\f 2q<3.
\end{eqnarray*}
The other case can be reduced to it by H\"{o}lder inequality. By Lemma \ref{lem:invariant}, we have
\begin{eqnarray*}
A(u,r)+E(u,r)+D(\pi,r)\leq C(M)\big(r^{1/2}\big(C(u,1)+D(\pi,1)\big)+1\big)\leq C(M),
\end{eqnarray*}
for any $0<r\leq r_1\triangleq\min\{\frac12, \big(C(u,1)+D(\pi,1)\big)^{-2}\}$. This together with the interpolation inequality gives
\begin{eqnarray}\label{eq:3.7}
C(u,r)\leq C(M)\quad \textrm{for any } 0<r\leq r_1.
\end{eqnarray}
We get by the Poincar\'{e} inequality that
\begin{eqnarray*}
\widetilde{G}(u_h,p_1,q_1;r)\le C\widetilde{H}(\nabla u_h,p,q;r),
\end{eqnarray*}
where $p_1=\f {3p} {3-p}, q_1=q$, hence it follows from (\ref{eq:7.9}) and (\ref{eq:3.6}) that
\begin{eqnarray*}
\widetilde{C}(u_h,r^*)&\leq& C(M)\big(A(u_h,r^*)+E(u_h,r^*)\big)^{\frac{1-3\delta}{1-2\delta}}
\widetilde{G}(u_h,p_1,q_1;r^*)^{\frac{1}{1-2\delta}}\\
&\leq& C(M)\widetilde{G}(u_h,p_1,q_1;r^*)^{\frac{1}{1-2\delta}}\le C(M)\varepsilon_4^{\frac{1}{1-2\delta}},
\end{eqnarray*}
where $\delta=2-\frac3{p_1}-\frac2{q_1}\in (0,\f13)$, hence by (\ref{eq:3.7}) for $0<r<r^*$
\begin{eqnarray*}
C(u_h,r)\leq C(\frac{r}{r^*})C(u_h,r^*)+C(\frac{r^*}{r})^2\widetilde{C}(u_h,r^*)\leq C(M)
\big(\frac{r}{r^*}+(\frac{r^*}{r})^2\varepsilon_4^{\frac{1}{1-2\delta}}\big).
\end{eqnarray*}
Taking first $r$ and then $\varepsilon_4$ small enough, we obtain
\begin{eqnarray*}
C(u_h,r)\leq \varepsilon_3^3.
\end{eqnarray*}
Then the result follows from Theorem \ref{thm:interior-u}.\hphantom{MM}
\llap{$\square$}\goodbreak
\section{Applications of interior regularity criteria}
\subsection{Ladyzhenskaya-Prodi-Serrin's criterion}
Using the interior regularity criteria established in Section 3, we present Ladyzhenskaya-Prodi-Serrin type criteria
in Lorentz spaces.
\begin{Theorem}\label{thm:serrin}
Let $u$ be a Leray-Hopf weak solution of (\ref{eq:NS}) in $\Bbb R^3\times(-1,0)$. Assume that $u$ satisfies
\begin{eqnarray}\label{eq:4.8}
\|u\|_{L^{q,\infty}((-1,0); L^{p,\infty}(\Bbb R^3))}\leq M,\quad \|u_h\|_{L^{q,\ell}((-1,0); L^{p,\infty}(\Bbb R^3))}<\infty,
\end{eqnarray}
where $\frac3p+\frac2q=1,$ $ 3<p\leq \infty$, and $1\leq \ell<\infty$. Then $u$ is regular in $\Bbb R^3\times(-1,0]$.
For $\ell=\infty$ or $p=3$, the same result holds if the second condition of (\ref{eq:4.8}) is replaced by
\begin{eqnarray*}
\|u_h\|_{L^{q,\infty}((-1,0); L^{p,\infty}(\Bbb R^3))}\leq \varepsilon_5,
\end{eqnarray*}
where $\varepsilon_5$ is a small constant depending on $M$.
\end{Theorem}
\begin{Remark}
For $\ell=\infty$, we improve Kim-Kozono's result \cite{Kim} and He-Wang's result \cite{He}, where the smallness of all components of the velocity
is imposed. In the general case, we improve Sohr's result \cite{Sohr} by allowing the vertical part of the velocity to belong to the weak $L^p$ space.
\end{Remark}
\begin{Remark}
Under the condition (\ref{eq:4.8}), it can be verified that a Leray-Hopf weak solution is a suitable weak solution.
We leave it to the interested reader.
\end{Remark}
The proof is based on the following lemma.
\begin{Lemma}\label{lem:local bound}
Assume that $u$ satisfies
\begin{eqnarray*}
\|u\|_{L^{q,\infty}((-1,0); L^{p,\infty}(\Bbb R^3))}\leq m,
\end{eqnarray*}
where $\frac3p+\frac2q=1,$ $ 3\leq p\leq \infty$. Then for any $0<r<1$ and $0<\epsilon<1$, there hold
\begin{eqnarray*}
&&G(u,\frac{9}{10}p,\frac{4}{5}q;r)\leq C\epsilon^{\frac{4q}{5}}+C\epsilon^{-\frac{q}{5}}m^{q},\quad 3<p<\infty, \\
&&A(u,r)\leq C\epsilon^2+C\epsilon^{-1}m^3,\quad p=3, \\
&&G(u,\infty,\f32;r)\leq C\epsilon^{3/2}+C\epsilon^{-1/2}m^2,\quad p=\infty.
\end{eqnarray*}
\end{Lemma}
\noindent{\bf Proof}. First we consider the case of $3<p<\infty$. Using
the definition of Lorentz space, we infer that
\begin{eqnarray*}
&&r^{(\frac45-\f 8{3p})q-2}\int_{-r^2}^0\Big(\int_{B_r}|u|^{\f {9}{10}p}dx\Big)^{\f {8q} {9p}}dt\\
&&\leq Cr^{(\frac45-\f 8{3p})q-2}\int_{-r^2}^0\Big(\int_0^{\infty}\sigma^{\f {9} {10}p-1}|\{x\in B_r; |u(x,t)|>\sigma\}| d\sigma \Big)^{\f {8q} {9p}}dt\\
&&\leq Cr^{(\frac45-\f 8{3p})q-2}\Big(r^2 (r^3R^{\f {9}{10}p})^{\f {8q} {9p}}
+\int_{-r^2}^0\Big(\int_R^{\infty}\sigma^{\f {9} {10}p-1}|\{x\in B_r; |u(x,t)|>\sigma\}| d\sigma \Big)^{\f {8q} {9p}}dt\Big)\\
&&\leq Cr^{(\frac45-\f 8{3p})q-2}\Big(r^2 (r^3R^{\f {9}{10}p})^{\f {8q}
{9p}}+R^{-\f {4q}{45}}\int_{-r^2}^0\|u(\cdot,t)\|_{L^{p,\infty}}^{\f {8q} {9}}dt\Big)\\
&&\leq Cr^{(\frac45-\f 8{3p})q-2}\Big(r^2 (r^3R^{\f {9}{10}p})^{\f {8q}
{9p}}+R^{-\f {4q}{45}}r^{-(1-\frac3p)\f {8q} 9+2}I(r)\Big)\\
&&\leq C\epsilon^{\f {4q} 5}+C\epsilon^{-\f
{4q}{45}}I(r),
\end{eqnarray*}
where we take $R=\epsilon r^{-1}$ and the estimate of $I(r)$ is given by
\begin{eqnarray*}
&&I(r)\equiv r^{(1-\frac3p)\f {8q} 9-2}\int_{-r^2}^0\|u(\cdot,t)\|_{L^{p,\infty}(B_1)}^{\f {8q} 9}dt\\
&&\leq Cr^{(1-\frac3p)\f {8q} 9-2}\int_{0}^{\infty}\sigma^{\f {8q} 9-1}|\{t\in(-r^2,0); \|u(\cdot,t)\|_{L^{p,\infty}}>\sigma \}|d\sigma\\
&&\leq Cr^{(1-\frac3p)\f {8q} 9-2}\Big(R^{\f {8q} 9}r^2+\int_{R}^{\infty}\sigma^{\f {8q} 9-1}|\{t\in(-r^2,0); \|u(\cdot,t)\|_{L^{p,\infty}}>\sigma \}|d\sigma\Big)\\
&&\leq Cr^{(1-\frac3p)\f {8q} 9-2}\Big(R^{\f {8q} 9}r^2+R^{-\f q9}\|u\|_{L^{q,\infty}(-1,0; L^{p,\infty}(B_1))}^q\Big)\\
&&\leq Cr^{(1-\frac3p)\f {8q} 9-2}\Big(R^{\f {8q} 9}r^2+R^{-\f q9}m^q\Big)\\
&&\leq C\epsilon^{\f {8q} 9}+C\epsilon^{-\f {q} 9}m^q\quad
(R=\epsilon r^{\frac3p-1}).
\end{eqnarray*}
This gives the first inequality. For $p=3$, we consider
\begin{eqnarray*}
\sup_{-r^2<t<0}r^{-1}\int_{B_r}|u|^2dx
&\leq& C\sup_{-r^2<t<0}r^{-1}\int_0^{\infty}\sigma|\{x\in B_r; |u(x,t)|>\sigma\}|d\sigma\\
&\leq& C\sup_{-r^2<t<0}r^{-1}\Big(R^2r^3+\int_R^{\infty}\sigma|\{x\in B_r; |u(x,t)|>\sigma\}|d\sigma\Big)\\
&\leq& C\sup_{-r^2<t<0}r^{-1}\Big(R^2r^3+R^{-1}\|u(\cdot,t)\|_{L^{3,\infty}}^3\Big),
\end{eqnarray*}
which gives the second inequality by taking $R=\epsilon r^{-1}$. Let $g(t)=\|u(\cdot,t)\|_{L^\infty(B_1)}$. Then we have
\begin{eqnarray*}
r^{-1/2}\int_{-r^2}^0g(t)^{3/2}dt
&\leq& Cr^{-1/2}\int_0^{\infty}\sigma^{\f12}|\{t\in (-r^2,0); |g(t)|>\sigma\}|d\sigma\\
&\leq& Cr^{-1/2}\Big( R^{\f32}r^2+\int_R^{\infty}\sigma^{\f12}|\{t\in (-r^2,0); |g(t)|>\sigma\}|d\sigma\Big)\\
&\leq& Cr^{-\f 12}\Big(R^{\f32}r^2+R^{-\f12}m^2\Big),
\end{eqnarray*}
which gives the third inequality by taking $R=\epsilon r^{-1}$.
\hphantom{MM}
\llap{$\square$}\goodbreak
\noindent{\bf Proof of Theorem \ref{thm:serrin}}. By translation invariance and Theorem \ref{thm:interior-u},
it suffices to show that
\begin{eqnarray}\label{eq:5.3}
G(u,p_1,q_1;r)\leq M,\quad G(u_h,p_1,q_1;r)\leq \varepsilon_3,
\end{eqnarray}
for any $0<r<1/2$ and some $(p_1,q_1)$ with $1\leq \frac3{p_1}+\frac2{q_1}<2$.
For $3<p<\infty$, let $p_1=\frac{9}{10}p$ and $q_1=\frac{4}{5}q$, then $\frac3{p_1}+\frac2{q_1}<\frac54<2$.
For $\ell<\infty$, we have $\| u_h\|_{L^{q,\ell}(-r^2,0; L^{p,\infty}(B_r))}\rightarrow 0$ as $r\rightarrow 0$.
Hence by Lemma \ref{lem:local bound}, the condition (\ref{eq:5.3}) holds
if we take $\epsilon$ small enough, and then take $r$ small enough.
The proof of the other cases is similar. We omit the details.
\hphantom{MM}
\llap{$\square$}\goodbreak
\subsection{Escauriaza-Seregin-\v{S}ver\'{a}k's criterion}
The following theorem improves Escauriaza-Seregin-\v{S}ver\'{a}k's criterion by noting the inclusions
\begin{eqnarray*}
L^3(\Bbb R^3)\subset L^{3,\ell}(\Bbb R^3)\quad\textrm{ for }\ell>3 \quad \textrm{and} \quad L^3(\Bbb R^3)\subset VMO^{-1}(\Bbb R^3).
\end{eqnarray*}
\begin{Theorem}\label{thm:main2}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $\Bbb R^3\times(-1,0)$. If
\begin{eqnarray*}
\|u_h\|_{L^{\infty}((-1,0); L^{3,\ell}(\Bbb R^3))}+\|u_3\|_{L^{\infty}((-1,0); BMO^{-1}(\Bbb R^3))}=M<\infty,
\end{eqnarray*}
for some $\ell<\infty$, and $u_3(x,t)\in VMO^{-1}(\Bbb R^3)$ for $t\in (-1,0]$,
then $u$ is regular in $\Bbb R^3\times(-1,0]$.
\end{Theorem}
We need the following lemma, which gives a bound of local scaling invariant energy.
\begin{Lemma}\label{lem:energy}
Under the assumptions of Theorem \ref{thm:main2}, there holds
\begin{eqnarray*}
A(u,r)+E(u,r)+D(\pi,r)\leq C(M, C(u,1), D(\pi,1))\quad\textrm{ for any } \,0<r<1/2.
\end{eqnarray*}
\end{Lemma}
\noindent{\bf Proof.}
Let $\zeta(x,t)$ be a smooth function with $\zeta\equiv 1$ in $Q_r$
and $\zeta=0$ in $Q_{2r}^c$. Since $u_3\in
L^{\infty}(-1,0;BMO^{-1}(\Bbb R^3))$, there exists $U(x,t)\in
L^{\infty}(-1,0;BMO(\Bbb R^3))$ such that $u_3=\nabla\cdot U$. We have by
H\"{o}lder inequality that \begin{eqnarray}o
&&r^{-2}\int_{Q_{2r}}|u_3|^3\zeta^2dxdt\\
&&=r^{-2}\int_{Q_{2r}}\sum_{j=1}^3\partialartial_j U_j\cdot u_3 |u_3|\zeta^2dxdt\\
&&\leq 6r^{-2}\int_{Q_{2r}}|U-U_{B_{2r}}|(|\nablabla u_3| |u_3|+|u_3|^2|\nablabla\zeta|)dxdt\\
&&\leq 6r^{-2}\Big(\int_{Q_{2r}}|U-U_{B_{2r}}|^6dxdt\Big)^{1/6}\Big(\int_{Q_{2r}}|\nablabla u_3|^2dxdt\Big)^{1/2}\Big(\int_{Q_{2r}}|u_3|^3dxdt\Big)^{1/3}\\
&&\quad+12r^{-3}\Big(\int_{Q_{2r}}|U-U_{B_{2r}}|^3dxdt\Big)^{1/3}\Big(\int_{Q_{2r}}|u_3|^3dxdt\Big)^{2/3},
\end{eqnarray}o
which implies that
\begin{eqnarray}\label{eq:4.10}
C(u_3,r)\leq
C(M)\big(E(u,2r)^{1/2}C(u,2r)^{1/3}+ C(u,2r)^{2/3}\big).
\end{eqnarray}
On the other hand, we have by Lemma \ref{lem:local bound} that
\begin{eqnarray*}
A(u_h,r)\leq C(M)\quad \textrm{for any } 0<r<1,
\end{eqnarray*}
which along with the interpolation
inequality gives
\begin{eqnarray}\label{eq:4.11}
C(u_h,r)\leq A(u,r)^{3/4}\big(E(u,r)+A(u,r)\big)^{3/4}\leq
C(M)\big(E(u,r)+A(u,r)\big)^{3/4}.
\end{eqnarray}
We infer from (\ref{eq:4.10}) and (\ref{eq:4.11}) that
\begin{eqnarray*}
C(u,r)\leq C(M)\big(E(u,2r)+A(u,2r)+C(u,2r)\big)^{5/6}.
\end{eqnarray*}
With this, following the proof of Lemma \ref{lem:invariant}, we conclude the result. \hphantom{MM}
\llap{$\square$}\goodbreak
\noindent{\bf Proof of Theorem \ref{thm:main2}.}\,
Following \cite{ESS}, the proof is based on blow-up analysis and the backward uniqueness and unique continuation theorems.
Without loss of generality, assume that $(0,0)$ is a singular point.
Then by Theorem \ref{thm:interior-u}, there exists a sequence of $r_k\downarrow 0$ such that
\begin{eqnarray}\label{eq:4.2}
r_k^{-2}\int_{Q_{r_k}}|u_h|^3dxdt\geq \varepsilon_1.
\end{eqnarray}
Let $u^k(x,t)=r_ku(r_kx,r_k^2t)$ and $\pi^k(x,t)=r_k^2\pi(r_kx,r_k^2t)$. Then
for any $a>0$ and $k$ large enough, it follows from Lemma \ref{lem:energy} that
\begin{eqnarray*}
A(u^k,a)+E(u^k,a)+C(u^k,a)+D(\pi^k,a)\leq C(M,D(\pi,1)).
\end{eqnarray*}
Using the Lions-Aubin lemma, there exists $(v,\pi')$ such that for any $a,T>0$ (up to a subsequence)
\begin{eqnarray*}
&&u^k\rightarrow v \quad {\rm in} \quad L^3(B_{a}\times (-T,0)),\\
&&u^k\rightarrow v \quad {\rm in} \quad C([-T,0];L^{9/8}(B_{a})),\\
&&\pi^k\rightharpoonup \pi' \quad {\rm in} \quad L^\f 32(-T,0;L^\infty(B_a)),
\end{eqnarray*}
as $k\rightarrow+\infty$ (see the proof of Theorem 4.1 in \cite{WZ} for the details). Furthermore, there hold
\begin{eqnarray}\label{eq:4.15}
\|v_h\|_{L^{\infty}(-a^2,0; L^{3,\ell}(\Bbb R^3))}\leq \sup_{k}\|u_h^k\|_{L^{\infty}(-a^2,0; L^{3,\ell}(\Bbb R^3))}\leq M,
\end{eqnarray}
and for any $z_0=(x_0,t_0)\in (-T+1,0)\times \Bbb R^3$,
\begin{eqnarray}\label{eq:4.16}
A(v,1;z_0)+E(v,1;z_0)+C(v,1;z_0)+D(\pi',1;z_0)\leq C(M,D(\pi,1)).
\end{eqnarray}
Due to (\ref{eq:4.15}) and (\ref{eq:lorentz}), we infer that
\begin{eqnarray*}
\int_{Q_1(z_0)}|v_h|^2dxdt\rightarrow0, \quad {\rm as}\,\, z_0\rightarrow\infty,
\end{eqnarray*}
which along with (\ref{eq:4.16}) implies that
\begin{eqnarray*}
\int_{Q_1(z_0)}|v_h|^3dxdt\rightarrow0, \quad {\rm as}\,\, z_0\rightarrow\infty.
\end{eqnarray*}
Hence by Theorem \ref{thm:interior-uball}, there exists $R>0$ such that
\begin{eqnarray*}
|v(x,t)|+|\nabla v(x,t)|\leq C, \quad (t,x)\in (-T+1,0)\times \Bbb R^3\backslash{B_R}.
\end{eqnarray*}
Due to $u_h(x,0)\in L^{3,\ell}$, we infer that
\begin{eqnarray*}
\int_{B_a}|v_h(x,0)| dx
&\leq& \int_{B_a}|v_h(x,0)-u_h^k(x,0)|dx+\int_{B_a}|u_h^k(x,0)| dx\\
&\leq&\int_{B_a}|v_h(x,0)-u_h^k(x,0)|dx+r_k^{-2}\int_{B_{ar_k}}|u_h(y,0)|dy\\
&\le&\int_{B_a}|v_h(x,0)-u_h^k(x,0)|dx+C\|u_h(0)\|_{L^{3,\ell}(B_{ar_k})}\\
&\longrightarrow& 0,\quad\textrm{ as } k\rightarrow\infty,
\end{eqnarray*}
which implies $v_h(x,0)=0$ a.e. in $\Bbb R^3$. And since $u_3(x,0)\in VMO^{-1}(\Bbb R^3)$, we have $v_3(x,0)=0$
(see Theorem 4.1 in \cite{WZ}).
Let $w=\nablabla\times v$, then $w(x,0)=0$ and
$$|\partialartial_t w-\triangle w|\leq C(|w|+|\nablabla w|),\quad (-T+1,0)\times \Bbb R^3\backslash{B_R}. $$
By the backward uniqueness property of parabolic operator \cite{ESS}, we have
$w=0$ in $(-T+1,0)\times \Bbb R^3\backslash{B_R}.$ Similar arguments as in \cite{ESS}, using spacial unique continuation
we have $w\equiv 0$ in $(-T+1,0)\times \Bbb R^3$, which implies $\triangle v\equiv 0$ in $(-T+1,0)\times \Bbb R^3$,
hence $v_h\equiv 0$ in $(-T+1,0)\times \Bbb R^3$,
since $v_h(\cdot,t)\in L^{3,\ell}$. This is a contradiction to (\ref{eq:4.2}).\hphantom{MM}
\llap{$\square$}\goodbreak
\section{The number of singular points}
\subsection{An improved $\varepsilon$-regularity criterion}
We need the following improved version, which may be of independent interest.
\begin{Proposition}\label{prop:small regularity-new}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1(z_0)$.
There exists an $\varepsilon_6>0$ such that if
\begin{eqnarray*}
&&\sup_{t\in [-1+t_0,t_0]}\int_{B_1(x_0)}|u(x,t)|^2dx+\int_{-1+t_0}^{t_0}\big(\int_{B_1(x_0)}|u(x,t)|^4dx\big)^{\f12}dt\\
&&\quad+\int_{-1+t_0}^{t_0}\big(\int_{B_1(x_0)}|\pi(x,t)|^2dx\big)^{\f12}dt\leq \varepsilon_6,
\end{eqnarray*}
then $u$ is regular in $Q_{\f1 2}(z_0)$.
\end{Proposition}
\begin{Remark}
Due to Lemma \ref{lem:pressure}, the above norm of the pressure can be replaced by the $L^1(Q_1(z_0))$ norm.
A slightly different version of Proposition \ref{prop:small regularity-new} was obtained by Vasseur \cite{Vasseur},
who used the De Giorgi iterative method.
\end{Remark}
\noindent{\bf Proof.}\,By Proposition \ref{prop:small regularity-GKT} and translation invariance, it suffices to prove that
\begin{eqnarray}\label{eq:regular}
A(u,r)+E(u,r)\leq \varepsilon_6^\f12\le \varepsilon_1^2
\end{eqnarray}
for any $0<r<1/2$. Set $r_n=2^{-n}$, where $n=1,2,\cdots$.
First of all, (\ref{eq:regular}) holds for $r=r_1$ by the local energy inequality.
Suppose that (\ref{eq:regular}) holds for $r_k$ with $k\leq n-1$. We need to show that
\begin{eqnarray}\label{eq:regular2}
A(u,r_n)+E(u,r_n)\leq \varepsilon_6^\f12.
\end{eqnarray}
Let $\phi_n=\chi\psi_n$, where $\chi$ is a cutoff function which equals $1$ in $Q_{1/4}$ and vanishes outside of $Q_{1/3}$,
and $\psi_n$ is given by
\begin{eqnarray*}
\psi_n=(r_n^2-t)^{-3/2}e^{-\frac{|x|^2}{4(r_n^2-t)}}.
\end{eqnarray*}
Direct computations show that $\phi_n\geq0$ and
$$(\partial_t+\triangle)\phi_n=0\quad \textrm{in}\quad Q_{1/4},$$
$$|(\partial_t+\triangle)\phi_n|\leq C_1\quad \textrm{in} \quad Q_{1/3},$$
$$C_1^{-1}r_n^{-3}\leq \phi_n\leq C_1 r_n^{-3},\quad |\nabla\phi_n|\leq C_1r_n^{-4}\quad \textrm{on}\quad Q_{r_n} \quad (n\geq 2),$$
$$\phi_n\leq C_1r_k^{-3},\quad |\nabla\phi_n|\leq C_1r_k^{-4}\quad \textrm{on}\quad Q_{r_{k-1}}\backslash Q_{r_k} \quad (1<k\leq n).$$
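For the reader's convenience, the first identity can be checked directly: writing $s=r_n^2-t$ (so that $s>0$ on $Q_{1/4}$ for $n\geq 2$), one computes

```latex
\partial_t\psi_n=\Big(\frac{3}{2s}-\frac{|x|^2}{4s^2}\Big)s^{-3/2}e^{-\frac{|x|^2}{4s}},
\qquad
\triangle\psi_n=\Big(\frac{|x|^2}{4s^2}-\frac{3}{2s}\Big)s^{-3/2}e^{-\frac{|x|^2}{4s}},
```

so $(\partial_t+\triangle)\psi_n=0$ wherever $s>0$; since $\chi\equiv 1$ in $Q_{1/4}$, the same identity holds there for $\phi_n=\chi\psi_n$.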
Using $\phi_n$ as a test function in the local energy inequality, we get
\begin{eqnarray*}
&&\sup_{-r_n^2<t<0}r_n^{-1}\int_{B_{r_n}}|u(x,t)|^2 dx+r_n^{-1}\int_{Q_{r_n}}|\nabla u|^2dxdt\\
&&\leq C_1^2r_n^2\int_{Q_1}|u|^2dxdt+C_1r_n^2\int_{Q_1}|u|^3|\nabla\phi_n|dxdt+C_1r_n^2\big|\int_{Q_1}\pi(u\cdot\nabla\phi_n)dxdt\big|\\
&&\buildrel\hbox{\footnotesize def}\over = I_1+I_2+I_3.
\end{eqnarray*}
Firstly, we have by assumption that
\begin{eqnarray*}
I_1\le C_1^2 r_n^2\varepsilon_6.
\end{eqnarray*}
Recall the following well-known interpolation inequality from \cite{CKN}: for $\rho\ge r>0$,
\begin{eqnarray*}
C(u,r)\leq C(\frac{\rho}{r})^3A(u,\rho)^{3/4}E(u,\rho)^{3/4}+C(\frac{r}{\rho})^{3}A(u,\rho)^{3/2},
\end{eqnarray*}
which, together with the induction assumption, yields
\begin{eqnarray*}
I_2 &\leq& C_1^2r_n^2\sum_{k=1}^nr_k^{-4}\int_{Q_{r_k}}|u|^3dxdt\\
&\leq & Cr_n^2\sum_{k=1}^nr_k^{-2}\varepsilon_6^{3/4}
\leq C\varepsilon_6^{3/4}.
\end{eqnarray*}
To estimate $I_3$, we choose $\chi_k$ to be a cutoff function which vanishes
outside of $Q_{r_k}$, equals $1$ in $Q_{\frac78 r_k}$, and satisfies $|\nabla\chi_k|\leq Cr_k^{-1}$.
We have by the induction assumption that
\begin{eqnarray*}
I_3 &\leq & C_1r_n^2\sum_{k=1}^{n-1}\big|\int_{Q_{1}}\pi(u\cdot\nabla((\chi_k-\chi_{k+1})\phi_n))dxdt\big|
+C_1r_n^2\big|\int_{Q_{1}}\pi u\cdot\nabla(\chi_n\phi_n)dxdt\big|\\
&\leq & C_1r_n^2\sum_{k=1}^{n-1}\big|\int_{Q_{1}}(\pi-(\pi)_{B_k})u\cdot\nabla((\chi_k-\chi_{k+1})\phi_n)dxdt\big|\\
&&+C_1r_n^2\big|\int_{Q_{1}}(\pi-(\pi)_{B_n})u\cdot\nabla(\chi_n\phi_n)dxdt\big|\\
&\leq& Cr_n^2\sum_{k=3}^{n}r_k^{-4}\int_{Q_{r_k}}|(\pi-(\pi)_{B_k})u|dxdt+Cr_n^2\int_{Q_1}|u||\pi|dxdt\\
&\leq& Cr_n^2\sum_{k=3}^{n}r_k^{-2}\varepsilon_6^{1/4}\widetilde{H}(\pi,2,1;r_k)+C\varepsilon_6^{3/2},
\end{eqnarray*}
and by Lemma \ref{lem:pressure} and the interpolation inequality, we get
\begin{eqnarray*}
\widetilde{H}(\pi,2,1;\theta^j)&\leq& C\theta\widetilde{H}(\pi,1,1;\theta^{j-1})+C\theta^{-\frac32}G(u,4,2;\theta^{j-1})^2\\
&\leq& (C\theta)^j\widetilde{H}(\pi,1,1;1)+C\theta^{-\frac32}\sum_{\ell=1}^j(C\theta)^{\ell-1}G(u,4,2;\theta^{j-\ell})^2\\
&\leq &(C\theta)^j\varepsilon_6+C\theta^{-\frac32}\sum_{\ell=1}^j(C\theta)^{\ell-1}\varepsilon_6^\f12\\
&\leq & C\varepsilon_6^\f12,
\end{eqnarray*}
where we take $\theta$ such that $C\theta<\f12$ and $j$ satisfies $\theta^j\geq r_{n}$.
This gives
\begin{eqnarray*}
I_3\leq C\varepsilon_6^{3/4}.
\end{eqnarray*}
Summing up the estimates for $I_1$--$I_3$ and taking $\varepsilon_6$ small enough, we conclude (\ref{eq:regular2}).
\llap{$\square$}\goodbreak
\subsection{The number of singular points}
\begin{Theorem}\label{thm:singular point}
Let $u$ be a Leray-Hopf weak solution in $\Bbb R^3\times(-1,0)$ and satisfy
\begin{eqnarray}\label{eq:6.3}
\|u\|_{L^{q,\infty}(-1,0; L^{p}(\Bbb R^3))}=M<\infty,
\end{eqnarray}
where $\frac3p+\frac2q=1$, $3<p<\infty$.
Then the number of singular points of $u$ at any time $t\in (-1,0]$ is finite, and this number depends only on $M$.
\end{Theorem}
\begin{Remark}
The case of $(p,q)=(3,\infty)$ has been proved by Neustupa \cite{Ne} and Seregin \cite{Seregin-CPAM}.
In fact, the solution is regular in this case \cite{ESS}.
A special case satisfying (\ref{eq:6.3}) is
\begin{eqnarray}o
\|u(t)\|_{L^p(\Bbb R^3)}\leq M(-t)^{\frac{3-p}{2p}}.
\end{eqnarray}o
Note that the solution is regular if $M$ is small, which was proved by Leray \cite{Leray}.
\end{Remark}
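To see that the special case above indeed satisfies (\ref{eq:6.3}), note that $\frac3p+\frac2q=1$ gives $q=\frac{2p}{p-3}$, hence $\frac{3-p}{2p}=-\frac1q$ and the bound reads $\|u(t)\|_{L^p}\leq M(-t)^{-1/q}$, so that for every $\lambda>0$,

```latex
\lambda^q\,\big|\{t\in(-1,0):\|u(t)\|_{L^p(\Bbb R^3)}>\lambda\}\big|
\leq \lambda^q\,\big|\{t\in(-1,0):M(-t)^{-1/q}>\lambda\}\big|
\leq \lambda^q\,(M/\lambda)^{q}=M^q,
```

that is, $\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}\leq M$.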
\begin{Lemma}\label{lem:regular2}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$ and satisfy
\begin{eqnarray}\label{eq:6.4}
\|u\|_{L^{q,\infty}(-1,0; L^{p}(B_1))}<M,
\end{eqnarray}
where $\frac3p+\frac2q=1$ and $3<p<\infty$.
There exists $\varepsilon_7>0$ depending on $M, C(u,1), D(\pi,1)$ such that $u$ is regular at $(0,0)$ if
\begin{eqnarray}\label{eq:6.5}
\|u\|^2_{L_t^{q_0}L_x^p(Q_1)}+\|\pi\|_{L_t^{q_0/2}L_x^{p/2}(Q_1)}\leq \varepsilon_7,
\end{eqnarray}
where $q_0=3$ for $3<p<9$ and $q_0=\frac{q+2}{2}$ for $p\geq 9$.
\end{Lemma}
\noindent{\bf Proof.}
For $p\in (3,9)$, the result follows from Proposition \ref{prop:small regularity-CKN} and (\ref{eq:6.5}).
Now we assume $p\geq 9$.
Similarly to the proof of Lemma \ref{lem:local bound}, we can infer from (\ref{eq:6.4}) that
\begin{eqnarray}\label{eq:6.6}
G(u,p,q_0;r)\leq C(M) \quad\textrm{ for any }0<r<1,
\end{eqnarray}
which along with Lemma \ref{lem:invariant} gives
\begin{eqnarray}\label{eq:16}
A(u,r)+E(u,r)+D(\pi,r)\leq C(M, C(u,1),D(\pi,1))\quad\textrm{ for any } 0<r<1/2.
\end{eqnarray}
Since $p\ge 9$ and hence $q_0>2$, H\"{o}lder's inequality gives
\begin{eqnarray}\label{eq:5.20}
G(u,4,2;r)\le CG(u,p,q_0;r).
\end{eqnarray}
Let $\zeta$ be a cutoff function which vanishes
outside of $Q_{\rho}$, equals $1$ in $Q_{\rho/2}$, and satisfies
$$|\nabla\zeta|\leq C_1\rho^{-1},\quad |\partial_t\zeta|,|\triangle\zeta|\leq C_1\rho^{-2}.$$
Define the backward heat kernel as
$$\Gamma(x,t)=\frac{1}{4\pi(r^2-t)^{3/2}}e^{-\frac{|x|^2}{4(r^2-t)}}.$$
Taking the test function $\phi=\Gamma\zeta$ in the local energy inequality, and noting $(\partial_t+\triangle)\Gamma=0$,
we obtain
\begin{eqnarray*}
\sup_t\int_{B_{\rho}}|u|^2\phi dx+\int_{Q_{\rho}}|\nabla u|^2\phi dxdt
\leq \int_{Q_{\rho}}\big(|u|^2(\triangle\phi+\partial_t\phi)+u\cdot\nabla\phi(|u|^2+\pi)\big)dxdt.
\end{eqnarray*}
This implies that
\begin{eqnarray*}
A(u,r)+E(u,r)\leq C(\frac{r}{\rho})^2\Big(\rho^{-3}\int_{Q_{\rho}}|u|^2dxdt+C(u,\rho)+
\rho^{-2}\int_{Q_{\rho}}|u||\pi-(\pi)_{B_{\rho}}|dxdt\Big).
\end{eqnarray*}
By (\ref{eq:6.6}) and (\ref{eq:5.20}), we have
$$C(u,\rho)\leq A(u,\rho)^{1/2}G(u,4,2;\rho)^2\leq C(M)A(u,\rho)^{1/2}.$$
Moreover, we get by Lemma \ref{lem:pressure} that
\begin{eqnarray*}
\widetilde{H}(\pi,2,1;r)&\leq& C(\frac{\rho}{r})^{\frac{3}{2}} G(u,4,2;\rho)^2+C (\frac{r}{\rho})\widetilde{H}(\pi,1,1;\rho)\\
&\leq& C(M)(\frac{\rho}{r})^{\frac{3}{2}} +C(\frac{r}{\rho})\widetilde{H}(\pi,1,1;\rho),
\end{eqnarray*}
which gives by a standard iteration that
\begin{eqnarray*}
\widetilde{H}(\pi,2,1;r)\leq C(M)\quad\textrm{for } 0<r<1/2.
\end{eqnarray*}
Hence, we have
$$
\rho^{-2}\int_{Q_{\rho}}|u||\pi-(\pi)_{B_{\rho}}|dxdt\leq C A(u,\rho)^{1/2}\widetilde{H}(\pi,2,1;\rho)
\le C(M)A(u,\rho)^{1/2}.
$$
Let $F(r)=A(u,r)+E(u,r)+\widetilde{H}(\pi,2,1;r)^2$. Then we conclude that
\begin{eqnarray}\label{eq:17}
F(r)\leq C (\frac{r}{\rho})^2F(\rho)+C(M)(\frac{r}{\rho})^2+C(\frac{\rho}{r})^{3} G(u,4,2;\rho)^4.
\end{eqnarray}
Letting $\rho=1$, taking $r$ small and then $\varepsilon_7$ small,
we infer from (\ref{eq:17}), (\ref{eq:16}) and Proposition \ref{prop:small regularity-new} that $(0,0)$ is a regular point.
The proof is completed.\hphantom{MM}
\llap{$\square$}\goodbreak
Now we are in a position to prove Theorem \ref{thm:singular point}.
\noindent\textbf{Proof of Theorem \ref{thm:singular point}}.
Let $z_1=(x_1, t_0),\cdots,z_K=(x_K,t_0)$ denote the singular points of the solution at $t=t_0$.
Then Lemma \ref{lem:regular2} implies that at every singular point we have
\begin{eqnarray*}
G(u,p,q_0;r,z_l)^2+H(\pi,p/2,q_0/2;r,z_l)> \varepsilon_7\quad \textrm{for any }0<r<1,
\end{eqnarray*}
where $l=1,\cdots,K$. We choose $r_0>0$ small enough such that $B_r(x_i)\cap B_r(x_j)=\emptyset$ for $i\neq j$ and all $0<r\leq r_0$.
Taking $r=\theta^kr_0$ and $\rho=\theta^{k-1}r_0$ in (\ref{eq:pressure1}), we find
\begin{eqnarray*}
&&H(\pi,p/2,q_0/2;\theta^kr_0)^{q_0/2}\\
&&\leq C\theta^{q_0-2}H(\pi,p/2,q_0/2;\theta^{k-1}r_0)^{q_0/2}+C\theta^{-2-\frac{3q_0}p+q_0} G(u,p,q_0;\theta^{k-1}r_0)^{q_0}\\
&&\leq (C\theta^{q_0-2})^kH(\pi,p/2,q_0/2;r_0)^{q_0/2}
+C\theta^{-2-\frac{3q_0}p+{q_0}}\sum_{i=0}^{k-1}(C\theta^{q_0-2})^{k-i-1}G(u,p,q_0;\theta^{i}r_0)^{q_0}.
\end{eqnarray*}
Now, for $\frac{q_0}{p}\geq 1$, noting that
$$
\sum_{l=1}^Ka_l^{\f {q_0} p}\le \big(\sum_{l=1}^Ka_l\big)^{\f {q_0} p},\quad a_l\ge 0,
$$
we deduce by (\ref{eq:lorentz}) with $R=r^{-2/q}\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}$ that
\begin{eqnarray*}
\varepsilon_7^{\f {q_0} 2}K&\leq& C\sum_{l=1}^K\big(G(u,p,q_0;r,z_l)^{q_0}+H(\pi,p/2,q_0/2;r,z_l)^{q_0/2}\big)\\
&\le& Cr^{\al}\int_{t_0-r^2}^{t_0}\big(\int_{\Om}|u|^pdx\big)^{q_0/p}dt+C(C\theta^{q_0-2})^kr^{\be}\int_{t_0-r^2}^{t_0}\big(\int_{\Om}|\pi|^{\f p2}dx\big)^{q_0/p}dt\\
&&\quad+C\theta^{-2-\frac{3q_0}p+ {q_0} }\sum_{i=0}^{k-1}(C\theta^{q_0-2})^{k-i-1}r^{\al}\int_{t_0-r^2}^{t_0}\big(\int_{\Om}|u|^pdx\big)^{q_0/p}dt\\
&\leq& C\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}^{q_0}+C(C\theta^{q_0-2})^k\|\pi\|_{L^{q/2,\infty}(-1,0;L^{p/2}(\Om))}^{q_0/2},
\end{eqnarray*}
where $\Om=\cup_{l=1}^{K}B_{r_0}(x_l)$, $\al=(1-\f 3 p-\f 2{q_0})q_0$, $\be=(2-\f 3 p-\f 2{q_0})$,
and $\theta$ is chosen such that $C\theta^{2-\frac{4}{q_0}}<\frac12$.
Letting $k\rightarrow\infty$, we infer that
$$K\leq C\varepsilon_7^{-\f {q_0}2}\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}^{q_0}.$$
Similarly, for $\frac{q_0}{p}<1$, noting that
$$
\sum_{l=1}^Ka_l^{\f {q_0} p}\le K^{1-\f {q_0} p}\big(\sum_{l=1}^Ka_l\big)^{\f {q_0} p},\quad a_l\ge 0,
$$
we infer that
\begin{eqnarray*}
\varepsilon_7^{\f {q_0} 2}K
&\leq& C\sum_{l=1}^K\Big(r^{\al}\int_{t_0-r^2}^{t_0}\big(\int_{B_r(x_l)}|u|^pdx\big)^{q_0/p}dt+(C\theta^{q_0-2})^kr^{\be}\int_{t_0-r^2}^{t_0}\big(\int_{B_r(x_l)}|\pi|^{\f p2}dx\big)^{q_0/p}dt\\
&&+C\theta^{q_0-\frac{3q_0}p-2}\sum_{i=0}^{k-1}(C\theta^{q_0-2})^{k-i-1}r^{\al}\int_{t_0-r^2}^{t_0}\big(\int_{B_r(x_l)}|u|^pdx\big)^{q_0/p}dt\Big)\\
&\leq& CK^{1-\f{q_0}p}\Big(r^{\al}\int_{t_0-r^2}^{t_0}\big(\int_{\Om}|u|^pdx\big)^{q_0/p}dt+(C\theta^{q_0-2})^kr^{\be}\int_{t_0-r^2}^{t_0}\big(\int_{\Om}|\pi|^{p/2}dx\big)^{q_0/p}dt\Big)\\
&\le& CK^{1-\f{q_0}p}\Big(\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}^{q_0}
+(C\theta^{q_0-2})^k\|\pi\|_{L^{q/2,\infty}(-1,0;L^{p/2}(\Om))}^{q_0/2}\Big).
\end{eqnarray*}
Letting $k\rightarrow\infty$, we get
$$K\leq C\varepsilon_7^{-\f p 2}\|u\|_{L^{q,\infty}(-1,0;L^p(\Bbb R^3))}^{p}.$$
The proof is completed.\hphantom{MM}
\llap{$\square$}\goodbreak
\section{Appendix}
We first present some estimates of the pressure in terms of some scaling invariant
quantities.
\begin{Lemma}\label{lem:pressure}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$. Then there hold
\begin{eqnarray*}
&&H(\pi,2,1;r)\leq C(\frac{\rho}{r})^{\frac32} G(u,4,2;\rho)^2+CH(\pi,1,1;\rho),\\
&&\widetilde{H}(\pi,2,1;r)\leq C(\frac{\rho}{r})^{\f32}G(u,4,2;\rho)^2+C(\frac{r}{\rho})\widetilde{H}(\pi,1,1;\rho),
\end{eqnarray*}
for any $0<4r<\rho<1$. Here $C$ is a constant independent of $r$ and $\rho$.
\end{Lemma}
\noindent{\bf Proof.} We write $\pi=\pi_1+\pi_2$ with $\pi_1$ satisfying
\begin{eqnarray*}
\triangle \pi_1=-\partial_i\partial_j(u_iu_j\zeta),
\end{eqnarray*}
where $\zeta$ is a cut-off function which equals $1$ in $B_{\rho/2}$ and vanishes outside of $B_{\rho}$. Hence,
\begin{eqnarray*}
\triangle \pi_2=0\quad {\rm in}\quad B_{\rho/2}.
\end{eqnarray*}
By the Calder\'{o}n-Zygmund inequality, we have
\begin{eqnarray*}
\int_{B_{\rho}}|\pi_1|^2dx\leq C\int_{B_{\rho}}|u|^4dx,
\end{eqnarray*}
and using the properties of harmonic functions, for $r<\rho/4$,
\begin{eqnarray*}
&&\sup_{x\in B_r}|\pi_2|\leq C\rho^{-3}\int_{B_{\rho/4}}|\pi_2|dx,\\
&&\sup_{x\in B_r}|\pi_2-(\pi_2)_{B_r}|\leq Cr\sup_{x\in B_{\rho/4}}|\nabla \pi_2|
\leq C(\frac{r}{\rho})\rho^{-3}\int_{B_{\rho}}|\pi_2-(\pi_2)_{B_{\rho}}|dx.
\end{eqnarray*}
Then it follows that for $0<r<\rho/4$,
\begin{eqnarray*}
\int_{B_{r}}|\pi|^2dx
&\leq&\int_{B_{r}}|\pi_1|^2dx+\int_{B_{r}}|\pi_2|^2dx\\
&\leq& C\int_{B_{\rho}}|u|^4dx+Cr^3\rho^{-6}\Big(\int_{B_{\rho}}|\pi|dx\Big)^2,
\end{eqnarray*}
and
\begin{eqnarray*}
\int_{B_{r}}|\pi-(\pi)_{B_r}|^2dx
&\leq&\int_{B_{r}}|\pi_1-(\pi_1)_{B_r}|^2dx+\int_{B_{r}}|\pi_2-(\pi_2)_{B_r}|^2dx\\
&\leq&C\int_{B_{\rho}}|u|^4dx+Cr^5\rho^{-8}\Big(\int_{B_{\rho}}|\pi-(\pi)_{B_{\rho}}|dx\Big)^2.
\end{eqnarray*}
Integrating with respect to $t$, we get
\begin{eqnarray*}
\int_{-r^2}^0\Big(\int_{B_{r}}|\pi|^2dx\Big)^{\frac{1}{2}}dt
\leq C\int_{-\rho^2}^0\Big(\int_{B_{\rho}}|u|^4dx\Big)^{\frac{1}{2}}dt
+Cr^{\frac32}\rho^{-3}\int_{-\rho^2}^0\int_{B_{\rho}}|\pi|dxdt,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\int_{-r^2}^0\Big(\int_{B_{r}}|\pi-(\pi)_{B_r}|^2dx\Big)^{\frac{1}{2}}dt\\
&&\leq C\int_{-\rho^2}^0\Big(\int_{B_{\rho}}|u|^4dx\Big)^{\frac{1}{2}}dt
+Cr^{\frac52}\rho^{-4}\int_{-\rho^2}^0\int_{B_{\rho}}|\pi-(\pi)_{B_{\rho}}|dxdt.
\end{eqnarray*}
The proof is completed.\hphantom{MM}
\llap{$\square$}\goodbreak
The same proof also yields that for any $0<4r<\rho<1$,
\begin{eqnarray}\label{eq:pressure1}
H(\pi,p/2,q/2;r)\leq C\big(\frac{\rho}{r}\big)^{\frac{4}{q}+\frac6p-2}G(u,p,q;\rho)^2
+C\big(\frac{r}{\rho}\big)^{2-\frac{4}{q}}H(\pi,1,q/2;\rho),
\end{eqnarray}
where $p>2$ and $q\ge 2$.
Similarly, one can show that (see also \cite{Seregin-JMS})
\begin{eqnarray}\label{eq:pressure}
\widetilde{D}(\pi,r)\leq C\big((\frac{r}{\rho})^{5/2}\widetilde{D}(\pi,\rho)+(\frac{\rho}{r})^{2}C(u,\rho)\big),
\end{eqnarray}
for any $0<4r<\rho<1.$
The following lemma gives a bound on the local scaling-invariant energy; see also \cite{GKT} and \cite{ZS}.
\begin{Lemma}\label{lem:invariant}
Let $(u,\pi)$ be a suitable weak solution of (\ref{eq:NS}) in $Q_1$. If
\begin{eqnarray*}
&&G(u,p,q;r)\leq M\quad\textrm{ with }\quad 1\le\frac3p+\frac2q<2,\ 1<q\leq \infty,\quad \textrm{or}\\
&&H(\nabla u, p, q;r)\leq M\quad\textrm{ with }\quad 2\le \frac3{p}+\frac2{q}<3,\ 1<p\le\infty,
\end{eqnarray*}
for any $0<r<1$, then there holds for $0<r<1/2$
\begin{eqnarray*}
A(u,r)+E(u,r)+D(\pi,r)\leq C(p,q,M)\big(r^{1/2}\big(C(u,1)+D(\pi,1)\big)+1\big).
\end{eqnarray*}
\end{Lemma}
\noindent{\bf Proof.}\,First of all, we assume that $G(u,p,q;r)\leq M$ and, moreover,
\begin{eqnarray*}
\f32<\frac3p+\frac2q<2,\quad \frac3p+\frac3q\geq 2,\quad \frac4p+\frac2q\geq 2,\quad p,q<\infty.
\end{eqnarray*}
Otherwise, we can choose $(p_1,q_1)$ satisfying the above conditions so that, by H\"{o}lder's inequality,
\begin{eqnarray*}
G(u,p_1,q_1;r)\le CG(u,p,q;r).
\end{eqnarray*}
By H\"{o}lder inequality and Sobolev inequality, we get
\begin{eqnarray}o
\int_{B_r}|u|^3dx &= &\int_{B_r}|u|^{3\alpha+3\beta+3-3\alpha-3\beta}dx\\
&\leq & \big(\int_{B_r}|u|^2dx\big)^{3\alpha/2}\big(\int_{B_r}|u|^6dx\big)^{\beta/2}
\big(\int_{B_r}|u|^pdx\big)^{{(3-3\alpha-3\beta)}/p}\\
&\leq & C\big(\int_{B_r}|u|^2dx\big)^{3\alpha/2}\big(\int_{B_r}|\nablabla u|^2+|u|^2dx\big)^{3\beta/2}
\big(\int_{B_r}|u|^pdx\big)^{{(3-3\alpha-3\beta)}/p},
\end{eqnarray}o
where $\alpha, \beta\geq 0$ are chosen such that
\begin{eqnarray}o
\frac13=\frac{\alpha}{2}+\frac{\beta}{6}+\frac{1-\alpha-\beta}{p},\quad
1=\frac{3\beta}{2}+\frac{3-3\alpha-3\beta}{q}.
\end{eqnarray}o
That is,
\begin{eqnarray}o
\alpha=\frac{2(\frac3p+\frac3q-2)}{3(\frac6p+\frac4q-3)},\quad \beta=\frac{\frac4p+\frac2q-2}{\frac6p+\frac4q-3}.
\end{eqnarray}o
Integrating with respect to time, we get
\begin{eqnarray}o
\int_{Q_r}|u|^3dxdt
&\leq& C\big(\sup_{-r^2<t<0}\int_{B_r}|u|^2dx\big)^{\frac{3\alpha}{2}}
\big(\int_{Q_r}|\nablabla u|^2+|u|^2dxdt\big)^{\f {3\beta}{2}}\\
&&\quad\times \Big(\int_{-r^2}^0\big(\int_{B_r}|u|^pdx\big)^{\f q p}dt\Big)^{\f {3-3\alpha-3\beta} q},
\end{eqnarray}o
this means that
\begin{eqnarray}o
C(u,r)\leq C\big(A(u,r)+E(u,r)\big)^{\f {3\alpha+3\beta}{2}}G(u,p,q;r)^{3-3\alpha-3\beta}.
\end{eqnarray}o
Set $\frac3p+\frac2q=2-\delta$ with $0\leq\delta<1/2$.
Then $\f {3\alpha+3\beta}{2}=\frac32-\frac{1}{2(\frac6p+\frac4q-3)}=\frac{1-3\delta}{1-2\delta}$ and
\begin{eqnarray}\label{eq:7.9}
C(u,r)\leq C\big(A(u,r)+E(u,r)\big)^{\frac{1-3\delta}{1-2\delta}}G(u,p,q;r)^{\frac{1}{1-2\delta}}.
\end{eqnarray}
By the assumption, we get
\begin{eqnarray*}
C(u,r)\leq C(p,q,M)\big(A(u,r)+E(u,r)\big)^{\frac{1-3\delta}{1-2\delta}}.
\end{eqnarray*}
Using the local energy inequality and (\ref{eq:pressure1}), we deduce that
\begin{eqnarray*}
&&A(u,r)+E(u,r)\leq C\big(C(u,2r)^{2/3}+C(u,2r)+C(u,2r)^{1/3}D(\pi,2r)^{2/3}\big),\\
&&D(\pi,r)\leq C\big((\frac{r}{\rho})D(\pi,\rho)+(\frac{\rho}{r})^{2}C(u,\rho)\big)\quad \textrm{for}\quad 0<4r<\rho<1.
\end{eqnarray*}
Set $F(r)=A(u,r)+E(u,r)+D(\pi,r)$. It follows from the above three inequalities that
\begin{eqnarray*}
F(r)&\leq& C\big(1+C(u,2r)+D(\pi,2r)\big)\\
&\leq& C+C(\frac{r}{\rho})F(\rho)+
C(p,q,M)\big((\frac{\rho}{r})^{2}+(\frac{\rho}{r})^{\frac{1-3\delta}{1-2\delta}}\big)\big(A(u,\rho)+E(u,\rho)\big)^{\frac{1-3\delta}{1-2\delta}}\\
&\leq& C+C(\frac{r}{\rho})F(\rho)+C(p,q,M,\frac{\rho}{r})
\end{eqnarray*}
for $0<8r<\rho<1$. By a standard iteration and the local energy inequality, we deduce that
\begin{eqnarray*}
F(r)&\leq& C(p,q,M)\big(r^{1/2}(A(u,1/2)+E(u,1/2)+D(\pi,1))+1\big)\\
&\leq& C(p,q,M)\big(r^{1/2}(C(u,1)+D(\pi,1))+1\big).
\end{eqnarray*}
Now let us assume that $H(\nabla u, p, q;r)\leq M$ and $\f52<\frac3{p}+\frac2{q}<3$, $p<3$.
The general case can be reduced to this one as above. Similarly, we have
\begin{eqnarray*}
&&\int_{Q_r}|u-u_{B_r}|^3dxdt\\
&&\leq C\big(\sup_{-r^2<t<0}\int_{B_r}|u|^2dx\big)^{\frac{3\alpha}{2}}
\big(\int_{Q_r}|\nabla u|^2dxdt\big)^{\f {3\beta}{2}}\\
&&\quad\times\Big(\int_{-r^2}^0\big(\int_{B_r}|u-u_{B_r}|^{\frac{3p}{3-p}}dx\big)^{\f {q(3-p)}{3p} }dt\Big)^{\f {3-3\alpha-3\beta} {q}}\\
&&\leq C\big(\sup_{-r^2<t<0}\int_{B_r}|u|^2dx\big)^{\frac{3\alpha}{2}}
\big(\int_{Q_r}|\nabla u|^2dxdt\big)^{\f {3\beta}{2}}
\Big(\int_{-r^2}^0\big(\int_{B_r}|\nabla u|^{p}dx\big)^{\f {q}{p} }dt\Big)^{\f {3-3\alpha-3\beta} {q}},
\end{eqnarray*}
where $\alpha+\beta=1-\frac{1}{3(\frac{6}{p}+\frac4{q}-5)}$. Let $\frac{3}{p}+\frac2{q}=3-\delta_0$ with
$0\leq \delta_0<\frac12$. Then
\begin{eqnarray*}
\widetilde{C}(u,r)&\leq& C\big(A(u,r)+E(u,r)\big)^{\frac{1-3\delta_0}{1-2\delta_0}}H(\nabla u,p,q;r)^{\frac{1}{1-2\delta_0}}\\
&\leq& C(p,q,M)\big(A(u,r)+E(u,r)\big)^{\frac{1-3\delta_0}{1-2\delta_0}}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
C(u,r)\leq C\big((\frac{r}{\rho})C(u,\rho)+(\frac{\rho}{r})^{2}\widetilde{C}(u,\rho)\big),
\end{eqnarray*}
and
\begin{eqnarray*}
&&A(u,r)+E(u,r)\leq C\big(C(u,2r)^{2/3}+C(u,2r)+C(u,2r)^{1/3}D(\pi,2r)^{2/3}\big),\\
&&D(\pi,r)\leq C\big((\frac{r}{\rho})D(\pi,\rho)+(\frac{\rho}{r})^{2}\widetilde{C}(u,\rho)\big),
\end{eqnarray*}
for $0<4r<\rho<1$. Let $F(r)=A(u,r)+E(u,r)+C(u,r)+D(\pi,r)$. Then we have
\begin{eqnarray*}
F(r)&\leq& C\big(1+C(u,2r)+D(\pi,2r)\big)\\
&\leq& C\big(1+(\frac{r}{\rho})C(u,\rho)+(\frac{\rho}{r})^{2}\widetilde{C}(u,\rho)
+(\frac{r}{\rho})D(\pi,\rho)\big)\\
&\leq& C\big(1+(\frac{r}{\rho})F(\rho)+(\frac{\rho}{r})^{2}F(\rho)^{\frac{1-3\delta_0}{1-2\delta_0}}\big)\\
&\leq& C+C(\frac{r}{\rho})F(\rho)+C(p,q,M,\frac{\rho}{r}),
\end{eqnarray*}
which implies the required result.\hphantom{MM}
which implies the required result.\hphantom{MM}
\llap{$\square$}\goodbreak
\noindent {\bf Acknowledgments.}
The authors thank Professors Gang Tian and Liqun Zhang for helpful discussions.
Zhifei Zhang is partly supported by NSF of China under Grant 10990013 and 11071007.
\end{document}
\begin{document}
\title{$k$-Sum Decomposition of Strongly Unimodular Matrices}
\author{K. Papalamprou and L. Pitsoulis\thanks{work of this author was conducted at National Research University Higher School of Economics and supported by RSF grant 14-41-00039} \\
Department of Electrical and Computer Engineering \\
Aristotle University of Thessaloniki, Greece \\
\texttt{papalamprou@auth.gr, pitsouli@auth.gr} }
\maketitle
\begin{abstract}
Networks are frequently studied algebraically through matrices. In this work, we show that networks may be studied at a more abstract level using results from the theory of matroids, by establishing connections between networks and decomposition results of matroids. First, we present the implications
of the decomposition of regular matroids for networks and related classes of matrices, and secondly we show that strongly unimodular matrices are closed under $k$-sums for $k=1,2$, implying a decomposition into highly connected network-representing blocks, which are also shown to have a special structure.
\end{abstract}
\section{Introduction}
It is widely accepted that networks play an important role in many aspects of today's life. To name a few, social networks play an important role in relationships, job hunting, and marketing, while economic networks usually determine the sustainability and development of various organisations.
Moreover, the understanding of complex biological networks may be the key to answering important questions in the areas of medicine and biology. For many other applications, as well as for an extended overview of the approaches related to complex networks, the interested reader is referred to~\cite{EasKle:10, Jack:10,New:2010}.
Networks are naturally modelled as graphs, and results from graph theory have been employed to explore and attack problems in networks (see e.g.~\cite{Steen:10}). Graphs can be represented algebraically via matrices, and it is such a representation that has been extensively used in various problems concerning networks. In this work, we examine special classes of matrices that are related to networks and, furthermore, have important implications in optimisation. Our primary result shows that these matrices, and therefore the related networks, can be decomposed into highly connected blocks which represent networks with specific properties. To do so, we employ results from matroid decomposition theory and, to the best of our knowledge, this is among the few works using such tools to study complex networks. The purpose of this work is twofold: on the one hand, we would like to relate complex networks to optimisation problems via well-known classes of matrices, and on the other hand, to present decomposition results for such classes of matrices and discuss their implications for networks. From our viewpoint, the main implication is the possibility of finding a way to study complex networks by exploring the properties of the building blocks that arise from specific decompositions.
The organization of the paper is as follows.
In Section~\ref{sec_pre}, we provide the relevant theory and some preliminary results regarding matrices and matroids in order to
make this work more self-contained.
In Section~\ref{sec_ksm}, we focus on strongly unimodular matrices and show that they are closed under the $k$-sum operations $(k=1,2)$ and, based on that, how these matrices can be decomposed into smaller strongly unimodular matrices. The special structure of these smaller matrices is discussed in
Section~\ref{sec_3c}. In the last section, the final decomposition result is provided along with the description of the associated highly connected building blocks.
\section{Special Matrices and Matroids} \label{sec_pre}
\subsection{Network and Unimodular Matrices}
We assume that the reader is familiar with the basic notions of graph theory as they are presented in~\cite{Diestel:05}.
Totally unimodular (TU) matrices form an important class of matrices for integer and linear programming due to the integrality properties of
the associated polyhedron. A matrix $A$ is \emph{totally unimodular} if each square submatrix of $A$ has determinant $0,+1,$ or $-1$. The class of TU matrices has been studied extensively, and combinatorial characterisations of these matrices can be found in~\cite{NemWols:1988,Schrijver:86}. An important subclass of TU matrices is defined as follows.
A matrix $A$ is \emph{strongly unimodular} (SU) if: (i) $A$ is TU, and (ii) every matrix obtained from $A$ by setting a $\pm{1}$ entry to $0$ is also TU. Another well-known characterisation of SU matrices goes as follows:
a matrix is strongly unimodular if and only if every one of its nonsingular submatrices is triangular, where a triangular matrix is a square matrix whose entries below or above the main diagonal can be made zero by a permutation of rows or columns.
Strongly unimodular matrices have appeared several times in the literature \cite{Cora:87,CraLoPo:92,LoPo:89} since they were first introduced in \cite{CraHaIb:86}. Another subclass of TU matrices discussed in this paper is the class of network matrices. A \emph{network matrix} may be viewed as an edge-path matrix of a directed graph with respect to a particular spanning tree of the graph; results regarding network matrices can be found in~\cite{NemWols:1988,Schrijver:86}. Seymour has shown in~\cite{Seymour:1980} that network matrices and their transposes are the main building blocks of TU matrices. Moreover, in~\cite{PiPa:09}, it has been shown that the building blocks of TU matrices are matrices associated with bidirected graphs. In this paper we focus on SU matrices, which stand between the classes of network and TU matrices, and show the network structure of that class.
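As an illustration of the two definitions above (this snippet is ours, not part of the paper), total unimodularity can be checked by brute force over all square submatrices, and strong unimodularity by re-running that check after zeroing each nonzero entry in turn. The $3\times 3$ interval matrix below is TU but fails the SU test, while a matrix with at most two nonzeros per column passes it.

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_tu(A):
    """Totally unimodular: every square submatrix has determinant 0, +1 or -1."""
    m, n = len(A), len(A[0])
    return all(det([[A[i][j] for j in cols] for i in rows]) in (-1, 0, 1)
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))

def is_su(A):
    """Strongly unimodular: TU, and still TU after zeroing any single entry."""
    if not is_tu(A):
        return False
    for i in range(len(A)):
        for j in range(len(A[0])):
            if A[i][j] != 0:
                B = [row[:] for row in A]
                B[i][j] = 0
                if not is_tu(B):
                    return False
    return True

# A network-like matrix (<= 2 nonzeros per column) is SU, while the
# interval matrix below is TU but not SU: zeroing its middle entry
# produces a matrix whose determinant is -2.
print(is_su([[1, -1, 0], [0, 1, -1]]))           # True
print(is_tu([[1, 1, 0], [1, 1, 1], [0, 1, 1]]))  # True
print(is_su([[1, 1, 0], [1, 1, 1], [0, 1, 1]]))  # False
```

The brute-force check is exponential in the matrix size and is meant only to make the definitions concrete on small examples.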
\subsection{Matroid Theory}
The main reference for matroid theory is the book of Oxley~\cite{Oxley:06}.
\begin{definition} \label{def_ntef}
A matroid $M$ is an ordered pair $(E,\mathcal{I})$ of a finite set $E$ and a collection $\mathcal{I}$ of subsets of $E$ satisfying the following three conditions:
\begin{itemize}
\item [(I1)] $\emptyset \in{\mathcal{I}}$
\item[(I2)] If $X\in{\mathcal{I}}$ and $Y\subseteq{X}$ then $Y\in{\mathcal{I}}$
\item[(I3)] If $U$ and $V$ are members of $\mathcal{I}$ with $|U|<|V|$, then there exists $x\in{V-U}$ such that $U\cup\{x\}\in{\mathcal{I}}$.
\end{itemize}
\end{definition}
Given a matroid $M=(E,\mathcal{I})$, the set $E$ is called the \emph{ground set} of $M$ and the members of $\mathcal{I}$ are the \emph{independent sets} of $M$. Furthermore, any subset of $E$ not in $\mathcal{I}$ is called a \emph{dependent set} of $M$, while a minimal dependent set is called a \emph{circuit} of $M$.
Let $E$ be a finite set of vectors from a vector space over a field $\mathbb{F}$ and let $\mathcal{I}$ be the collection of linearly independent subsets of $E$; then it can be proved that $M=(E,\mathcal{I})$ is a matroid, called a \emph{vector matroid} and denoted by $M[A]$, where $A$ is a matrix whose columns are the vectors of the ground set. It can be easily shown that there is a one-to-one correspondence between the linearly independent columns of $A$ and the independent sets of $M$, so the matroid $M$ can be fully characterised by the matrix $A$. The matrix $A$ is called a \emph{representation matrix} of $M$, and we also say that $M$ is $\mathbb{F}$-representable, where $\mathbb{F}$ is the field to which the elements of $A$ belong. Suppose now that we delete from $A$ all the linearly dependent rows, and from the matrix $A'$ so obtained we choose a basis $B$. Clearly, linear $\mathbb{F}$-independence of columns is not affected by such a deletion of rows. By pivoting on non-zero elements of $B$, we can transform $A'$ into the matrix $[I\;B']$. Pivoting does not affect the linear $\mathbb{F}$-independence of the columns of a matrix and, thus, $M=M[I\;B']$. The matrix $B'$ is called a \emph{compact representation matrix} of $M$. Two matrices are \emph{projectively equivalent} if one can be obtained from the other by elementary row operations and nonzero column scaling. A matroid $M$ is called \emph{uniquely representable} over some field $\mathbb{F}$ if and only if any two representation matrices of $M$ (over $\mathbb{F}$) are projectively equivalent. A matroid representable over every field is called \emph{regular}. Furthermore, there is a clear connection between regular matroids and TU matrices. Specifically, any TU matrix is the representation matrix of some regular matroid, and any regular matroid has a TU representation matrix (over $\mathbb{R}$).
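To make the matroid axioms concrete (an illustration of ours, not from the paper), one can verify (I1)--(I3) by brute force for the vector matroid of the $2\times 3$ matrix with columns $e_1$, $e_2$ and $e_1+e_2$; its unique circuit is the full column set.

```python
from itertools import combinations

# Columns of A = [[1, 0, 1], [0, 1, 1]]: e1, e2, e1+e2 (all nonzero).
cols = {0: (1, 0), 1: (0, 1), 2: (1, 1)}

def independent(S):
    """Columns indexed by S are linearly independent over the rationals."""
    S = tuple(S)
    if len(S) > 2:      # only two rows, so the rank is at most 2
        return False
    if len(S) == 2:
        (a, b), (c, d) = cols[S[0]], cols[S[1]]
        return a * d - b * c != 0
    return True         # the empty set, or a single nonzero column

E = set(cols)
subsets = [frozenset(c) for k in range(len(E) + 1) for c in combinations(E, k)]
I = [S for S in subsets if independent(S)]

assert frozenset() in I                                   # (I1)
assert all(T in I for S in I for T in subsets if T <= S)  # (I2)
assert all(any(U | {x} in I for x in V - U)               # (I3)
           for U in I for V in I if len(U) < len(V))

# Circuits: minimal dependent sets.
dep = [S for S in subsets if S not in I]
circuits = [S for S in dep if not any(T < S for T in dep)]
print(circuits)  # [frozenset({0, 1, 2})]
```

Here every proper subset of columns is independent, so the only dependent set, $\{e_1,e_2,e_1+e_2\}$, is itself minimal and hence the unique circuit.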
Let $G$ be an ordinary graph and let $\mathcal{I}$ be the collection of edge sets inducing an acyclic subgraph of $G$. Then it can be shown that the pair $(E(G), \mathcal{I})$ is a matroid, called the \emph{graphic matroid} of $G$ and denoted by $M(G)$. If $A$ is the incidence matrix of an orientation of $G$ (i.e. the directed graph obtained from $G$ by assigning a direction to each edge), then it can be shown that $M(G)$ is isomorphic to $M[A]$, and we write $M(G)\cong{M[A]}$. Thus, for any network matrix $N$ with respect to some
spanning tree of $G$, we have that $M(G)\cong{M[N]}$, since the way we obtain $N$ from $A$ is also the way we obtain from $A$ a compact representation matrix of $M[A]$.
The ordered pair $(E,\{E-{S}:S \notin{\mathcal{I}}\})$ is a matroid called the \emph{dual matroid} of $M$, denoted by $M^{*}$. It is clear that $(M^{*})^{*}=M$. The prefix `co' is used to dualize a term; for example, a matroid is called \emph{cographic} if it is the dual of a graphic matroid. We should note that not every class of matroids is closed under duality; for example, the class of regular matroids is closed under duality, while the class of graphic matroids is not.
Any matroid which can be obtained from $M$ by a series of operations called {\em deletions} and {\em contractions} is called a \emph{minor} of $M$ (see e.g. Section~3.1 in~\cite{Oxley:06}). The \emph{rank} of a matroid $M$, denoted by $r(M)$, equals the cardinality of a maximal independent set of $M$. For some positive integer $k$, a partition $(X,Y)$ of $E(M)$ is called a \emph{$k$-separation} of $M$ if the following two conditions are satisfied: (i) $\min\{|X|,|Y|\}\geq k$, and (ii) $r_{M}(X)+r_{M}(Y)-r(M) \leq k-1$.
Finally, we say that $M$ is \emph{$k$-connected} if it does not have an $l$-separation for $1\leq l \leq k-1$.
\section{A $k$-sum Decomposition of Strongly Unimodular Matrices} \label{sec_ksm}
The following two results (Lemmas~\ref{lem_ew} and~\ref{lem_su22}) can be obtained easily from the definition of SU matrices and the fact that TU matrices are closed under deletions of rows and columns~\cite{NemWols:1988}. The proof of Lemma~\ref{lem_ew} is straightforward and is omitted.
\begin{lemma} \label{lem_ew}
Every submatrix of a strongly unimodular matrix is strongly unimodular.
\end{lemma}
\begin{lemma} \label{lem_su22}
A TU matrix having at most two non-zeros in every column (row) is SU.
\end{lemma}
\begin{proof}
Let $A$ be a TU matrix with at most two non-zeros in every column. The case in
which $A$ has two non-zeros in every row can be handled in much the same way. Let us set
a nonzero of column $i$ of $A$ to $0$ and call $A'$ the matrix so-obtained. Now every submatrix of $A'$ either equals the corresponding
submatrix of $A$, or we can expand the determinant of the submatrix of $A'$
along column $i$ (which has at most one nonzero, being $\pm{1}$) and observe that this
determinant is equal, up to $\pm{1}$ scaling, to
the determinant of a submatrix of $A$.
\end{proof}
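Both definitions admit a direct, if exponential, computational check. The sketch below is our own illustration: \texttt{is\_tu} enumerates all square submatrices, and \texttt{is\_su} follows the definition of strong unimodularity used in this paper by zeroing each nonzero entry in turn.

```python
import numpy as np
from itertools import combinations

def is_tu(M, tol=1e-9):
    """Total unimodularity by brute force: every square submatrix must
    have determinant 0 or +-1. Exponential; use on small matrices only."""
    M = np.array(M, dtype=float)
    m, n = M.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = abs(np.linalg.det(M[np.ix_(rows, cols)]))
                if d > tol and abs(d - 1) > tol:
                    return False
    return True

def is_su(M):
    """M is SU iff M is TU and remains TU after replacing any single
    nonzero entry by zero."""
    M = np.array(M, dtype=float)
    if not is_tu(M):
        return False
    for i, j in zip(*np.nonzero(M)):
        M2 = M.copy()
        M2[i, j] = 0.0
        if not is_tu(M2):
            return False
    return True

# A TU matrix with two nonzeros in every column passes, per the lemma:
print(is_su([[1, 0, 1], [-1, 1, 0], [0, 1, 1]]))   # a network matrix
```

Such a check is only feasible for small instances, which is precisely why the decomposition developed in this paper is of interest.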
\noindent
As shown in the following result, SU matrices are closed under fundamental matrix operations.
\begin{lemma} \label{lem_opra}
SU matrices are closed under the following operations:
\begin{itemize}
\item [(i)] transposing,
\item[(ii)] adding a zero row or column,
\item [(iii)] adding a unit column or a unit row, and
\item [(iv)] repeating a column or a row
\end{itemize}
\end{lemma}
\begin{proof}
Part (i) is trivial since the determinant of any submatrix remains unchanged under transposing. For (ii), let $A'$ be the matrix obtained from the addition of a zero row or column to a matrix $A$. Clearly, the replacement of any nonzero of $A'$ by a zero has to take place in the submatrix of $A'$ which is equal to $A$. But $A$ is SU and therefore the matrix so-obtained is a TU matrix plus a zero column (row). The result now follows from the fact that TU matrices are closed under the addition of a zero row or column~\cite{Schrijver:86}.
For (iii), let us add a unit column $a$ to an SU matrix $A$ and call $A'=[A\;a]$ the matrix so-obtained. The case in which a unit row is added can be handled similarly. If we change the nonzero of column $a$ to zero then this is equivalent to adding a zero column to a TU matrix, and therefore the matrix so-obtained remains TU. If we change any other nonzero of $A'$ to zero then this has to be an element of the part $A$ of $A'$; let us change such a nonzero to zero and call $A''=[B\;a]$ the new matrix. We shall show that any submatrix of $A''$ is TU. Obviously, any submatrix of $B$ is TU because $A$ is an SU matrix. In the remaining case, we can expand the determinant of a submatrix along column $a$ and observe that this determinant is a $\pm{1}$ multiple of the determinant of a submatrix of $B$.
For (iv), let $A'=[A\; a_1]$ be an SU matrix and let $a_1$ be a column of $A'$
which we repeat in order to construct the matrix $[A \;a_1 \;a_1]$. We note
here that the case of repeating a row can be handled in the same way. The only
case which has to be examined is the one in which a nonzero element of a
column $a_1$ becomes zero, since for all the other cases all the submatrices
of the matrix obtained are easily checked to be TU. Let $a_1'$ be the column
obtained by turning a nonzero of $a_1$ to zero; then the only
submatrices of $A''=[A \; a_1' \; a_1]$ which have to be examined for being TU are
those containing parts of both columns $a_1'$ and $a_1$, since all the other
submatrices are trivially TU. After expanding now the determinant of such a
submatrix of $A''$ along the column $a_1'$ and also expanding the determinant of
the same submatrix of $[A \; a_1 \; a_1]$ along $a_1$ we see that these two
determinants differ by a determinant of a TU matrix. Thus, these
determinants differ by $0$ or $\pm{1}$. But the determinant of the submatrix
of $[A \; a_1 \; a_1]$ is equal to zero and therefore we have that the
determinant of the corresponding submatrix of $A''$ is either $0$ or $\pm{1}$.
\end{proof}
In what follows the operations of $k$-sum $(k=1,2,3)$ are of central importance.
\begin{definition}\label{def_k-sums}
If $A, B$ are matrices, $a,d$ are column vectors and $b,c$ are row vectors of
appropriate size in $\mathbb R$ then we define the following matrix operations
\begin{description}
\item[1-sum:] $A\oplus_1 B:=\begin{bmatrix}A & 0 \\ 0 & B\end{bmatrix}$
\item[2-sum:] $\begin{bmatrix}A & a\end{bmatrix}\oplus_2 \begin{bmatrix}b\\B\end{bmatrix}:=\begin{bmatrix}A & ab\\0&
B\end{bmatrix}$
\item[3-sum:] $\begin{bmatrix}A & a & a\\c & 0 & 1\end{bmatrix}\oplus_3 \begin{bmatrix}1 & 0 & b\\d & d & B\end{bmatrix}:=
\begin{bmatrix}A & ab\\dc& B\end{bmatrix}$ or \\
\hspace*{.13in}$\begin{bmatrix}A & 0 \\b & 1 \\ c & 1 \end{bmatrix}\oplus^3 \begin{bmatrix}1 & 1 & 0\\
a& d & B\end{bmatrix}:=\begin{bmatrix}A & 0 \\ D & B\end{bmatrix}$ \\
where, in the $\oplus^3$, $b$ and $c$ are $\mathbb{R}$-independent row vectors and $a$ and $d$ are $\mathbb{R}$-independent column vectors
such that $[\frac{b}{c}]=[D_1|\bar{D}]$, $[a|d]=[\frac{\bar{D}}{D_2}]$ and $\bar{D}$ is a square non-singular matrix.
Then, $D=[a|d]\bar{D}^{-1}[\frac{b}{c}]$.
\end{description}
\end{definition}
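The $1$-sum and $2$-sum are simple enough to state directly in code; the NumPy sketch below (our own illustration) composes the blocks exactly as in Definition~\ref{def_k-sums}.

```python
import numpy as np

def one_sum(A, B):
    """A (+)_1 B: the block-diagonal matrix [[A, 0], [0, B]]."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    Z1 = np.zeros((A.shape[0], B.shape[1]))
    Z2 = np.zeros((B.shape[0], A.shape[1]))
    return np.block([[A, Z1], [Z2, B]])

def two_sum(Aa, bB):
    """[A a] (+)_2 [b; B]: the matrix [[A, a b], [0, B]], where a is the
    last column of the first argument and b the first row of the second."""
    Aa, bB = np.atleast_2d(Aa), np.atleast_2d(bB)
    A, a = Aa[:, :-1], Aa[:, -1:]
    b, B = bB[:1, :], bB[1:, :]
    Z = np.zeros((B.shape[0], A.shape[1]))
    return np.block([[A, a @ b], [Z, B]])
```

The rank-one block $ab$ in the $2$-sum is what makes the closure proofs below (zeroing a nonzero inside $ab$) non-trivial.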
Network matrices and TU matrices are known to be closed under these operations (see \cite{PiPa:09} and \cite{Schrijver:86}, respectively). These operations were originally defined in the more general framework of regular matroids in~\cite{Seymour:1980}, and here we present the special form of these operations as applied to compact representation matrices over $\mathbb{R}$, i.e. TU matrices.
In the lemmas that follow we show that SU matrices are closed under the $1$-sum and $2$-sum operations.
\begin{lemma} \label{lem_1-s}
If $A$ and $B$ are SU matrices then the matrix
$N=A\oplus_{1}B=\begin{bmatrix}A & 0\\0& B\end{bmatrix}$ is an SU matrix.
\end{lemma}
\begin{proof}
Since $A$ and $B$ are TU and from the fact that TU matrices are closed under $1$-sums we have that $N$ is TU. It remains to be shown that if we change a nonzero of the submatrix $A$ (or $B$) of $N$ to zero then the matrix $N'=\begin{bmatrix}A' & 0\\0& B'\end{bmatrix}$ obtained by this change is TU. Since $A$ and $B$ are SU we have that $A'$ and $B'$ are TU and from the fact that TU matrices are closed under the $1$-sum operation we have that $N'$ is TU as well.
\end{proof}
\begin{lemma} \label{lem_2-s}
If $A=\begin{bmatrix}A' & a\end{bmatrix}$ and
$B=\begin{bmatrix}b\\B'\end{bmatrix}$ are SU matrices then the matrix
$N=A\oplus_2 B=\begin{bmatrix}A' & ab\\0&
B'\end{bmatrix}$ is an SU matrix.
\end{lemma}
\begin{proof}
Since TU matrices are closed under $2$-sums we have that the matrix $N$, which
is the $2$-sum of the TU matrices $A$ and $B$, is TU. It remains to be shown
that changing a nonzero of $N$ to zero the matrix $N'$ so-obtained is also TU.
We consider the following two cases separately: (i) we replace a nonzero of
the submatrix $A'$ or $B'$ of $N$ by zero, and (ii) we replace a nonzero
element of the $ab$ submatrix of $N$ by zero.
For case (i) we can assume without loss of generality that we change a nonzero element of $A'$ to zero and let us call $\bar{N}=\begin{bmatrix} \bar{A'} & ab\\0&
B'\end{bmatrix}$ the matrix so-obtained (the case in which a nonzero element of $B'$ is changed is similar). Therefore, matrix $\bar{N}$ is the $2$-sum of the matrix $\begin{bmatrix} \bar{A'} & a\end{bmatrix}$ and $\begin{bmatrix}b\\B'\end{bmatrix}$, where $\begin{bmatrix} \bar{A'} & a\end{bmatrix}$ is a TU matrix since it is obtained from the SU matrix $A$ by replacement of a nonzero by a zero, and $\begin{bmatrix}b\\B'\end{bmatrix}$ is TU since it is equal to matrix $B$. From the fact that TU matrices are closed under $2$-sums the result follows.
For case (ii), let $N'$ be the matrix obtained from changing a nonzero of the
$ab$ part of $N$ to zero. We shall show that $N'$ is TU. Since SU matrices are closed under row and column permutations, we can assume that
$N'=
\left[
\begin{array}{cc|c}
A' & a_1 & ab_2 \\
0 & b_1 & B_1
\end{array}
\right]
$, where $a_1$ contains the nonzero that has been changed and thus differs from
column $a$ only in that element, $b_1$ is the
first column of $B'$ and $B_1$ is the rest of it, i.e. $B'=[b_1 \; B_1]$, and
$B$ has as its first row the vector $[1 \; b_2]$, where the first element is $1$ since we assumed that $a$ is not a zero vector, i.e.
$B=\left[\begin{array}{cc}
1 & b_2 \\
b_1 & B_1
\end{array}
\right]$. We can easily see that $N'$ is the $3$-sum of the following two matrices
\[
\hat{A}=
\left[
\begin{array}{cc|cc}
A' & a_1 & a & a \\
0 & 1 & 0 & 1
\end{array}
\right]
\
\textrm{ and }
\hat{B}=
\left[
\begin{array}{ccc}
0 & 1 & b_2 \\
b_1 & b_1 & B_1
\end{array}
\right]
\]
Since TU matrices are closed under $3$-sums, it suffices to show that each of
$\hat{A}$ and $\hat{B}$ is TU. We know that $[A' \; a \; a]$ is SU because of
Lemma~\ref{lem_opra}~(iv); moreover, from (iii) of the same Lemma we have that
$
\left[
\begin{array}{ccc}
A' & a & a \\
0 & 0 & 1
\end{array}
\right]
$
is SU. Applying again (iv) of Lemma~\ref{lem_opra}, we have that
$\tilde{A}=
\left[
\begin{array}{cc|cc}
A' & a & a & a \\
0 & 1 & 0 & 1
\end{array}
\right]
$
is SU. Thus, changing a specific nonzero from a column
$\left[
\begin{array}{r}
a \\
1
\end{array}
\right] $ of $\tilde{A}$ to zero we obtain
$\hat{A}$ which has to be TU.
For $\hat{B}$ now, by the fact that $B$ is SU and Lemma~\ref{lem_opra}, the matrix $\tilde{B}=
\left[
\begin{array}{rrr}
1 & 1 & b_2 \\
b_1 & b_1 & B_1
\end{array}
\right]
$ is SU. Thus, replacing a $1$ of a column
$\left[
\begin{array}{r}
1 \\
b_1
\end{array}
\right] $ of $\tilde{B}$ by $0$ we obtain the matrix $\hat{B}$, which therefore has to be TU. Since both
$\hat{A}$ and $\hat{B}$ are TU the result follows.
\end{proof}
In what follows we shall make use of the following regular matroid decomposition theorem by Seymour~\cite{Seymour:1980}.
\begin{theorem} \label{th_Seym}
Every regular matroid $M$ may be constructed by means of $1$-, $2$-, and
$3$-sums starting with matroids each isomorphic to a minor of $M$ and each
either graphic or cographic or isomorphic to $R_{10}$.
\end{theorem}
\noindent
The $R_{10}$ regular matroid is a ten-element matroid, which can be found in~\cite{Oxley:06, Truemper:98}; up to row and column permutations and scaling of rows and columns by $-1$, it has exactly two totally unimodular compact representation matrices, $B_1$ and $B_2$:
\begin{equation}\label{eq_B1}
B_1=
\kbordermatrix{\mbox{}& 1 & 2 & 3 & 4 & 5 \\
1 & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} \\
2 & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} \\
3 & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} \\
4 & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} \\
5 & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1}
}
\mspace{40mu}
B_2=\kbordermatrix{\mbox{}& 1 & 2 & 3 & 4 & 5 \\
1 & 1 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 0 & 0 \\
3 & 1 & 0 & 1 & 1 & 0 \\
4 & 1 & 0 & 0 & 1 & 1 \\
5 & 1 & 1 & 0 & 0 & 1 \\
}
\end{equation}
A consequence of Theorem~\ref{th_Seym}
is the following construction (Theorem~\ref{Seymour_matrix}) for totally unimodular
matrices, which appears in \cite{Seymour:95,Truemper:98}.
\begin{theorem} \label{Seymour_matrix}
Any TU matrix is, up to row and column permutations and scaling by $\pm{1}$ factors, a network matrix, the transpose of a network matrix, the matrix $B_1$
or $B_2$ of (\ref{eq_B1}), or may be constructed recursively from these matrices using matrix $1$-, $2$- and $3$-sums.
\end{theorem}
According to Theorem~\ref{Seymour_matrix}, the building blocks of totally
unimodular matrices are network matrices and their transposes as well as the
matrices $B_1$ and $B_2$ in \eqref{eq_B1}.
\begin{lemma}\label{lem_bb}
$B_1$ and $B_2$ are not SU.
\end{lemma}
\begin{proof}
If we change the $(4,3)$ element of $B_1$ from $-1$ to $0$ then in the matrix so-obtained the
$3\times{3}$ submatrix defined by rows $3,4$ and $5$ and columns $2,3$ and $4$
has determinant equal to $+2$. Therefore, $B_1$ is not SU. Similarly, if we change the $(4,1)$ element of
$B_2$ from $+1$ to $0$ then in the matrix so-obtained, the $3\times{3}$
submatrix defined by rows $3,4$ and $5$ and columns $1,4$ and $5$ has
determinant of absolute value $2$ and thus, $B_2$ is not SU.
\end{proof}
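The two determinant computations in this proof are easy to reproduce numerically; the following check (our own, using NumPy) confirms that each zeroed matrix contains a $3\times 3$ submatrix of determinant $\pm 2$.

```python
import numpy as np

B1 = np.array([[ 1,  0,  0,  1, -1],
               [-1,  1,  0,  0,  1],
               [ 1, -1,  1,  0,  0],
               [ 0,  1, -1,  1,  0],
               [ 0,  0,  1, -1,  1]], dtype=float)
B2 = np.array([[1, 1, 1, 1, 1],
               [1, 1, 1, 0, 0],
               [1, 0, 1, 1, 0],
               [1, 0, 0, 1, 1],
               [1, 1, 0, 0, 1]], dtype=float)

# Zero the (4,3) entry of B1 (1-indexed); rows 3,4,5 and columns 2,3,4
# then span a submatrix of determinant +-2, so B1 is not SU.
B1z = B1.copy(); B1z[3, 2] = 0
d1 = np.linalg.det(B1z[np.ix_([2, 3, 4], [1, 2, 3])])

# Zero the (4,1) entry of B2; rows 3,4,5 and columns 1,4,5 give a
# submatrix of determinant +-2, so B2 is not SU either.
B2z = B2.copy(); B2z[3, 0] = 0
d2 = np.linalg.det(B2z[np.ix_([2, 3, 4], [0, 3, 4])])
```

(The sign of each determinant depends on the chosen row and column order; only the magnitude matters for total unimodularity.)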
By Theorem~\ref{Seymour_matrix} and Lemma~\ref{lem_bb} we obtain the following result.
\begin{theorem}
Any SU matrix is, up to row and column permutations and scaling by $\pm{1}$ factors, a network
matrix or the transpose of a network matrix, or may be constructed recursively from these matrices using matrix $1$-, $2$- and $3$-sums.
\end{theorem}
The following theorem, known as the splitter theorem for regular matroids, is one of the most important
steps which led to the regular matroid decomposition theorem~\cite{Seymour:1980}.
\begin{theorem}\label{th_r100}
Every regular matroid can be obtained from copies of $R_{10}$ and from
$3$-connected minors without $R_{10}$ minors by a sequence of $1$-sums and $2$-sums.
\end{theorem}
Combining the above we can now state the main result of this section.
\begin{theorem} \label{th_lak}
A matrix is SU if and only if it is decomposable via $1$- and $2$-sums into strongly unimodular matrices
representing $3$-connected regular matroids without $R_{10}$ minors.
\end{theorem}
\begin{proof}
The ``if part'' follows directly from Lemmas~\ref{lem_1-s} and~\ref{lem_2-s}. For the ``only if'' part, let $A$ be an SU matrix. By definition, $A$ is TU and therefore, by Theorem~\ref{th_r100}, it may be obtained by $1$- and $2$-sums from matrices representing $R_{10}$ and $3$-connected matroids without $R_{10}$ minors. By Lemma~\ref{lem_bb}, the two unique representation matrices of $R_{10}$ are not SU and therefore $A$ can only be obtained from matrices representing $3$-connected matroids without $R_{10}$ minors.
\end{proof}
In view of Theorem~\ref{th_lak} we can see that an SU matrix can be decomposed via $1$-sums and $2$-sums into a special class of SU matrices.
This class will be characterised in the following section.
\section{The Network Structure of the Decomposition Blocks} \label{sec_3c}
By Theorem~\ref{th_lak} we have that SU matrices are decomposable into smaller SU matrices which represent $3$-connected regular matroids without
$R_{10}$ minors. In this section we shall characterise the structure of these smaller matrices in Theorem~\ref{th_final}.
It is known that any $3$-connected binary matroid contains the wheel matroid $\mathcal{W}_3$ as a minor (Lemma~5.2.10 in~\cite{Truemper:98}), that is, the graphic matroid of the wheel graph $W_3$ (i.e. the undirected graph obtained from the graphs in Figure~\ref{fig_w3} by omitting the directions).
In the following result we show that there exist two TU representation matrices for $\mathcal{W}_3$, one SU and one non-SU.
\begin{lemma} \label{lem_w3ne}
Up to row and column permutations and scaling by $-1$, the matroid $\mathcal{W}_3$ has two different totally unimodular compact representation matrices, namely
\begin{enumerate}
\item[(i)] an SU representation
$
N_1=\left[
\begin{array}{rcc}
1 & 0 & 1 \\
-1 & 1 & 0 \\
0 & 1 & 1
\end{array}
\right]
$, and
\item[(ii)] a non-SU representation
$
N_2=\left[
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{array}
\right]
$
\end{enumerate}
\end{lemma}
\begin{proof}
Since the graphic matroids are uniquely representable over any field, given a TU compact representation of $\mathcal{W}_3$
we can obtain any other compact representation by row and column permutations, scaling of rows and columns by $-1$ and pivoting.
Since $\mathcal{W}_3$ is a graphic matroid, each of its TU compact representation matrices is a network matrix as well.
Pivoting in a network matrix results in a network matrix with respect to another spanning tree of the same graph.
Specifically, up to graph isomorphism, graph $W_3$ has two different spanning trees which are depicted in Figure~\ref{fig_w3}, where solid edges
correspond to the tree edges. Thus, up to row and column permutations and scaling by $-1$, there are two different network matrices
representing $\mathcal{W}_3$; namely:
$
N_1=\left[
\begin{array}{rcc}
1 & 0 & 1 \\
-1 & 1 & 0 \\
0 & 1 & 1
\end{array}
\right]
$, and
$
N_2=\left[
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{array}
\right]
$. It is now easy to see that if we replace any nonzero of $N_1$ by a $0$ then all the matrices so-obtained are TU. On the other hand, if we replace the nonzero in the third row and second column of $N_2$ by a $0$ then the matrix so-obtained is not TU.
\end{proof}
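The non-SU claim for the second representation can be verified directly: zeroing the $(3,2)$ entry of $N_2$ leaves a $3\times 3$ matrix of determinant $2$. A minimal check (our own):

```python
import numpy as np

N2 = np.array([[1, 1, 0],
               [0, 1, 1],
               [1, 1, 1]], dtype=float)
N2z = N2.copy()
N2z[2, 1] = 0                      # zero the (3,2) entry (1-indexed)
d = round(np.linalg.det(N2z))      # determinant of the whole 3x3 matrix
```

Since $|d|>1$, the zeroed matrix is not TU, so $N_2$ is not SU.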
\begin{figure}
\caption{The two possible network representations of graph $W_3$ where the network matrix associated with (1)
is SU while in (2) it is non-SU.}
\label{fig_w3}
\end{figure}
We shall now prove the following important theorem which shows that SU representation matrices of $3$-connected regular matroids
can not have certain $2\times{2}$ matrices as submatrices.
\begin{theorem} \label{th_tr23}
If $N$ is an $m\times{n}$ representation matrix $(m,n\geq{3})$ of a $3$-connected regular matroid
containing, up to row and column permutations and scalings by $-1$, the submatrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$,
then $N$ is not SU.
\end{theorem}
\begin{proof}
Since $N$ is the representation matrix of a connected matroid we have that
it has an $M(W_2)$ minor (see Lemma~5.2.10 in \cite{Truemper:98}), where $W_2$ is the wheel graph with two spokes. Furthermore,
the matrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$, under any row and column permutations and scalings by $-1$
factors, displays $M(W_2)$. Enlarge this $2\times{2}$ submatrix
to a maximal submatrix containing only $1$s. Let us call $D$ that submatrix and index its
rows and columns by $R$ and $S$, respectively. Furthermore, in
the partitioned $N$, as it is depicted in \eqref{eq_1} below, each row of the submatrix $U$ and each column of
the submatrix $V$ is assumed to be nonzero. From our assumption that $D$ is
maximal we have that each row and each column of $U$ and $V$, respectively, must have at least one
zero element.
\begin{equation} \label{eq_1}
N =
\kbordermatrix{
& S & & Q & & \\
R & D & \vrule & V & \vrule & 0 \\ \cline{2-6}
P & U & \vrule & \{0,\pm{1}\} & \vrule & \{0,\pm{1}\} \\ \cline{2-6}
& 0 & \vrule & \{0,\pm{1}\} & \vrule & \{0,\pm{1}\}
}
\end{equation}
Let $BG(N)$ be the bipartite graph of $N$ and let $F$ be its
subgraph obtained from the deletion of the edges corresponding to the $1$s of
$D$. Since $N$ is the representation matrix of
a $3$-connected regular matroid, we have that there must exist a path in $F$
connecting a vertex of $R$ with a vertex of $S$ (see Lemma~5.2.11 in~\cite{Truemper:98}) which, due to the
bipartiteness of $F$, has to be of odd
length. If the length of that path is $3$ then
the matrix $N_2$ of Lemma~\ref{lem_w3ne} is a submatrix of $N$,
which implies that $N$ is not SU.
If the shortest path connecting a vertex of $R$ with a vertex of $S$ has
length greater than $3$ then we
will show that the matrix $N$ is also non-SU. Let us say
that the shortest path lies between the vertices $r_2$ and $s_2$ of $R$ and
$S$, respectively. Then $N$ will have the following
submatrix $M$:
\begin{equation*}
M=
\kbordermatrix{\mbox{}& q_1 & q_2 & & \ldots & & q_n & s_2 & s_1\\
r_2 & \pm{1} & 0 & 0 & & 0 & 0 & \pm{1} & \pm{1}\\
p_n & \pm{1} & \pm{1} & 0 & \ldots & 0 & 0 & 0& 0 \\
p_{n-1} & 0 & \pm{1} & \pm{1} & &0 &0 &0 &0\\
\vdots & & \vdots & & \ddots & & &\vdots & \\
p_1 & 0 & 0 & 0 & & 0 & \pm{1} & \pm{1} & 0\\
r_1 & 0 & 0 & 0 &\ldots & 0 & 0 & \pm{1} & \pm{1}
}
\end{equation*}
where $r_1,r_2\in{R}$, $s_1,s_2\in{S}$, $p_1,\ldots,p_n\in{P}$ and $q_1,\ldots,q_n\in{Q}$.
Moreover, $M$ has no zeros on the main diagonal or on the diagonal below it,
because of the path existing between $r_2$ and $s_2$. The submatrix of $M$ having rows indexed by $r_1$ and $r_2$ and columns indexed by
$s_1$ and $s_2$ is full of ones because it is a submatrix of $D$. Furthermore,
we have zeros in the position indexed by $r_1$ and $q_1$ and in the position
indexed by $p_1$ and $s_1$ because we can assume that there exists at least one
vertex of $R$ not being adjacent to $q_1$, which we call $r_1$, and similarly we
can assume that there exists a vertex of $S$ not being adjacent to $p_1$, which
we call $s_1$. All the other zeros in $M$ are due to the fact that the path
between $r_2$ and $s_2$ is the shortest between a vertex of $R$ and a vertex of
$S$ in the graph $F$.
We shall now show that matrix $M$ is not SU. If we
expand the determinant of $M$ along the first row then this determinant is equal
to the sum of the determinants of three TU matrices, each triangular with no
zero on the diagonal. It is now easy to see that there exists a nonzero in the
first row of $M$ such that, if we replace it by a zero and expand the determinant
of the matrix so-obtained along the first row, then the determinant of this
matrix is $2$ or $-2$. Therefore, $N$ has a non-SU submatrix $M$
and, by Lemma~\ref{lem_ew}, $N$ is not SU.
\end{proof}
Crama et al. in~\cite{CraLoPo:92} proved that if $A$ is an SU matrix then we can partition its rows as stated in the following theorem.
\begin{theorem} \label{th_cr11}
If $A$ is an SU matrix, then there exists a partition $(S_1,\ldots,S_k)$ of the rows of $A$ with the following properties:
\begin{itemize}
\item[(i)] every column of $A$ has $0, 1$ or $2$ nonzero entries in each $S_i$, for $i=1,\ldots,k$;
\item[(ii)] if a column has exactly one nonzero entry in some $S_i$, then all its entries in $S_{i+1},\ldots,S_k$ are zeros.
\end{itemize}
\end{theorem}
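Properties (i) and (ii) of Theorem~\ref{th_cr11} are straightforward to verify mechanically for a candidate row partition. The helper below is our own sketch, not from Crama et al.

```python
def check_partition(A, partition):
    """Check properties (i) and (ii) of a row partition (S_1,...,S_k):
    every column has 0, 1 or 2 nonzeros in each part, and once a column
    has exactly one nonzero in some part, all later parts are zero."""
    n_cols = len(A[0])
    for j in range(n_cols):
        seen_single = False
        for S in partition:
            nz = sum(1 for i in S if A[i][j] != 0)
            if seen_single and nz > 0:
                return False           # violates (ii)
            if nz > 2:
                return False           # violates (i)
            if nz == 1:
                seen_single = True
    return True

# The SU matrix N1 (from the wheel W_3) satisfies the theorem with the
# trivial one-part partition, since every column has exactly two nonzeros:
N1 = [[1, 0, 1], [-1, 1, 0], [0, 1, 1]]
```

A partition into three singleton parts, by contrast, fails property (ii) on every column of $N_1$.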
Since by (i) of Lemma~\ref{lem_opra}, SU matrices are closed under taking the transpose we can restate Theorem~\ref{th_cr11}
for the columns of an SU matrix. Consider an SU matrix $A'$ and let
$\mathcal{S}=(S_1, S_2,\ldots,S_k)$ be the partition of its rows as
determined by Theorem~\ref{th_cr11} and
$\mathcal{T}=(T_1, T_2,\ldots,T_l)$ be the partition of the rows of the transpose of $A'$
as determined by Theorem~\ref{th_cr11}. Then by permuting rows and
columns of $A'$ we can obtain the following SU matrix $A$:
\begin{equation} \label{eq_3}
A =
\kbordermatrix{
& T_1 & T_2 & \cdots & T_l \\
S_1 & A_{1,1} & A_{1,2} & \cdots & A_{1,l} \\
S_2 & A_{2,1} & A_{2,2} & \cdots & A_{2,l} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
S_k & A_{k,1} & A_{k,2} & \cdots & A_{k,l} \\
}
\end{equation}
where we have that each $A_{i,j}$ is the submatrix
of $A'$ defined by the rows of $S_i$ and columns of $T_j$. We are now ready to state the main result of this section.
\begin{theorem}\label{th_final}
Let $A$ be an SU representation matrix of a $3$-connected regular matroid, in
the form of (\ref{eq_3}). Then the following hold:
\begin{itemize}
\item[(i)] $A_{1,1}$ has $0$ or $2$ nonzeros in each column and row;
\item[(ii)] each column of $A_{1,j}$ has $0$ or $2$ nonzeros and each row of
$A_{i,1}$ has $0$ or $2$ nonzeros;
\item[(iii)] if an $A_{i,j}$ has $2$ non-zeros in each column and each
row then, up to row and column permutations,
\[
A_{i,j}=
\left[
\begin{array}{cccccc}
\pm{1} & & & & & \pm{1} \\
\pm{1} & \pm{1} & & & & \\
& \pm{1} & \pm{1} & & & \\
& & & \ddots & & \\
& & & & \pm{1} & \\
& & & & \pm{1} & \pm{1}
\end{array}
\right]
\]
\end{itemize}
\end{theorem}
\begin{proof}
For (i) and (ii), by way of contradiction, it is enough to observe that if there was a column (row)
with exactly one nonzero, then by Theorem~\ref{th_cr11} this column (row) would be a unit column (row). This
would mean that the matroid represented by $A$ has a $2$-separation (see e.g. Lemma~3.3.20 in~\cite{Truemper:98}), which contradicts our hypothesis that this matroid is $3$-connected.
For (iii), from Theorem~\ref{th_tr23} we have that $A_{i,j}$ cannot have the
matrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$ as a submatrix. It is now straightforward to see that $A_{i,j}$ has the form
described in (iii).
\end{proof}
\section{Conclusion}
A new decomposition theory for SU matrices, with building blocks that are matrices representing simple networks, has been proposed. Specifically, whenever an SU matrix is not a network matrix it still has a clear network structure, since each block in the aforementioned decomposition is the incidence matrix of a directed graph. Moreover, matroid decomposition results were related to SU matrices, with the prospect of being utilized in important real-life problems; one such field is that of complex networks modelling numerous real-life problems (see e.g.~\cite{EasKle:10,Jack:10,New:2010}). Most importantly, we believe that this decomposition may be used for the development of a recognition algorithm for SU matrices which does not depend on a total-unimodularity test and would be much more efficient, since it would utilize the network structure of the blocks of SU matrices.
\end{document}
Investors in this scheme are getting paid even during the lockdown. Have you invested?
The Post Office Monthly Income Scheme pays out every month
New Delhi. Everyone wants a good return on their investment, and amid the corona crisis nothing beats an investment that brings in money every month. If you have put money into the Post Office Monthly Income Scheme (MIS), it is likely proving profitable for you right now. MIS is a small savings scheme in which your money earns you an income every month. A lump-sum deposit in this Post Office scheme pays you a monthly income in the form of interest. The account has a maturity period of 5 years. Here are the details of the scheme.
Under the MIS scheme, an account can be opened either singly or jointly. In an individual account you can invest a minimum of Rs 1,000 and a maximum of Rs 4.5 lakh. In a joint account, however, up to Rs 9 lakh can be deposited. The scheme is especially beneficial for retirees and senior citizens.
If a joint account is opened, the interest income is shared equally among the account holders.
A joint account can be converted into a single account at any time, and a single account can likewise be converted into a joint one.
A joint application must be submitted to make such changes to the account.
These are the conditions for withdrawing money from an MIS account:
No money can be withdrawn within one year of opening the account.
If you withdraw between one and three years, 2% of the deposit is deducted and the rest returned.
If you withdraw any time after 3 years but before maturity, 1% of the deposit is deducted and the rest returned.
In special circumstances, the money deposited in this scheme can be withdrawn even before maturity.
The account can be transferred from one post office to another.
Once the 5-year maturity is complete, the amount can be reinvested.
A nominee can be appointed, so that in the event of the account holder's death the nominee receives the amount.
No TDS is deducted under the MIS scheme, but the interest earned is taxable.
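For illustration, the payout and premature-closure rules above can be expressed as a small calculator. The interest rate used here is an assumption for the example only; the actual MIS rate is revised quarterly by the government.

```python
# Illustrative calculator for the Post Office Monthly Income Scheme (MIS).
# The 6.6% annual rate below is an assumed placeholder, not the official rate.
def mis_monthly_income(principal, annual_rate=0.066):
    """Monthly interest payout on a lump-sum MIS deposit."""
    return principal * annual_rate / 12

def mis_early_withdrawal(principal, years_held):
    """Amount returned on premature closure, per the rules above:
    no exit in year 1, a 2% deduction between years 1 and 3, and a
    1% deduction from year 3 until the 5-year maturity."""
    if years_held < 1:
        raise ValueError("no withdrawal allowed in the first year")
    penalty = 0.02 if years_held < 3 else 0.01
    return principal * (1 - penalty)

income = mis_monthly_income(450_000)        # single-account maximum deposit
refund = mis_early_withdrawal(450_000, 2)   # exit after 2 years, 2% deducted
```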
Watch the new single for Donker Mag, entitled Pitbull Terrier over here. To buy a track, go here.
guess wat we gt cookin!
Donker Mag is due out in February 2014. In the meantime, check out the vid below for Cookie Thumper. I’ll also include a great documentary on Die Antwoord from someone that goes by Jimbo Stephens. If you’re a fan of either Yo-landi or Ninja, you’ll want to see this two-part series, entitled AKA: The Lives of Waddy Tudor Jones. It goes into quite a bit of depth on the previous incarnations of Die Antwoord and includes lots of very rare and very interesting footage.
And here’s a behind the scenes short on the making of Fatty Boom Boom. It covers all of the contributing visual artists that helped make such a vibrant and colourful piece. I think the best part is when Ninja reveals what his favourite words are. You may be surprised.
Finally, here’s Cookie Thumper, Track 2 on the upcoming album from Die Antwoord, Donker Mag, set for release next month.
Awesome! Thanks for the heads-up... been travelling and just getting back from Europe.
I had the pleasure of seeing Die Antwoord at Bonnaroo, and they put on a hell of a show. Not my usual musical taste but there’s just something about them that I love. Is there an artist that you were surprised that you liked when you started listening to them, or saw them live?
I’m constantly surprised by music and the people behind it. The best kind of discoveries are the kind that contradict your own suppositions, not just in music but in anything else, really. As far as live music goes, I rarely get a chance to go out, so most of my discoveries happen online, at least as far as music is concerned!
← DG Exclusive :: Flume’s Deluxe Edition Mixtape :: Review.
Characterised by bespoke natural finishes and features, twenty Isabella Place is a unique property that offers the perfect coastal hideaway.
Inside the home, you will be impressed by the high vaulted ceilings, natural timber beams and exposed brickwork. Open plan living areas feature large windows and doors, opening the home to surrounding trees, the warm northern sunlight and fresh coastal breeze. The house sits high on the hill, which offers a beautiful perspective of the Kiama township, Blowhole Point, Surf Beach and the ocean beyond.
The home is cool and comfortable in summer, and cosy with the warmth of a central fireplace in winter. Cook up a feast in the large kitchen with ample storage, plenty of bench space and beautiful ocean views. The living and dining areas flow through to a large balcony that offers the perfect place to enjoy a morning coffee while you admire the sun rising over the ocean.
The floor plan offers plenty of space, with four bedrooms and two lounge rooms for the family to relax. The main bedroom is spacious and features a newly renovated ensuite, high ceilings, northern sunlight and balcony access to appreciate the ocean glimpses.
The gardens which grace this property are akin to a rainforest retreat, complete with a wide variety of tropical ferns, native trees and established gardens. The covered outdoor entertaining area offers a great space to entertain friends and family in privacy, surrounded by a perfect tranquil garden setting.
Located in a quiet cul-de-sac within walking distance of the CBD and Surf Beach, you will enjoy shopping at the local produce markets, boutique shops, continental eateries and a wide choice of coffee shops.
If you've been searching for a completely unique home that offers an eclectic mix of natural textures, tones and intricate features, twenty Isabella is the perfect coastal hideaway.
\begin{document}
\ShortArticleName{Algebraic Formulation of a Data Set}
\AuthorNameForHeading{Wenqing (William) Xu}
\ArticleName{Deriving Compact Laws Based on Algebraic Formulation of a Data Set}
\Author{Wenqing (William) Xu$^\dagger$ and Mark Stalzer$^*$}
\Address{$^\dagger$California Institute of Technology
\EmailD{wxu@caltech.edu}
}
\Address{$^*$California Institute of Technology
\EmailD{stalzer@caltech.edu}
}
\begin{abstract}
In various subjects, there exist compact and consistent relationships between input and output parameters. Discovering such relationships, or compact laws, in a data set is of great interest in many fields, such as physics, chemistry, and finance. While data discovery has made great progress in practice thanks to the success of machine learning in recent years, the development of analytical approaches for finding the theory behind the data has been relatively slow. In this paper, we develop an innovative approach to discovering compact laws from a data set. By proposing a novel algebraic equation formulation, we convert the problem of deriving meaning from data into formulating a linear algebra model and searching for relationships that fit the data. A rigorous proof is presented to validate the approach. The algebraic formulation allows the search of equation candidates in an explicit mathematical manner. Search algorithms are also proposed for finding the governing equations with improved efficiency. For a certain type of compact theory, our approach assures convergence, and the discovery is computationally efficient and mathematically precise.
\end{abstract}
\section{Introduction}
Data-driven discovery, which involves finding meaning and patterns in data, has been experiencing significant progress in quantifying behaviors, complexity, and relationships among data sets \cite{stalzer}. In various subjects, such as physics, chemistry, and finance, there exist relationships between various parameters. These relationships can be discovered by proof or conjecture, or approximated under assumptions via the scientific method. The scientific method has been the mainstay of understanding the laws that govern the universe. The method is based on observations that provide data in order to develop theories to predict future observations. It often starts with a mathematical hypothesis, which is then verified by seeing how well the observed data fit a hypothesized model. However, the scientific method is rarely applied in the converse direction: formulating a plausible mathematical hypothesis using data.
Algorithms have been developed to autonomously discover these relationships using sets of data alone, organized into input and output data. This is accomplished by proposing a series of candidate equations, plugging the values of the data set into each equation, and determining how well the data fit each equation, typically using least squares methods. Notable progress has been made in applying different approaches \cite{predictive}.
$\\$
$\\$
Despite this progress, challenges still exist. One problem with existing approaches is that, without any assumptions on the relationship's format, arriving at the desired candidate equation is computationally slow. To address this, various algorithms enumerate through these candidates. These algorithms must use some method of quantifying equation complexity in order to organize the enumeration and ensure every candidate equation of that complexity is written. One such method is the representation of an equation as a tree, where the nodes represent operators, the leaves represent data, and the complexity is calculated as the sum of nodes and leaves \cite{Distilling}. However, the number of candidates still increases exponentially with respect to complexity, meaning any brute-force algorithm takes an unreasonable amount of time to reach and verify high-complexity equations against the data \cite{mining}.
$\\$
$\\$
The second problem concerns constants. Many natural relationships have constants as part of their equations. An algorithm that enumerates through candidate equations does not take into account their constants. The Pareto frontier technique can be used to calculate these constants for the candidate equations. However, this method gives only an approximation of the constants. In addition, this method does not explicitly rule out any candidate equations, as it accepts candidate equations and constants with a squared residual within a bound \cite{Distilling}. The sparsity of a given data set can also be used to bound the coefficients of the desired compact law \cite{sparse}. To the knowledge of the authors, there is currently no algorithm that explicitly rejects candidate equations that cannot be fitted with constants to the data, and also explicitly finds constants that allow candidate equations to be fitted to the data.
$\\$
$\\$
The third problem concerns narrowing down the enumeration of candidate equations. Brute-force methods lead to an unacceptable program run time \cite{runtime}. To combat this, algorithms have employed genetic algorithms and neural networks to speed up the search. Genetic algorithms introduce slight mutations in a candidate equation to single out operators and constants that fit the data well \cite{geneticdata}. Mutated equations that are promising, by some metric, generate equations with similar attributes, some with further mutations. This process is repeated until a candidate equation is found that fits the data \cite{genetic}.
$\\$
$\\$
To combat these challenges, various approaches have been proposed. Machine learning based on neural networks has proven effective in developing relationships from high-throughput experimental data in novel ways \cite{machinelearning}. It is recognized that, for many applications, it is far easier to train a system using desired input-output examples than by enumerating rules to obtain the desired response. Although convergence to the desired input-output relationships can be achieved via intensive and extensive training, there are no methods to prove that these machine learning algorithms converge onto a natural law based on the data \cite{practical}.
$\\$
$\\$
A new and prospective area of data-driven discovery is the development of automated science. Automated science involves creating algorithms that analyze data sets in order to create compact laws governing that data. A compact law refers to a mathematically explicit description or equation that exactly describes the data \cite{compactlaw}. Among recent advances, one approach is based on statistical and model-driven methods, for example, the use of Bayesian probabilistic methods and Markov models \cite{probability} as the basis of an intelligent system \cite{Prob}, and the expectation maximization algorithm, which converges to a maximum likelihood estimate based on incomplete data \cite{expectation}. Linear algebraic methods are applied to this field in order to improve run time and ensure convergence to a compact law. One such method is the use of randomized algorithms to decompose and diagonalize sparse matrices. As a result, this can be used to approximate a time dependent system, such as the Maxwell equations, using a Markov model, and thus approximate a system's behavior over time \cite{parallel}. In addition, the use of proper orthogonal decomposition on some sets of data can identify linearly dependent data sets to quickly classify bifurcation regimes in non-linear dynamical systems \cite{compressive}. Further, one method uses a library of functions acting on a sparse vector of constants; the resulting linear equation is then evaluated against the data. This method was used to re-derive the equations governing the chaotic Lorentz system and fluid vortex shedding behind an obstacle \cite{discovering}. However, this method relies on separating data into independent and dependent variables. The independent variables are used to construct the function library and generate the compact law governing the dependent variables. Not all data can be cleanly separated into dependent and independent variables, so a method of incorporating all variables into a compact law is needed.
$\\$
$\\$
In this paper, we develop an innovative approach to discovering compact laws. We propose a novel algebraic equation formulation such that constant determination and candidate equation verification can be explicitly solved with low computational time. The algebraic formulation allows us to represent the evolution of equation candidates in an explicit mathematical manner. We also derive general methods for searching through a family of candidate equations and verifying them with respect to the data. The searching algorithms are defined both conventionally and over a finite field to improve running time. The only assumption of our approach is that the compact law is in some general format and incorporates only constants and variables related to the provided data. There are no assumptions on the data. We show that there is guaranteed convergence toward a valid equation candidate. Thus, for a specific type of compact theory, the discovery can be computationally efficient and mathematically precise. The proposed approach may have implications in many fields of data science, such as re-deriving natural laws of physics, speculating in finance, and modeling chaotic, non-linear systems.
$\\$
$\\$
The paper is organized as follows. In section \ref{verif}, an algebraic formulation is proposed for discovering equation candidates in a data set. An algorithmic format to determine and verify whether a candidate equation fits the data with respect to constants is presented. Proofs of the theorems for equation validation are shown in section \ref{validation}. In section \ref{algo}, search algorithms are proposed for finding candidate equations. We show that the algorithm based on the finite field sieve has improved performance over the exhaustive search algorithm. In section \ref{compmeth}, numerical results are presented using the proposed approach to re-derive the van der Waals equation from raw data, followed by concluding remarks in section \ref{conclusion}.
\section{Algebraic Formulation}
\label{verif}
\subsection{Virtual Experiment Setup}
\label{vexp}
Suppose we are presented with some data set consisting of the inputs and outputs of some number of experiments; however, we do not know which data are the inputs and which are the outputs. For each category of the data, every data entry must correspond to an experiment.
We can organize the data set in the format of a virtual experiment, that is, a set of categories of data that relate to one another.
$\\$
Denote the set of $n$ characters representing the categories of our inputs/outputs for the virtual experiment as
\begin{equation}
F= \{F_1, \dots, F_n \}.
\end{equation}
$\\$
For example, consider a virtual experiment that allows us to re-derive the classical force laws. This experiment involves two particles on a plane, some distance apart. Particle one has mass, charge, and velocity, while the other has a fixed mass and charge. Both are in a uniform magnetic field perpendicular to the plane. The set of input categories would be the masses of the two particles, $m_1,m_2$, the charges of the two particles, $q_1,q_2$, the distance between the two particles, $r$, the velocity of the first particle, $v_1$, and the strength of the magnetic field, $B$; the output category is the acceleration of the first particle, $a$. This is denoted as
\[
F=\{ m_1, m_2, q_1, q_2, r, v_1, B, a \}.
\]
$\\$
As a result, the data sets which we will fit our force law equations onto will be of the form
\[
\{ m_{1,t}, m_{2,t}, q_{1,t}, q_{2,t}, r_t, v_{1,t}, B_{t}, a_t \}, 1 \leq t \leq r.
\]
The data from the force law experiment may be separated into input and output categories. However, given a data set, we need not assume whether each data category is an input or an output to discover an equation that describes the data. We can generalize the equations to only be in terms of the data set categories and constants, regardless of whether those categories are inputs or outputs.
\subsection{Equation Search Algorithm Format}
We define an equation search algorithm as one that enumerates through equation candidates of a certain format, verifies them, and returns those that describe the data. The algebraic equation candidates, the outputs of our search algorithm, will be of the form
\begin{equation}
0 = A_1 + \dots + A_s
\end{equation}
where each $A_i$, $1 \leq i \leq s$, is of the form
\begin{equation}
A_i=F_1^{f_1}\dots F_n^{f_n}, \quad f_j \in Z^{+} \cup \{0\}, \ 1 \leq j \leq n,
\end{equation}
and each $A_i$ is distinct from every other $A_l$, $l \neq i$, under permutation of the elements of $F$.
$\\$
This format includes all possible algebraic equations involving elements of $F$ under constants.
$\\$
Let expression $A_i$ evaluated with values $F_{1,t}, \dots, F_{n,t}$ be denoted as $A_{i,t}$. We then define how an equation candidate is determined to describe the data set.
\begin{definition}
\label{valideq}
Define a ``valid equation candidate'' of size $s$ and degree $d$ as an equation such that, over all $r$ experiments, there exist unique $k_2, \dots, k_s \in R$ such that for each experiment $t$,
\begin{equation}
0 = A_{1,t} + k_2A_{2,t} + \dots + k_sA_{s,t},
\end{equation}
and the maximum exponent of any $F_j$ over all $A_i$ is $d$.
\end{definition}
A valid equation candidate for the force laws \cite{classical} described in section $\ref{vexp}$ is $0=m_1m_2 + q_1q_2 + q_1v_1Br^2 + m_1ar^2$.
\subsection{Determination of Constants}
Suppose we have some equation candidate
\[
0 = A_1 + \dots + A_s.
\]
We then evaluate this equation for the $r$ virtual experiments, obtaining the numerical values of all $F_j$, and thus $A_i$. As a result, we can evaluate for the constants $k_2, \dots, k_s$ by solving the resulting matrix equation
\begin{equation}\label{datamat} \left[ \begin{array}{c}
-A_{1,1} \\
\vdots \\
-A_{1,r} \end{array}\right] =
\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right]
\left[ \begin{array}{c}
k_2 \\
\vdots \\
k_s \end{array}\right].
\end{equation}
Equation (\ref{datamat}) is equivalent to the matrix equation
\begin{equation} \left[ \begin{array}{c}
0 \\
\vdots \\
0 \end{array}\right] =
\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right]
\left[ \begin{array}{c}
k_2 \\
\vdots \\
k_s \end{array}\right]
- \left[ \begin{array}{c}
-A_{1,1} \\
\vdots \\
-A_{1,r} \end{array}\right].
\end{equation}
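As a numerical illustration (not part of the original derivation), the constants $k_2, \dots, k_s$ can be determined by solving equation (\ref{datamat}) in the least squares sense. The term values below are hypothetical, constructed so that $k_2=2$ and $k_3=-1$ fit exactly:

```python
import numpy as np

# Hypothetical values A_{i,t} of each additive term A_i, evaluated for
# r = 4 experiments on a candidate 0 = A_1 + k2*A_2 + k3*A_3.
A2 = np.array([1.0, 2.0, 3.0, 4.0])
A3 = np.array([5.0, 1.0, 2.0, 7.0])
A1 = -(2.0 * A2 - 1.0 * A3)      # chosen so A1 + 2*A2 - 1*A3 = 0 exactly

# Build the data matrix and right-hand side of the matrix equation:
A = np.column_stack([A2, A3])    # r x (s-1) data matrix
b = -A1                          # r-vector of -A_{1,t}

# Solve the (possibly overdetermined) system for k2, ..., ks.
k, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(k)                         # k2 = 2, k3 = -1
```

If the columns of the data matrix are linearly independent and the candidate is valid, the recovered constants are exact up to floating point error.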
\begin{definition}
Let a data matrix of equation candidate
\[
0 = A_1 + \dots + A_s.
\]
with $r$ experiments be defined as
\begin{equation}
\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right]
\end{equation}
\end{definition}
\begin{definition}
Let $A^*$ denote the conjugate transpose of $A$ such that if
\[
A=\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right],
\]
\begin{equation}
A^*=\left[ \begin{array}{ccc}
\overline{A_{2,1}} & \dots & \overline{A_{2,r}}\\
\vdots & \ddots & \vdots \\
\overline{A_{s,1}} & \dots & \overline{A_{s,r}}\end{array}\right].
\end{equation}
where $\overline{A_{i,j}}$ is defined as the complex conjugate of $A_{i,j}$.
\end{definition}
\begin{definition}
\label{MPI}
The Moore-Penrose left pseudoinverse of $A \in M(m,n,R)$ is defined as $A^+ \in M(n,m,R),m,n \in Z^+$ such that
\begin{equation}
\label{mprdef}
A^+A=I,
\end{equation} where $I$ is the $n \times n$ identity matrix \cite{penrose}.
$\\$
If the columns of $A$ are linearly independent, then the Moore-Penrose left pseudoinverse is calculated as
\begin{equation}
\label{mprcalc}
A^+=(A^*A)^{-1}A^*
\end{equation}
such that $A^+A=((A^*A)^{-1}A^*)A=(A^*A)^{-1}(A^*A)=I$ (\cite{penrose}, Theorem 2).
\end{definition}
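As a minimal sketch of definition \ref{MPI} (the toy matrix below is arbitrary, not from the paper's data), the left pseudoinverse can be computed directly from $A^+=(A^*A)^{-1}A^*$ and checked against the defining property $A^+A=I$:

```python
import numpy as np

# Toy 3x2 matrix with linearly independent columns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 7.0]])

# Left pseudoinverse: A^+ = (A^* A)^{-1} A^*.
A_plus = np.linalg.inv(A.conj().T @ A) @ A.conj().T

# A^+ A = I holds; A A^+ is in general only a projector, not I.
print(np.allclose(A_plus @ A, np.eye(2)))   # True
```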
\noindent
\section{Equation Validation}
\label{validation}
\subsection{Theorems}
We show that the equation validation question (Definition \ref{valideq}) is equivalent to a linear algebra question.
\begin{theorem}
\label{leftinverse}
Let $s \in Z^+$, $s \geq 2$. The Moore-Penrose left pseudoinverse $A^+$ of the data matrix
\[
A=\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right]
\]
can be computed and $(AA^+-I)\vec{b}=\vec{0}$, where $\vec{b}=\left[-A_{1,1}, \dots, -A_{1,r}\right]^T$, if and only if its corresponding equation candidate
\[
0 = A_1 + \dots + A_s
\]
is valid.
\end{theorem}
\begin{proof}
If the equation candidate
\[
0 = A_1 + \dots + A_s
\]
is valid, then by definition \ref{valideq} there exists a unique vector
\[
\vec{k} = \left[ \begin{array}{c}
k_2 \\
\vdots \\
k_s \end{array}\right], \text{ }k_2, \dots, k_s \in R \text{ such that}
\]
\begin{equation}
\left[ \begin{array}{c}
0 \\
\vdots \\
0 \end{array}\right] =
\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right]
\left[ \begin{array}{c}
k_2 \\
\vdots \\
k_s \end{array}\right]
- \left[ \begin{array}{c}
-A_{1,1} \\
\vdots \\
-A_{1,r} \end{array}\right].
\end{equation}
$\\$
Thus, if we define
\begin{equation}
\vec{b}=
\left[ \begin{array}{c}
-A_{1,1} \\
\vdots \\
-A_{1,r} \end{array}\right],
\end{equation}
we obtain the relationship
\begin{equation}
\label{relat1}
A\vec{k} = \vec{b}.
\end{equation}
As a result, the least squares problem $\min \lVert A\vec{k} - \vec{b} \rVert$ has a unique solution, and so the columns of $A$ are linearly independent (\cite{LSQ}, 2.4).
$\\$
From definition \ref{MPI}, we see that if the columns of $A$ are linearly independent, $A^*A$ is invertible, and so the Moore-Penrose left pseudoinverse can be computed as $A^+=(A^*A)^{-1}A^*$ (\ref{mprcalc}). As a result,
multiplying both sides of (\ref{relat1}) by $A^+$ gives us
\[A^+A\vec{k}=A^+\vec{b}, \] and using (\ref{mprdef}) yields
\begin{equation}
\label{eigen}
\vec{k}=A^+\vec{b}.
\end{equation}
Substituting (\ref{eigen}) in (\ref{relat1}) and subtracting $\vec{b}$ from both sides obtains
\begin{equation}
\label{eigenresult}
(AA^+-I)\vec{b}=\vec{0}.
\end{equation}
Conversely, assume that the Moore-Penrose left pseudoinverse of the data matrix $A$ can be computed as $A^+$ and that $(AA^+-I)\vec{b} = \vec{0}$. We then have equation
\begin{equation}
\label{init2}
(AA^+-I)\vec{b} = AA^+\vec{b}-\vec{b} = \vec{0}.
\end{equation}
Substituting $A^+\vec{b} = \vec{k}$ in (\ref{init2}) yields the desired result
\[
A\vec{k} - \vec{b} = \vec{0}.
\]
Moreover, the solution $\vec{k}=A^+\vec{b} \in R^{s-1}$ is unique, since the columns of $A$ are linearly independent.
$\\$
As a result, there exists a unique $k_2, \dots, k_s$ such that for all experiments $t$, $1 \leq t \leq r$,
\[
0 = A_{1,t} + k_2A_{2,t} + \dots + k_sA_{s,t},
\]
so by definition \ref{valideq}, the equation candidate
\[
0 = A_1 + \dots + A_s
\]
is valid.
\end{proof}
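The criterion of the theorem can be sketched numerically as follows; the matrices here are hypothetical stand-ins for a candidate's data matrix $A$ and vector $\vec{b}$, and the function name is ours:

```python
import numpy as np

def is_valid(A, b, eps=1e-9):
    """Theorem's criterion: the candidate is valid iff A^+ exists
    (columns of A linearly independent) and (A A^+ - I) b = 0."""
    gram = A.conj().T @ A
    # Columns are linearly independent iff the Gram matrix is invertible.
    if abs(np.linalg.det(gram)) < eps:
        return False
    A_plus = np.linalg.inv(gram) @ A.conj().T
    r = A.shape[0]
    return np.allclose((A @ A_plus - np.eye(r)) @ b, 0.0, atol=eps)

# Toy 3x2 data matrix with independent columns.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(is_valid(A, np.array([2.0, 3.0, 5.0])))   # True: b = 2*col1 + 3*col2
print(is_valid(A, np.array([2.0, 3.0, 0.0])))   # False: b not in column span
```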
\noindent We then show some theorems and corollaries that follow immediately from theorem \ref{leftinverse}.
\begin{theorem}
There exists $F_i \in F$ appearing in $A_1$ such that $F_i=0$ is valid if and only if $0=A_1$ is valid.
\end{theorem}
\begin{proof}
If there exists $F_i \in F$ appearing in $A_1$ such that $F_i = 0$ is valid, then $F_{i,1} = \dots = F_{i,r} = 0$. Evaluating yields $A_{1,1} = \dots = A_{1,r} = 0$, and so $A_1=0$ is valid.
$\\$
If $A_1=0$ is valid, write $A_1=B_1A_1^{'}$, where $B_1 \in F$. Since $R$ is a field, and all fields are integral domains \cite{abstract}, either $B_{1,1} = \dots = B_{1,r}=0$ or $A_{1,1}^{'} = \dots = A_{1,r}^{'} = 0$. In the first case, $B_1=0$ is valid. Otherwise, $A_1^{'}=0$ is valid, and we repeat the procedure until either some $B_i \in F$ appearing in $A_1$ satisfies $B_i=0$ valid, or $A_1^{'}=1$, and $1=0$ is false in a field by definition.
\end{proof}
\begin{corollary}
\label{sr}
If $r < s-1$, $0 = A_1 + \dots + A_s$ cannot be valid.
\end{corollary}
\begin{proof}
If $r < s-1$, then there are more columns than rows in the candidate equation's data matrix
\[
A=\left[ \begin{array}{ccc}
A_{2,1} & \dots & A_{s,1}\\
\vdots & \ddots & \vdots \\
A_{2,r} & \dots & A_{s,r}\end{array}\right],
\]
and so the columns of $A$ are not linearly independent. Thus, $A^*A$ is not invertible (\cite{leftI}, Theorem 3), and so $A^+$ cannot be computed. By theorem $\ref{leftinverse}$, $0 = A_1 + \dots + A_s$ cannot be valid.
\end{proof}
\begin{corollary}
\label{square}
If data matrix $A$ is square and invertible, its corresponding candidate equation is valid.
\end{corollary}
\begin{proof}
If $A$ is square and invertible, we have $A^+A=I=AA^+$. As a result, $(AA^+-I)\vec{b}=0\cdot\vec{b}=\vec{0}$. By theorem $\ref{leftinverse}$, the corresponding candidate equation is valid.
\end{proof}
\begin{corollary}
The equation $(AA^+-I)\vec{b}=\vec{0}$ holds if and only if $(1, \vec{b})$ is an eigenvalue-eigenvector pair of the matrix $AA^+$.
\end{corollary}
\begin{proof}
Assume $(AA^+-I)\vec{b}=\vec{0}$. Then $AA^+\vec{b}=\vec{b}$, and since $\vec{b} \neq \vec{0}$ by our equation candidate format, $\vec{b}$ is an eigenvector of $AA^+$ with eigenvalue $1$.
$\\$
Conversely, assume $\vec{b}$ is an eigenvector of $AA^+$ with eigenvalue $1$. Then, by definition, $AA^+\vec{b}=\vec{b}$, and so $AA^+\vec{b}-\vec{b}=(AA^+-I)\vec{b}=\vec{0}$.
\end{proof}
Next we see that by re-expressing equation validation and constant determination as linear algebraic operations, the computational complexity of the validation question (definition \ref{valideq}) can be determined.
\subsection{Computational Complexity}
We describe this problem as validating an equation candidate in the family of equation candidates involving elements of $F$, $|F| = n$, of size at most $s$, and of degree at most $d$ that satisfies the data of $r$ experiments. We assume that all additive and multiplicative operations are floating point operations.
Evaluating each term $A_{j,k}$, $1 \leq j \leq s-1$, $1 \leq k \leq r$, takes O($nd$) operations. Thus, constructing the data matrix $A$ will take O($rsdn$) time.
$\\$
As a result, determining whether a given equation candidate is valid involves calculating $A^*A$, determining the existence of $(A^*A)^{-1}$, calculating $AA^+$, and the value of $(AA^+-I)\vec{b}$. These operations are done in O($rs^2$), O($s^3$), O($rs^2$), and O($rs$) time respectively, where $s$ is the size of the equation candidate and $r$ is the number of experiments. By (corollary $\ref{sr})$, we see that $r \geq s-1$. Thus, if
\begin{equation}
\label{tdef}
t=\max(dn,r),
\end{equation}
the running time of checking whether an equation candidate is valid is O($tr^2$). We can now develop a search algorithm that applies this algorithm repeatedly to many different candidate equations to find one that is valid.
\section{Equation Candidate Search}
\label{algo}
\subsection{Exhaustive Search Algorithm}
\label{exh}
We have demonstrated an algorithm that can verify whether a given candidate equation is valid. However, the second half of the problem of deriving algebraic equations that fit our data set is a search algorithm that finds valid equation candidates in a certain family of candidate equations. Referring to \cite{sentences}, we can denote this problem as finding a valid equation candidate in the family of equation candidates involving elements of $F$, $|F| = n$, of size at most $s$, and of degree at most $d$ that satisfies the data of $r$ experiments, where $r \geq s-1$ (corollary \ref{sr}). We also bound $d$ such that $d \leq {r / n}$.
$\\$
We describe an exhaustive search algorithm for this problem as follows.
$\\$
\begin{enumerate}
\item Begin by finding all valid equation candidates of size 1. This simply entails finding all $F_i \in F$ such that $F_{i,t}=0$ for each experiment $t$.
\item Then find all valid equation candidates of size 2. This is begun by enumerating through all
$\\$
$A_1=F_1^{f_1}\dots F_n^{f_n}$, $0 \leq f_j \leq d$, $1 \leq j \leq n$, where no $F_j$ satisfies $F_{j,t}=0$ for every experiment $t$.
$\\$
For each $A_1$ we generate, we choose each $A_2$ generated as above such that $A_1 \neq A_2$. We then obtain a list of all possible equation candidates of the form $A_1 + A_2 = 0$.
\item For each candidate $A_1 + A_2 = 0$, we apply (\ref{leftinverse}) to verify the equation candidate is valid.
\item To find all valid equation candidates of size $k \leq s$, for each instance of $A_1 + \dots + A_{k-1} = 0$, we add an instance of $A_k$ generated as in step 2 such that $A_k \neq A_1, \dots,A_{k-1}$.
\item Repeat the inductive step until all candidate equations of size at most $s$ have been generated.
\end{enumerate}
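Steps 2--4 above rest on enumerating distinct monomials $A_i$ with bounded exponents and combining them into candidates. A minimal sketch of that enumeration (the function names are ours, and the trivial constant monomial is excluded for simplicity):

```python
from itertools import product, combinations

def monomials(n, d):
    """All exponent tuples (f_1, ..., f_n) with 0 <= f_j <= d,
    excluding the constant monomial (0, ..., 0)."""
    return [e for e in product(range(d + 1), repeat=n) if any(e)]

def candidates(n, d, s):
    """All size-s candidates 0 = A_1 + ... + A_s, represented as
    combinations of s distinct monomials (step 2, for s = 2, 3, ...)."""
    return combinations(monomials(n, d), s)

# Example: n = 2 variables, degree d = 1 -> monomials F2, F1, F1*F2.
print(monomials(2, 1))                   # [(0, 1), (1, 0), (1, 1)]
print(len(list(candidates(2, 1, 2))))    # 3 distinct size-2 candidates
```

Each candidate returned here would then be passed to the validity check of theorem \ref{leftinverse}.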
Generating each $A_i$ requires at most $n^d$ steps. Since $dn \leq r$, from (\ref{tdef}), we see that $t = r$. Thus, to check the validity of all candidate equations of size at most $s$, the number of steps is
\begin{equation}
\label{exhcomp}
nr+\sum_{i=2}^s n^{id} tr^2 \approx n^{ds} tr^2 \leq n^{\frac{rs}{n}} r^3=\left(n^{\frac{1}{n}}\right)^{rs} r^3.
\end{equation}
Combining (\ref{exhcomp}) with the fact that $e^{\frac{1}{e}} \geq x^{\frac{1}{x}}$ for all $x > 0$ yields
\begin{equation}
\label{exhcomplex}
\left(n^{\frac{1}{n}}\right)^{rs} r^3 \leq \left(e^{\frac{1}{e}}\right)^{rs} r^3.
\end{equation}
Using (\ref{exhcomp}) and (\ref{exhcomplex}), we see that the time complexity of this algorithm is $e^{O(1)rs}$.
$\\$
One drawback of this exhaustive search algorithm is that, in order to enumerate through all equations, the exponents of each variable of each additive term must be enumerated through as well. \textbf{\textit{Using a property of finite fields, there is a method to find all valid equation candidates without parsing through all exponents of a variable.}}
\subsection{Finite Field Sieve Algorithm}
\subsubsection{Introduction to Finite Fields}
We will explain some relevant properties of finite fields $\cite{fforder}$. A finite field of order $p$ is some set $F_p$ such that $|F_p|=p$, and two operations $+,*$ that satisfy some properties: For all $s_1,s_2,s_3 \in F_p$,
\begin{enumerate}
\item $s_1*s_2,s_1+s_2 \in F_p$.
\item $(s_1+s_2)+s_3=s_1+(s_2+s_3)$, $(s_1*s_2)*s_3=s_1*(s_2*s_3)$.
\item $s_1+s_2=s_2+s_1$, $s_1*s_2=s_2*s_1$.
\item There exists a unique $0,1 \in F_p$ such that $0+s_1=s_1$,$1*s_1=s_1$.
\item There exists a unique $s_3$ such that $s_3+s_1 = 0$ and, if $s_1 \neq 0$, a unique $s_4$ such that $s_4*s_1=1$.
\item $s_1*(s_2+s_3)=s_1*s_2+s_1*s_3$.
\end{enumerate}
One important property of a finite field is the order of an element $s \in F_p$. This is defined as the least exponent $d \in Z^+$ such that $s^d=1$. In a finite field where $p$ is prime, every non-zero element of $F_p$ has an order that divides $p-1$.
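This divisibility property can be checked directly; a small sketch for $p=7$ (the helper `order` is ours):

```python
# Orders of the nonzero elements of F_p, p prime: each divides p - 1.
p = 7

def order(s, p):
    """Least d >= 1 with s**d = 1 (mod p)."""
    d, x = 1, s % p
    while x != 1:
        x = (x * s) % p
        d += 1
    return d

orders = {s: order(s, p) for s in range(1, p)}
print(orders)   # {1: 1, 2: 3, 3: 6, 4: 3, 5: 6, 6: 2}
```

Every order listed divides $p-1=6$, as the property asserts.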
\subsubsection{Algorithm Preliminaries}
Assume we have some valid equation candidate of some degree $d$. If the data in our data sets are in some finite field $F_p$, where $p$ is prime, then each $A_i=F_1^{f_1}\dots F_n^{f_n}$ in our candidate is congruent to $F_1^{f_1'}\dots F_n^{f_n'}$, where $f_1'=f_1 \bmod (p-1)$, $\dots$, $f_n'=f_n \bmod (p-1)$. As a result, if a verification algorithm can be performed in a finite field, we can bound the exponents of the family of equation candidates to be searched through.
$\\$
We then show that, through a modified validation algorithm, it is possible to obtain a set of equation candidates such that one of those is a valid equation candidate.
\begin{theorem}
\label{ff}
Let there exist a homomorphism
\begin{equation}
\label{homo}
\varphi: Q \rightarrow Z/pZ,
\end{equation} where $p$ is prime, and let each $F_j \in Q,1 \leq j \leq n$. Also, let each $F_j'=\varphi(F_j), 1 \leq j \leq n$, and
\begin{equation}
A_i'=F_1'^{f_1}\dots F_n'^{f_n}, f_j \in Z^{+} \cup \{0\}, 1 \leq j \leq n.
\end{equation}
If $A_1 + \dots + A_s = 0$ is a valid equation candidate, then there exists a solution $\vec{x'} \in (Z/pZ)^{s-1}$ such that
\begin{equation}
\left[ \begin{array}{ccc}
A_{2,1}' & \dots & A_{s,1}'\\
\vdots & \ddots & \vdots \\
A_{2,r}' & \dots & A_{s,r}'\end{array}\right]
\vec{x'}
=\left[ \begin{array}{c}
-A_{1,1}' \\
\vdots \\
-A_{1,r}' \end{array}\right].
\end{equation}
\end{theorem}
\begin{proof}
Let $A$ be the equation matrix of $A_1 + \dots + A_s$ and
\[
\vec{b}=
\left[ \begin{array}{c}
-A_{1,1} \\
\vdots \\
-A_{1,r} \end{array}\right].
\]
We see that if there exists a unique $\vec{x} \in Q^{s-1}$ such that $A \vec{x}-\vec{b}=\vec{0}$, then there exists $k \in Z^+$, the least common denominator of all entries of $A$ and $\vec{b}$, such that $k \left( A \vec{x}-\vec{b} \right)=k\vec{0}=\vec{0}$, with $kA \in Z^{r \times (s-1)}$, $k\vec{b} \in Z^{r}$.
$\\$
We then see that $\exists \vec{x}'= k\vec{x}$ mod p such that $kA\vec{x}'-k\vec{b}=\vec{0}$ mod p.
$\\$
Define homomorphisms $\phi: Q \rightarrow kQ$, $\phi(q)=kq$, and $\Phi: Z \rightarrow Z/pZ$, $\Phi(kq)=kq$ mod p. Thus, we can define $\varphi: Q \rightarrow Z/pZ$ as $\varphi(q)=\Phi(\phi(q))$.
$\\$
Thus, we have that $\Phi(\phi(A \vec{x}-\vec{b}))=\varphi(A)\varphi(\vec{x})-\varphi(\vec{b})=\Phi(\phi(\vec{0}))=\vec{0}$, so there exists $\varphi(\vec{x})$ such that $\varphi(A)\varphi(\vec{x})=\varphi(\vec{b})$.
$\\$
\end{proof}
We can modify the algorithm of theorem \ref{leftinverse} to solve a linear system $A\vec{x}=\vec{b}$ over a finite field in O$(tr^2)$ time if such a solution exists, where the dimensions of $A$ are $r \times (s-1)$ $\cite{ffsolve}$. As a result, there exists a polynomial time algorithm to ``validate'' the equation over the finite field.
$\\$
\subsubsection{Algorithm Description}
We will outline an algorithm to search for valid equation candidates of size at most $s$ and degree at most $d$ that satisfies the data of $r$ experiments, where $r \geq s-1$. Let $p \leq \sqrt{d}$ be some prime. In addition, bound $d$ such that $d \leq {r / n}$. Assume that all elements of our data set are floating point numbers and all operations are floating point operations.
$\\$
\begin{enumerate}
\item Multiply each entry in the data set by a common denominator $10^k,k \in Z^+$.
\item Reduce all elements of the data set modulo $3$ and place the values in a duplicate data set $D_3$.
\item Perform the exhaustive search algorithm of section $\ref{exh}$ on equation candidates of size $s$ and degree at most $3-1=2$ that satisfy the data in $D_3$ of the $r$ experiments. For the verification algorithm, use theorem \ref{leftinverse}, modified to apply to finite fields $\cite{ffsolve}$, recording the validated equation candidates over $F_3$ in a list $EqF_3$.
\item Repeat steps 2 and 3 for a finite field of order 5, order 7, $\dots, p$.
\item Take equation candidates of size at most $s$ and degree $d$ such that taking modulo 3 of each exponent yields an equation candidate in $EqF_3$, taking modulo 5 of each exponent yields an equation candidate in $EqF_5$, $\dots$, and taking modulo $p$ of each exponent yields an equation candidate in $EqF_p$, and write them down in $FFV$.
\item Validate the equation candidates in $FFV$ using the original data set and the algorithm denoted in $\ref{leftinverse}$ to obtain the valid equation candidates.
\end{enumerate}
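Steps 2--3 replace the pseudoinverse test with a linear solver over $F_p$. A minimal sketch of such a solver (a generic Gaussian elimination modulo a prime $p$, not the specific algorithm of \cite{ffsolve}):

```python
def solve_mod_p(A, b, p):
    """Gaussian elimination over F_p: return one solution x of
    A x = b (mod p), or None if the system is inconsistent."""
    # Augmented matrix over F_p.
    M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
    rows, cols = len(M), len(M[0]) - 1
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse via Fermat's little theorem
        M[r] = [(a * inv) % p for a in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * ar) % p for a, ar in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    # Inconsistent if a zero row has a nonzero right-hand side.
    if any(all(a == 0 for a in row[:-1]) and row[-1] for row in M[r:]):
        return None
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]
    return x

# 2x + 3y = 1, x + y = 2 over F_5  ->  x = 0, y = 2.
print(solve_mod_p([[2, 3], [1, 1]], [1, 2], 5))   # [0, 2]
```

In the sieve, a candidate is kept for $EqF_p$ exactly when such a solution exists for its reduced data matrix.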
$\\$
Multiplying each entry in the data set by a common denominator $10^k,k \in Z^+$ takes $nr$ operations. Taking the modulo $p$ of each entry takes $nr$O$(1)$ operations. Since $dn \leq r$, from (\ref{tdef}), we see that $t = r$. We then see that $p \leq \sqrt{d} \leq \sqrt{{r / n}}$.
Thus, using (\ref{exhcomp}) and (\ref{exhcomplex}), checking the validity of all candidate equations of size at most $s$ and degree at most $p$ takes approximately
\begin{equation}
\label{ff1}
n^{s\sqrt{\frac{r}{n}}}r^3 \approx e^{O(1)\sqrt{r}s}
\end{equation}
steps.
$\\$
To find all equation candidates in $FFV$ involves first finding equation candidates in $EqF_3$ equivalent, with respect to modulo 3 of the exponents, to equation candidates in $EqF_5$ with respect to modulo 5 of the exponents, and so on, and placing them in $FFV'$. This is accomplished by solving $n$ linear equations, at most $p$ times, on at most $n^{3s}$ equation candidates, which takes $n^{3s+1}$O$(d^2)p$ steps. The size $k=|FFV'|$ is then approximately constant.
$\\$
We also must take into account that the exponents of those equation candidates in $FFV'$ are in modulo $p$. For each exponent, there are $p$ possible exponents because $p \leq \sqrt{d}$. As a result, there are $k\left(n^{s}\right)^p$ equation candidates in $FFV$ that we must validate using the algorithm denoted in $\ref{leftinverse}$. Thus, using (\ref{exhcomp}) and (\ref{exhcomplex}), the running time is \begin{equation}
\label{ff2}
k\left(n^{s}\right)^p r^3 \approx e^{O(1)s\sqrt{r}} r^3.
\end{equation}
$\\$
Thus, adding the running times of (\ref{ff1}) and (\ref{ff2}) yields the running time of the Finite Field Sieve, which is
\begin{equation}
\label{ff_final}
e^{O(1)\sqrt{r}s}.
\end{equation}
\section{Numerical Results}
\label{compmeth}
Equation candidates for the van der Waals equation of state \cite{vanderwaals},
\begin{equation}
\begin{split}
\left(P+a\left(\frac{n}{V}\right)^2 \right)\left(\frac{V}{n}-b\right) = RT \implies
\\
PV^3 - bnPV^2+an^2V-abn^3-RnV^2T = 0,
\\
\text{i.e. } PV^3+k_2nPV^2+k_3n^2V+k_4n^3+k_5nV^2T=0
\\
\text{with } k_2=-b, \ k_3=a, \ k_4=-ab, \ k_5=-R,
\end{split}
\end{equation}
\noindent were generated and verified using simulated data.
$\\$
\noindent The pressure $P$, volume $V$, and number of moles $n$ for 20 different virtual experiments were generated randomly. The van der Waals coefficients $a=2.45 \times 10^{-2},b=2.661\times 10^{-5}$ were the values for hydrogen, and the gas constant was set to $R = 8.3145$. For each experiment, the temperature $T$ was calculated.
$\\$
The set of inputs and outputs that compose our candidate equations is $F = \{P,V,n,T\}$. The data points are recorded in Table \ref{table_Waals}.
\begin{table}[h]
\centering
\caption{Recorded Data Points for Van Der Waals Test} \label{table_Waals}
\begin{tabular}{|c | c | c | c | c|}
\hline
Experiment Number & P & V & T & n \\ [0.5ex]
\hline\hline
1 & 3 & 2 & 2 & 0.186219 \\
\hline
2 & 2 & 3 & 4 & 0.362773\\
\hline
3 & 4 & 4 & 5 & 0.38854\\
\hline
4 & 5 & 1 & 8 & 0.0987221 \\
\hline
5 & 6 & 5 & 1 & 3.60872\\
\hline
6 & 7 & 11 & 2.5 & 3.70502\\
\hline
7 & 9 & 3 & 2.2 & 1.4782\\
\hline
8 & 2.5 & 4.1 & 7.3 & 0.174113\\
\hline
9 & 5.3 & 6.4 & 9.7 & 0.425028\\
\hline
10 & 4.4 & 3.2 & 8.2 & 0.214052\\[1ex]
\hline
\end{tabular}
\end{table}
We searched in the family of algebraic equation candidates of additive size $5$ and exponent order $5$.
The exhaustive search and finite field sieve methods were used to generate candidate equations and apply their respective search methods to find the valid equation candidates.
$\\$
For various valid and invalid candidates related to the van der Waals equation, the existence of the Moore-Penrose inverse and validity of the subsequent equation $(AA^+-I)\vec{b}=0$ for each of their corresponding data matrices was verified in Julia.
$\\$
In determining the existence of the Moore-Penrose inverse, Gaussian elimination is applied to the matrix $A^*A$. In turning the matrix into row echelon form, if the algorithm detects a diagonal entry that is within some bound $\epsilon=0.0001$ of zero, the matrix is determined to be not invertible. If all diagonal entries are outside that bound, the matrix is considered left invertible.
$\\$
The left Moore-Penrose pseudoinverse is calculated as $(A^*A)^{-1}A^*$. $(A^*A)^{-1}$ is calculated by performing the same Gauss-Jordan operations in reducing matrix $A^*A$ to reduced row echelon form to the identity matrix.
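The two numerical steps just described — the pivot test against the bound $\epsilon$ and the Gauss-Jordan computation of $(A^*A)^{-1}A^*$ — can be sketched in a few lines. This is a hedged pure-Python illustration, not the paper's Julia implementation; the function names are ours, and real-valued matrices are assumed so that $A^* = A^T$.

```python
# Illustrative sketch of the left-invertibility test and pseudoinverse
# computation described in the text; helper names are ours.

EPS = 1e-4  # bound used to decide whether a pivot is "zero"

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def gauss_jordan_inverse(M, eps=EPS):
    """Invert M by Gauss-Jordan elimination; return None if some
    pivot falls within eps of zero (matrix treated as singular)."""
    n = len(M)
    # augment M with the identity matrix
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # partial pivoting: pick the largest pivot in this column
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[piv][col]) < eps:
            return None          # not invertible within the bound
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]  # right half is M^{-1}

def left_pseudoinverse(A):
    """A+ = (A^T A)^{-1} A^T, or None when A^T A is singular."""
    At = transpose(A)
    AtA_inv = gauss_jordan_inverse(matmul(At, A))
    return None if AtA_inv is None else matmul(AtA_inv, At)
```

For a full-column-rank $A$, `left_pseudoinverse` returns $A^+$ with $A^+A = I$; for a rank-deficient $A$ it returns `None`, mirroring the paper's "not left invertible" branch.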
$\\$
In determining the validity of some equation candidate, we compute, based on the values of the equation matrix $A$, pseudoinverse $A^+$, and $\vec{b}$, the vector
\begin{equation}
\label{endmat}
\vec{b'} = AA^+ \vec{b}.
\end{equation}
Construct some vector $\vec{c}$ using (\ref{endmat}), such that \begin{equation}
\label{conc}
\vec{c}[i] = \frac{\vec{b}[i] - \vec{b'}[i]}{\vec{b}[i]}, \quad 1 \leq i \leq \dim(\vec{b}).
\end{equation}
After the construction of $\vec{c}$ from (\ref{conc}), if $\lVert \vec{c} \rVert$ is within some bound $\epsilon = 0.0001$ of zero, the associated candidate equation is judged valid. If the norm is outside the bound, the equation candidate is judged invalid.
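The validity test of (\ref{endmat}) and (\ref{conc}) can be illustrated in the special case of a one-column equation matrix $A$, where the pseudoinverse reduces to $A^+ = A^T/(A^TA)$ and no general inversion is needed. This is a hedged Python sketch; the function name and the data values are illustrative, not from the paper.

```python
# Illustrative sketch of the relative-residual validity test.
# For a single-column A, the left pseudoinverse is A+ = A^T / (A^T A).
import math

EPS = 1e-4  # acceptance bound from the text

def is_valid_candidate(a_col, b, eps=EPS):
    """Accept the candidate when ||c|| < eps, where
    c[i] = (b[i] - b'[i]) / b[i] and b' = A A+ b."""
    ata = sum(x * x for x in a_col)        # A^T A (a scalar here)
    if ata < eps:                          # A^T A not invertible
        return False
    coef = sum(x * y for x, y in zip(a_col, b)) / ata   # A+ b
    b_fit = [coef * x for x in a_col]                   # b' = A A+ b
    c = [(bi - bfi) / bi for bi, bfi in zip(b, b_fit)]
    return math.sqrt(sum(ci * ci for ci in c)) < eps
```

When $\vec{b}$ lies exactly in the column space of $A$ the residual vanishes and the candidate is accepted; otherwise $\lVert \vec{c} \rVert$ is large and the candidate is rejected.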
$\\$
Below is a selection of outputs by the exhaustive search algorithm.
$\\$
$\\$
$\vdots$
$\\$
$PV + PVT + NT \implies k_2= 2.3142 \quad k_3 = -2.1348$
$\\$
$PV + PVT + NT^2 \implies k_2 = 1.7452 \quad k_3 = -1.3145$
$\\$
$\vdots$
$\\$
$PV^3+nPV^2+n^2V+n^3+nV^2T \implies k_2= -2.661\times 10^{-5} \quad k_3 = 2.45 \times 10^{-2} $
$\\$
$k_4 = 6.51935 \times 10^{-7} \quad k_5 = -8.3145$
$\\$
$\\$
In applying the exhaustive search and finite field sieve, both algorithms yielded the same valid equation $PV^3+nPV^2+n^2V+n^3+nV^2T$ when $\epsilon = 0.0001$. Both algorithms also rejected all other equation candidates in our search family of equations.
\section{Concluding Remarks}
\label{conclusion}
We have demonstrated a novel approach in deriving compact laws from a data set. In this approach, an algebraic model is derived to verify whether a certain algebraic equation fits the data set. With this model, we solve the problem of determining constants for equations with additive terms. We have also developed algorithms to parse through a family of algebraic equations, one by exhaustive search and the other based on the finite field sieve, which is more efficient. In addition, we prove that both algorithms are guaranteed to converge in finding an equation that describes a data set if the equation belongs to the family in which the algorithms are applied.
$\\$
$\\$
The devised algebraic equation verification algorithm runs in $O(tr^2)$ time. Here $r$ is the number of experiments done, and $t = \max(dn,r)$, with $d$ being the number of additive terms of the candidate, and $m,n$ being the number of input and output variables in the data. We have proved that the question of equation validation with respect to constants is exactly the question of the existence of the pseudoinverse $A^+$ and the calculation of $(AA^+ - I)\vec{b}$ (Theorem \ref{leftinverse}). A numerical method was devised to determine whether a pseudoinverse exists with respect to a bound. Another method was used to determine whether $(AA^+ - I)\vec{b}$ is within some bound of $\vec{0}$ using squared residuals. This algorithm succeeded in validating the van der Waals equation and calculating its constants, while rejecting all other equation candidates. The exhaustive search algorithm runs in $e^{O(1)rs}$ time, where $r$ is the number of experiments in the data and $s$ is the maximum number of additive terms in the family of equations to be searched. The finite field sieve algorithm improves on our exhaustive search algorithm by converting our data into elements of a finite field to reduce the number of equations needed to be searched. This algorithm runs in $e^{O(1)\sqrt{r}s}$ time, which makes the algorithm run in sub-exponential time with respect to the number of experiments. Both algorithms searched through the same family of algebraic equations relating to the van der Waals equation and converged on the valid equation.
$\\$
$\\$
The proposed approach transforms the problem of deriving meaning from data into formulating a linear algebra model and finding equations that fit the data. Such a formulation allows the finding of equation candidates in an explicit mathematical manner. For a certain type of compact theory, our approach assures convergence, with the discovery being computationally efficient and mathematically precise. However, several limitations exist. One is that not all natural laws are in algebraic form. For example, for an RLC circuit \cite{rlc}, which includes exponents, sinusoids, and complex numbers, applying our algorithms to data of this type would yield a Taylor approximation of the equation, which may not be accurate for larger values. Another lies in the fact that the Finite Field Sieve algorithm is still relatively slow due to it being exponential with respect to the maximum number of additive terms to be searched through. These problems may limit the algorithm's effectiveness on certain data sets. To improve this approach, further work includes finding a mathematical analogue of this process applicable to vectors. Currently, all vector algebraic equations must be found component-wise. In addition, applying certain transforms (e.g.,~Fourier or Laplace) on exponential, logarithmic, or sinusoidal equation candidates may expand the number of data types that can fit an equation.
$\\$
$\\$
We believe the most promising direction is determining a search algorithm based on linear algebra such that valid candidate equations can be discovered with high probability. This is because candidate equation evolution depends on multiplication by diagonal matrices, which do not change a matrix's eigenvector space \cite{linalg}. There are several potential search criteria worthy of study as tools for supervised training. This may lead to a probabilistic algorithm that runs in polynomial time. This re-expression of the problem of deriving natural laws from data as a linear algebra model creates enormous potential for refining constant evaluation,
search, and learning algorithms.
\section{Materials}
The Julia code is available with this pre-print on arXiv. The code is distributed under a Creative Commons Attribution 4.0 International Public License. If you use this work please attribute to Wenqing Xu and
Mark Stalzer, Deriving compact laws based on algebraic formulation of a data set, arXiv, 2017.
\end{document}
#include "musicsongslistplaywidget.h"
#include "musicsongtag.h"
#include "musicsongstoolitemrenamedwidget.h"
#include "musicobject.h"
#include "musicuiobject.h"
#include "musicstringutils.h"
#include "musicwidgetutils.h"
#include "musicsettingmanager.h"
#include "musicapplication.h"
#include "musicleftareawidget.h"
#include "musictinyuiobject.h"
#include "musicsplititemclickedlabel.h"
#include "musicwidgetheaders.h"
#include <QTimer>
MusicSongsListPlayWidget::MusicSongsListPlayWidget(int index, QWidget *parent)
: QWidget(parent), m_renameLine(nullptr)
{
QPalette pal = palette();
pal.setBrush(QPalette::Base, QBrush(QColor(0, 0, 0, 40)));
setPalette(pal);
setAutoFillBackground(true);
m_noCover = false;
m_currentPlayIndex = index;
m_totalTimeLabel = QString("/") + MUSIC_TIME_INIT;
QPushButton *addButton = new QPushButton(this);
addButton->setGeometry(2, 25, 16, 16);
addButton->setStyleSheet(MusicUIObject::MQSSTinyBtnPlayLater);
addButton->setCursor(QCursor(Qt::PointingHandCursor));
addButton->setToolTip(tr("playLater"));
m_artistPictureLabel = new QLabel(this);
m_artistPictureLabel->setFixedSize(60, 60);
m_artistPictureLabel->setAttribute(Qt::WA_TranslucentBackground);
m_artistPictureLabel->setGeometry(20, 0, 60, 60);
m_songNameLabel = new MusicSplitItemClickedLabel(this);
m_songNameLabel->setAttribute(Qt::WA_TranslucentBackground);
m_songNameLabel->setStyleSheet(MusicUIObject::MQSSColorStyle01);
m_songNameLabel->setGeometry(85, 5, 200, 25);
m_timeLabel = new QLabel(this);
m_timeLabel->setFixedSize(100, 20);
m_timeLabel->setAttribute(Qt::WA_TranslucentBackground);
m_timeLabel->setStyleSheet(MusicUIObject::MQSSColorStyle01);
m_timeLabel->setGeometry(85, 37, 100, 20);
m_downloadButton = new QPushButton(this);
m_downloadButton->setGeometry(175, 40, 16, 16);
m_downloadButton->setCursor(QCursor(Qt::PointingHandCursor));
m_downloadButton->setToolTip(tr("songDownload"));
currentDownloadStateClicked();
m_showMVButton = new QPushButton(this);
m_showMVButton->setGeometry(211, 39, 16, 16);
m_showMVButton->setStyleSheet(MusicUIObject::MQSSTinyBtnMV);
m_showMVButton->setCursor(QCursor(Qt::PointingHandCursor));
m_showMVButton->setToolTip(tr("showMV"));
m_loveButton = new QPushButton(this);
m_loveButton->setGeometry(231, 40, 16, 16);
m_loveButton->setCursor(QCursor(Qt::PointingHandCursor));
m_loveButton->setToolTip(tr("bestlove"));
currentLoveStateClicked();
m_deleteButton = new QPushButton(this);
m_deleteButton->setGeometry(251, 40, 16, 16);
m_deleteButton->setStyleSheet(MusicUIObject::MQSSTinyBtnDelete);
m_deleteButton->setCursor(QCursor(Qt::PointingHandCursor));
m_deleteButton->setToolTip(tr("deleteMusic"));
m_moreButton = new QPushButton(this);
m_moreButton->setGeometry(271, 39, 16, 16);
m_moreButton->setStyleSheet(MusicUIObject::MQSSPushButtonStyle13 + MusicUIObject::MQSSTinyBtnMore);
m_moreButton->setCursor(QCursor(Qt::PointingHandCursor));
m_moreButton->setToolTip(tr("moreFunction"));
#ifdef Q_OS_UNIX
addButton->setFocusPolicy(Qt::NoFocus);
m_downloadButton->setFocusPolicy(Qt::NoFocus);
m_showMVButton->setFocusPolicy(Qt::NoFocus);
m_loveButton->setFocusPolicy(Qt::NoFocus);
m_deleteButton->setFocusPolicy(Qt::NoFocus);
m_moreButton->setFocusPolicy(Qt::NoFocus);
#endif
QMenu *menu = new QMenu(this);
createMoreMenu(menu);
m_moreButton->setMenu(menu);
connect(m_loveButton, SIGNAL(clicked()), MusicApplication::instance(), SLOT(musicAddSongToLovestListAt()));
connect(m_downloadButton, SIGNAL(clicked()), MusicLeftAreaWidget::instance(), SLOT(musicDownloadSongToLocal()));
connect(m_deleteButton, SIGNAL(clicked()), parent, SLOT(setDeleteItemAt()));
connect(this, SIGNAL(renameFinished(QString)), parent, SLOT(setItemRenameFinished(QString)));
connect(this, SIGNAL(enterChanged(int,int)), parent, SLOT(itemCellEntered(int,int)));
connect(m_showMVButton, SIGNAL(clicked()), parent, SLOT(musicSongPlayedMovieFound()));
connect(addButton, SIGNAL(clicked()), parent, SLOT(musicAddToPlayLater()));
connect(MusicLeftAreaWidget::instance(), SIGNAL(currentLoveStateChanged()), SLOT(currentLoveStateClicked()));
connect(MusicLeftAreaWidget::instance(), SIGNAL(currentDownloadStateChanged()), SLOT(currentDownloadStateClicked()));
}
MusicSongsListPlayWidget::~MusicSongsListPlayWidget()
{
delete m_renameLine;
delete m_artistPictureLabel;
delete m_songNameLabel;
delete m_timeLabel;
delete m_loveButton;
delete m_deleteButton;
delete m_showMVButton;
delete m_downloadButton;
delete m_moreButton;
}
void MusicSongsListPlayWidget::updateTimeLabel(const QString &current, const QString &total)
{
if(m_totalTimeLabel.contains(MUSIC_TIME_INIT))
{
m_totalTimeLabel = total;
}
m_timeLabel->setText(current + m_totalTimeLabel);
}
void MusicSongsListPlayWidget::updateCurrentArtist()
{
if(!m_noCover && M_SETTING_PTR->value(MusicSettingManager::OtherUseAlbumCover).toBool())
{
return;
}
const QString &name = m_songNameLabel->toolTip().trimmed();
if(!showArtistPicture(MusicUtils::String::artistName(name)) && !showArtistPicture(MusicUtils::String::songName(name)))
{
m_artistPictureLabel->setPixmap(QPixmap(":/image/lb_defaultArt").scaled(60, 60));
}
}
void MusicSongsListPlayWidget::setParameter(const QString &name, const QString &path, QString &time)
{
MusicSongTag tag;
const bool state = tag.read(path);
m_songNameLabel->setText(MusicUtils::Widget::elidedText(font(), name, Qt::ElideRight, 198));
m_songNameLabel->setToolTip(name);
if(state)
{
time = tag.getLengthString();
m_totalTimeLabel = "/" + time;
}
m_timeLabel->setText(MUSIC_TIME_INIT + m_totalTimeLabel);
if(state && M_SETTING_PTR->value(MusicSettingManager::OtherUseAlbumCover).toBool())
{
QPixmap pix = tag.getCover();
if(pix.isNull())
{
m_noCover = true;
}
else
{
m_noCover = false;
m_artistPictureLabel->setPixmap(pix.scaled(60, 60));
return;
}
}
if(!showArtistPicture(MusicUtils::String::artistName(name)) && !showArtistPicture(MusicUtils::String::songName(name)))
{
m_artistPictureLabel->setPixmap(QPixmap(":/image/lb_defaultArt").scaled(60, 60));
}
}
void MusicSongsListPlayWidget::setItemRename()
{
m_renameLine = new MusicSongsToolItemRenamedWidget(m_songNameLabel->toolTip(), this);
connect(m_renameLine, SIGNAL(renameFinished(QString)), SLOT(setChangItemName(QString)));
m_renameLine->setGeometry(85, 5, 200, 25);
m_renameLine->show();
}
void MusicSongsListPlayWidget::deleteRenameItem()
{
delete m_renameLine;
m_renameLine = nullptr;
}
void MusicSongsListPlayWidget::setChangItemName(const QString &name)
{
m_songNameLabel->setText(MusicUtils::Widget::elidedText(font(), name, Qt::ElideRight, 198));
m_songNameLabel->setToolTip(name);
Q_EMIT renameFinished(name);
QTimer::singleShot(MT_MS, this, SLOT(deleteRenameItem()));
}
void MusicSongsListPlayWidget::currentLoveStateClicked()
{
const bool state = MusicApplication::instance()->musicLovestContains();
m_loveButton->setStyleSheet(state ? MusicUIObject::MQSSTinyBtnLove : MusicUIObject::MQSSTinyBtnUnLove);
}
void MusicSongsListPlayWidget::currentDownloadStateClicked()
{
bool state = false;
MusicApplication::instance()->musicDownloadContains(state);
m_downloadButton->setStyleSheet(state ? MusicUIObject::MQSSTinyBtnDownload : MusicUIObject::MQSSTinyBtnUnDownload);
}
void MusicSongsListPlayWidget::enterEvent(QEvent *event)
{
QWidget::enterEvent(event);
Q_EMIT enterChanged(m_currentPlayIndex, -1);
}
void MusicSongsListPlayWidget::createMoreMenu(QMenu *menu)
{
menu->setStyleSheet(MusicUIObject::MQSSMenuStyle02);
QMenu *addMenu = menu->addMenu(QIcon(":/contextMenu/btn_add"), tr("addToList"));
addMenu->addAction(tr("musicCloud"));
menu->addAction(QIcon(":/contextMenu/btn_similar"), tr("similar"), parent(), SLOT(musicPlayedSimilarQueryWidget()));
menu->addAction(QIcon(":/contextMenu/btn_share"), tr("songShare"), parent(), SLOT(musicSongPlayedSharedWidget()));
menu->addAction(QIcon(":/contextMenu/btn_kmicro"), tr("KMicro"), parent(), SLOT(musicSongPlayedKMicroWidget()));
}
bool MusicSongsListPlayWidget::showArtistPicture(const QString &name) const
{
QPixmap originPath(QString(ART_DIR_FULL + name + SKN_FILE));
if(!originPath.isNull())
{
m_artistPictureLabel->setPixmap(originPath.scaled(60, 60));
return true;
}
return false;
}
\begin{document}
\title{\textbf{Stable blow-up solutions for the SO(d)-equivariant supercritical Yang-Mills heat flow}}
\author{Yezhou Yi\footnote{School of Mathematical Sciences, University of Science and Technology of China, Hefei,
Anhui, 230026, PR China, yiyezh@mail.ustc.edu.cn}}
\date{}
\maketitle
\begin{onecolabstract}
\textbf{ABSTRACT}. We consider the $SO(d)$-equivariant Yang-Mills heat flow \begin{equation*}
\partial_t u-\partial_r^2 u-\frac{(d-3)}{r}\partial_r u+\frac{(d-2)}{r^2}u(1-u)(2-u)=0
\end{equation*} in dimensions $d>10.$ We construct a family of $\mathcal{C}^{\infty}$ solutions which blow up in finite time via concentration of a universal profile \begin{equation*}
u(t,r)\sim Q\left(\frac{r}{\lambda(t)}\right),
\end{equation*}where $Q$ is a stationary solution of the equation and the blow-up rates are quantized by \begin{equation*}
\lambda(t)\sim c_{u}(T-t)^{\frac{l}{\gamma}},\,\,\,l\in \mathbb{N}_{+},\,\,\,2l>\gamma\in (1,2).
\end{equation*}
Moreover, such solutions are in fact $(l-1)$-codimension stable under perturbation of the initial data. In particular, the case $l=1$ corresponds to a stable blow-up regime.
\end{onecolabstract}
\section{Introduction}
Let $E$ be a principal fibre bundle over a $d$-dimensional Riemannian manifold $M$, with a semi-simple Lie group $G$ as structure group. Denote by $AdE$ the adjoint bundle of $E$; a connection on $E$ is the sum of a fixed background connection and a map from $M$ to $AdE\otimes T^*M.$ Denote by $\mathcal{G}$ the Lie algebra of $G.$ Locally, a connection $A$ is a $\mathcal{G}$-valued 1-form on the coordinate patches $U_{\alpha}$ of $M,$ written $A=A_{j}(x)\,\mathrm{d}x^j$ with $A_j\,\colon\,U_{\alpha}\rightarrow \mathcal{G}.$ Denote by $D_A$ the covariant derivative with respect to $A$; the curvature $F_A$ of a connection $A$ is defined by $F_A=D_A A.$ Locally it is the $\mathcal{G}$-valued 2-form $F_{j,k}\, \mathrm{d}x^j\mathrm{d}x^k,$ where
\begin{equation*}
F_{j,k}:=\partial_j A_k-\partial_k A_j+[A_j,A_k].
\end{equation*}
The Yang-Mills functional $\mathcal{F}$ is defined by
\begin{equation*}
\mathcal{F}(A)=\int_{M} F_{j,k}F^{j,k}\,\mathrm{d}vol_{M},
\end{equation*}
which is invariant under gauge transformations. The associated Euler-Lagrange equations read
\begin{equation}\label{Yang-Mills connections equation}
D^jF_{j,k}=0,
\end{equation}
where $D_j:=\partial_j+[A_j,\cdot].$ Solutions of (\ref{Yang-Mills connections equation}) are referred to as Yang-Mills connections. One way to find Yang-Mills connections is to study the $L^2$-gradient flow associated with $\mathcal{F},$ i.e. the initial value problem
\begin{equation}\label{eq of gradient flow}
\left\{\begin{aligned}
&\partial_tA_j(t,x)=-D^kF_{j,k}(t,x)\\ &A_j(0,x)=A_{0j}(x)
\end{aligned}\right.
\end{equation}
for some initial connection $A_0.$ (\ref{eq of gradient flow}) is referred to as the Yang-Mills heat flow.\par
In this paper, we consider (\ref{eq of gradient flow}) in the situation $M=\mathbb{R}^d,$ $G=SO(d),$ $E$ is the trivial bundle $\mathbb{R}^d\times SO(d),$ and we investigate connections given by
\begin{equation}\label{def of sod equivariant connnection}
A_j(x)=\frac{u(r)}{r^2}\sigma_j(x),
\end{equation}
where $r=|x|,$ $u$ is a real-valued function on $[0,\infty),$ and $\{\sigma_j\}_{j=1}^d$ are a basis for the Lie algebra $so(d),$ given by
\begin{equation*}
(\sigma_j)_{\beta}^{\alpha}=\delta_j^{\alpha}x^{\beta}-\delta_j^{\beta}x^{\alpha},\,\,\,\text{for}\,\,\,1\le \alpha, \beta\le d.
\end{equation*}
Note that connections satisfying (\ref{def of sod equivariant connnection}) are equivariant with respect to the $SO(d)$-action, and are referred to as $SO(d)$-equivariant connections.
In this situation, (\ref{eq of gradient flow}) becomes
\begin{equation}\label{eq:Y-M heat}
\left\{\begin{aligned}
&\partial_t u-\partial_r^2 u-\frac{(d-3)}{r} \partial_r u+\frac{(d-2)}{r^2}u(1-u)(2-u)=0\\ &u(0,\cdot)=u_0(\cdot)
\end{aligned}\right. .
\end{equation}We credit Dumitrascu \cite{dumitrascu1982equivariant} for the first derivation of equivariant super-critical Yang-Mills equation, the readers can also refer to Weinkove \cite{weinkove2004singularity} for more details.\par
Let us briefly explain the meaning of energy supercritical. For any $\lambda>0,$ if $u(t,r)$ is a solution of (\ref{eq:Y-M heat}), then $u(\frac{t}{\lambda^2},\frac{r}{\lambda})$ is also a solution. Denote the energy functional of (\ref{eq:Y-M heat}) as
\[E(u(t)):=\frac12 \int_{0}^{+\infty} \left( |\partial_r u|^2+\frac{(d-2)u^2(2-u)^2}{2r^2}\right)r^{d-3}\,\mathrm{d}r,\]
then we have for any radial function $u_0\,\colon\, \mathbb{R}^d\rightarrow \mathbb{R},$
\[E\left(u_0\left(\frac{r}{\lambda}\right)\right)=\lambda^{d-4}E(u_0(r)).\]
Therefore, $d\ge 5$ corresponds to the energy supercritical cases, while $d=4$ corresponds to energy-critical case.\par
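For the reader's convenience, the scaling identity follows directly from the substitution $y=\frac{r}{\lambda}$ (so that $\mathrm{d}r=\lambda\,\mathrm{d}y$ and $r^{d-3}=\lambda^{d-3}y^{d-3}$):
\begin{equation*}
E\left(u_0\left(\frac{r}{\lambda}\right)\right)=\frac12 \int_{0}^{+\infty}\left(\frac{1}{\lambda^2}|\partial_y u_0(y)|^2+\frac{(d-2)u_0^2(y)(2-u_0(y))^2}{2\lambda^2 y^2}\right)\lambda^{d-3}y^{d-3}\,\lambda\,\mathrm{d}y=\lambda^{d-4}E(u_0).
\end{equation*}\par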
Historically, there has been a lot of work devoted to the study of the Yang-Mills heat flow. In the case $d=2$ or $3,$ R\aa de \cite{rade1992yang} proved that the flow of (\ref{eq of gradient flow}) exists for all time and converges to a Yang-Mills connection. In the case $d=4,$ global existence of solutions of (\ref{eq:Y-M heat}) was established by Schlatter, Struwe and Tahvildar-Zadeh \cite{schlatter1998global}, and by Waldron \cite{waldron2019long} for more general geometric situations of (\ref{eq of gradient flow}). In the case $d\ge 5,$ solutions of (\ref{eq:Y-M heat}) may blow up in finite time; see the works of Naito \cite{naito1994finite}, Grotowski \cite{grotowski2001finite} and Gastel \cite{gastel2002singularities}. However, these works did not describe the structure of the blow-up solutions, which this paper manages to do. Weinkove \cite{weinkove2004singularity} investigated the nature of singularities of the Yang-Mills heat flow over a compact manifold and showed that, under some assumptions on the blow-up rate, homothetically shrinking solitons appear as blow-up limits at singular points. Such objects correspond to self-similar solutions of the Yang-Mills heat flow on the trivial bundle over $\mathbb{R}^d,$ which were also described explicitly in Section 4 of \cite{weinkove2004singularity} for $5\le d\le 9.$ For Weinkove's self-similar blow-up solutions of (\ref{eq:Y-M heat}), Donninger and Sch\"orkhuber \cite{donninger2019stable} and Glogi{\'c} and Sch{\"o}rkhuber \cite{glogic2020nonlinear} proved that these blow-ups are stable when $5\le d\le 9.$\par
In this paper, we will construct blow-up solutions of (\ref{eq:Y-M heat}) for $d>10.$ In fact, high-dimensional blow-up phenomena have been widely studied for various types of partial differential equations. For the semilinear heat equation
\begin{equation*}
\partial_t u=\Delta u+|u|^{p-1}u
\end{equation*}
with $d\ge 11$ and $p>1+\frac{4}{d-4-2\sqrt{d-1}},$ Herrero and Vel{\'a}zquez \cite{herrero1994explosion} formally showed the existence of type II blow-up with
\begin{equation*}
\|u(t)\|_{L^{\infty}}\sim \frac{1}{(T-t)^{\frac{2\alpha l}{p-1}}},\,\,\,l\in \mathbb{N}_{+},\,\,\,2\alpha l>1.
\end{equation*}
The formal result was clarified by the works of Mizoguchi \cite{mizoguchi2007rate}, Matano and Merle \cite{matano2009classification}, and Collot \cite{collot2017nonradial}. For the energy supercritical nonlinear Schr\"odinger equation in dimensions $d\ge 11,$ Merle, Rapha{\"e}l and Rodnianski \cite{merle2015type} constructed smooth blow-up solutions via a robust energy method. For the energy critical focusing nonlinear Schr\"odinger equation in space dimensions $d\ge 7,$ Jendrej \cite{jendrej2017construction} proved the existence of pure two-bubble solutions, where one bubble develops at scale $1$ while the length scale of the other converges to $0.$ He used an energy-virial functional along with a well-designed approximation to operators which eliminates the unbounded part; this method has been used successfully for other dispersive equations, for example the energy critical wave equation in $d=6.$ For the energy supercritical wave equation in $d\ge 11,$ Collot \cite{collot2018type} constructed one-bubble blow-up in the radial case. For high-dimensional harmonic heat flow, Ghoul, Ibrahim and Nguyen \cite{ghoul2018stability} showed that one-bubble blow-up exists for the 1-corotational supercritical harmonic heat flow in $d\ge 7.$ Also, when $d=7,$ Ghoul \cite{ghoul2017stable} proved there exists the same blow-up structure with the blow-up rate
\begin{equation}\label{critical blow up rate}
\lambda(t)\simeq \frac{\sqrt{(T-t)}}{|\log (T-t)|}.
\end{equation}
For energy supercritical wave maps with $d\ge 7,$ one may refer to Ghoul, Ibrahim and Nguyen \cite{ghoul2018construction}. However, whether a similar blow-up phenomenon occurs has remained open for the Yang-Mills heat flow.\par
Next we introduce the main result of this paper. Denote $Q(r)$ as the ground state solution of (\ref{eq:Y-M heat}),
i.e. it satisfies the equation
\begin{equation}\label{eq:Y-M heat ground state}
\left\{ \begin{aligned}
&-\partial_r^2 Q-\frac{(d-3)}{r}\partial_r Q+(d-2)\frac{Q(1-Q)(2-Q)}{r^2}=0\\
&Q(0)=\partial_r Q(0)=0
\end{aligned}\right. .
\end{equation}
In the author's previous work \cite{yi2021asymptotic}, it was shown that (\ref{eq:Y-M heat ground state}) admits a solution which satisfies the asymptotics (when $d>10$)
\begin{equation}\label{asymp: ground state}
Q(r)=\left\{\begin{aligned}
&\frac{1}{2}r^2+O(r^4)\,\,\,\text{as}\,\,\,r\rightarrow 0\\
&1-\alpha r^{-\gamma}(1+O(r^{-2\gamma}))\,\,\,\text{as}\,\,\,r\rightarrow \infty
\end{aligned}\right. ,
\end{equation}
where
\begin{equation}\label{def of gamma}
\alpha>0,\,\,\,\gamma=\gamma(d)=\frac{d-4-\sqrt{(d-6)^2-12}}{2}.
\end{equation}
Note that when $d>10,$ $\gamma\in (1,2).$\par
Our goal is to study potential blow-up phenomenon of (\ref{eq:Y-M heat}), and our main result is the following.
\begin{thm}\label{main thm}
Let $d>10,$ $\gamma$ as in (\ref{def of gamma}), let $l>\frac{\gamma}{2}$ be a large integer, denote
\begin{equation}\label{def: def of h}
\hbar:=\left[\frac{1}{2}\Big(\frac{d-2}{2}-\gamma\Big)\right],
\end{equation} Given a large integer $L\gg l,$ define $\Bbbk:=L+\hbar+1.$ Then there exists smooth radial initial data $u_0$ such that the corresponding solution to (\ref{eq:Y-M heat}) has the decomposition
\begin{equation}\label{eq: form of the solution}
u(t,r)=Q\left(\frac{r}{\lambda(t)}\right)+q\left(t,\frac{r}{\lambda(t)}\right),
\end{equation}
where \begin{equation}
\lambda(t)=c(u_0)(T-t)^{\frac{l}{\gamma}}(1+o_{t\rightarrow T}(1))\,\,\,\text{with}\,\,\, c(u_0)>0,
\end{equation}
and \begin{equation}
\lim\limits_{t\rightarrow T} \|\nabla^{\sigma} q(t)\|_{L^2(r^{d-3}\mathrm{d}r)}=0\,\,\,\text{for all}\,\,\,\sigma\in \left[2\hbar+4,2\Bbbk\right].
\end{equation}
Moreover, the blow-up solution is $(l-1)$-codimension stable.
\end{thm}\par
\begin{rk}
\normalfont
Let us briefly explain what $(l-1)$-codimension stable means. Our initial data is of the form
\begin{equation}\label{eq: form of initial data}
u_0=Q_{b(0)}+q_0,
\end{equation}
where $Q_b$ is a deformation of $Q$ and $b=(b_1,\cdots,b_L)$ corresponds to possible unstable directions in a suitable neighborhood of $Q.$ We will prove that for all $q_0\in \dot H^{\sigma}\cap \dot H^{2\Bbbk}$ small enough, for all $(b_1(0),b_{l+1}(0),\cdots,b_L(0))$ small enough, there exists a choice of unstable directions $(b_2(0),\cdots,b_l(0))$ such that the solution of (\ref{eq:Y-M heat}) with initial data (\ref{eq: form of initial data}) satisfies the conclusion of Theorem \ref{main thm}. This implies the constructed solution is $(l-1)$-codimension stable. In particular, the case $l=1$ corresponds to a stable blow-up regime.
\end{rk}\par
\begin{rk}
\normalfont
The restriction $d>10$ is due to technical reasons, so that $1<\gamma<2,$ $0<\delta:=\frac{1}{2}\Big(\frac{d-2}{2}-\gamma\Big)-\hbar<1,$ and $d-2\gamma>6,$ which ensure that the estimates involved are good enough.
\end{rk}
It appears that the Yang-Mills heat flow shares similar properties with the harmonic heat flow in high dimensions, so this paper borrows techniques from Ghoul et al. \cite{ghoul2018stability}. Note that the main ideas of this paper, along with those of Ghoul et al. \cite{ghoul2018stability}, originate from Merle, Rapha{\"e}l and Rodnianski \cite{merle2015type}, which may seem surprising since they studied blow-up for the supercritical nonlinear Schr\"odinger equation. However, there are technical difficulties in obtaining appropriate energy estimates that decay well enough to close the bootstrap after integration, especially when dealing with the nonlinear term in the energy estimates; these may be considered the original contributions of this paper.\par
A closely related topic is the blow-up behavior for hyperbolic version of (\ref{eq:Y-M heat}):
\begin{equation}\label{eq: Yang-Mills hyperbolic}
\partial_t^2 u-\partial_r^2 u-\frac{(d-3)}{r} \partial_r u+\frac{(d-2)}{r^2}u(1-u)(2-u)=0,
\end{equation}
where we omit more general geometric backgrounds. Historically, the existence of blow-up for (\ref{eq: Yang-Mills hyperbolic}) was first proved by Cazenave, Shatah and Tahvildar-Zadeh in \cite{cazenave1998harmonic}, who constructed singular traveling waves by using self-similar solutions. The self-similar blow-up for (\ref{eq: Yang-Mills hyperbolic}) was proved to be stable for all odd dimensions $d\ge 5,$ see Donninger \cite{donninger2014stable} and Glogi{\'c} \cite{glogic2021stable}. Rapha{\"e}l and Rodnianski \cite{raphael2012stable} constructed stable one-bubble blow-up when $d=4,$ i.e. the energy-critical case. Also in dimension four, Jendrej \cite{jendrej2017construction} constructed two bubbles, and Krieger, Schlag and Tataru \cite{krieger2009renormalization} showed the existence of a family of one-bubble solutions whose blow-up rates modify the self-similar rate by a power of logarithm. We conjecture that one may establish results similar to Theorem \ref{main thm} for (\ref{eq: Yang-Mills hyperbolic}) with exactly the same blow-up rate $\lambda.$\par
This paper is organized as follows. In Section \ref{On the linearized operator L}, we present fundamental calculations on the linearized operator of (\ref{eq:Y-M heat}) and establish coercivity properties which are crucial for both the modulation estimates and the energy estimates later. In Section \ref{Construction of an approximate solution}, we construct an approximate solution and estimate the error terms. In Section \ref{Linearization bk for k from 1 to l}, we perturb the modulation parameter equation by a set of easily found solutions. The later sections follow a somewhat standard modulation-method procedure: we decompose the solution, describe the initial data, make bootstrap assumptions, estimate the modulation parameters, and obtain the crucial monotone energy estimates which allow us to improve the bootstrap estimates; finally, a topological argument concludes the main proposition, Proposition \ref{existence od sol trapped for large rescaled time}.
\subsection*{Notations}\label{Structure of the ground state and notation}
Now we introduce some notations. Denote $\Lambda Q(y):=y\partial_y Q(y).$ By (\ref{asymp: ground state}), we have
\begin{equation}\label{asymp: Lambda Q}
\Lambda Q(y)=\left\{ \begin{aligned}
&y^2+O(y^4)\,\,\,\text{as}\,\,\, y\rightarrow 0\\
&\frac{\alpha \gamma}{y^{\gamma}}\left(1+O\left(\frac{1}{y^{2\gamma}}\right)\right)\,\,\,\text{as}\,\,\, y\rightarrow \infty
\end{aligned}\right. .
\end{equation}
Denote the linearized operator $\mathscr{L}:=-\partial_y^2-\frac{(d-3)}{y}\partial_y +\frac{Z(y)}{y^2},$ where $Z(y):=(d-2)f'(Q(y))$ and $f(u):=u(1-u)(2-u).$ Note that by substituting $Q_\lambda(r):=Q(\frac{r}{\lambda})$ into (\ref{eq:Y-M heat ground state}) in the variable $y=\frac{r}{\lambda}$ and acting on $\partial_{\lambda}|_{\lambda=1}$, one gets $\mathscr{L}(\Lambda Q)=0.$
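For completeness, this identity can be checked explicitly: since $\partial_{\lambda}\big|_{\lambda=1}Q\left(\frac{r}{\lambda}\right)=-r\partial_r Q(r)=-\Lambda Q(r),$ and $Q_\lambda$ solves the static equation for every $\lambda>0,$ differentiating that equation at $\lambda=1$ gives
\begin{equation*}
0=\partial_{\lambda}\Big|_{\lambda=1}\left(-\partial_r^2 Q_{\lambda}-\frac{(d-3)}{r}\partial_r Q_{\lambda}+(d-2)\frac{f(Q_{\lambda})}{r^2}\right)=\mathscr{L}(-\Lambda Q)=-\mathscr{L}(\Lambda Q).
\end{equation*}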
For any two radial functions $f_1$ and $f_2$, denote their inner product as $\langle f_1,f_2 \rangle:=\int_0^{\infty} f_1(y)f_2(y)y^{d-3}\,\mathrm{d}y.$ For convenience, let $\int f_1:=\int_{0}^{\infty} f_1(y) y^{d-3}\,\mathrm{d}y.$
Define $\chi$ as a smooth radial cut-off function such that $\chi(y)=1$ for $0\le y\le 1,$ $\chi(y)=0$ for $y\ge 2$ and $0<\chi(y)<1$ for $1<y<2.$ Then we denote $\chi_M(y):=\chi(\frac{y}{M}).$
Denote $T_k:=(-1)^k (\mathscr{L}^{-1})^k (\Lambda Q),$ for $0\le k\le L.$
For any smooth radial function $g,$ denote $g_{2k}:=\mathscr{L}^k g,$ $g_{2k+1}:=\mathscr{A}\mathscr{L}^k g,$ for any $k\in \mathbb{N}.$ Denote $\mathscr{L}_{\lambda}:=-\partial_r^2-\frac{(d-3)}{r}\partial_r+\frac{Z_{\lambda}(r)}{r^2},$ etc.; then we write $g_{2k}^*:=\mathscr{L}_{\lambda}^k g,$ $g_{2k+1}^*:=\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^k g.$
For any $b_1>0,$ define $B_0:=b_1^{-\frac{1}{2}},$ $B_1:=B_0^{1+\eta},$ where $\eta\ll 1$ is to be chosen later.\par
\section*{\centerline{Acknowledgement}}
The author was supported by the NSFC Grant No. 11771415.
\section{On the linearized operator $\mathscr{L}$}\label{On the linearized operator L}
In this section, we make preparations related to $\mathscr{L}.$ We shall omit detailed proofs when they follow from direct computation.
\subsection{Decomposition, kernel and computation of the inverse of $\mathscr{L}$}
\begin{lem}\label{factorization of L}
$\mathscr{L}$ admits the factorization $\mathscr{L}=\mathscr{A^*}\mathscr{A}$ with
\begin{align}
&\mathscr{A}\omega:=\left(-\partial_y+\frac{V(y)}{y}\right)\omega=-\Lambda Q\partial_y \left(\frac{\omega}{\Lambda Q}\right),\label{def: def of A}\\
&\mathscr{A^*}\omega:=\left(\partial_y +\frac{d-3+V(y)}{y}\right)\omega=\frac{1}{y^{(d-3)} \Lambda Q}\partial_y (y^{(d-3)} \Lambda Q \omega),\label{def: def of A*}
\end{align}
where
\begin{equation}\label{def: def of V}
V(y):=\Lambda \ln(\Lambda Q)=\left\{\begin{aligned}
&2+O(y^2)\,\,\,\text{as}\,\,\,y\rightarrow 0\\
&-\gamma+O\left(\frac{1}{y^{2\gamma}}\right)\,\,\,\text{as}\,\,\,y\rightarrow \infty
\end{aligned}\right. .
\end{equation}
\end{lem}
\begin{rk}
Note that
\begin{equation}\label{commutator between L and Lambda}
\left[\mathscr{L},\Lambda\right]=2\mathscr{L}-\frac{\Lambda Z(y)}{y^2}.
\end{equation}
\end{rk}
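The commutator identity (\ref{commutator between L and Lambda}) can be checked by exact symbolic computation. The following sketch verifies it on Laurent polynomials with rational coefficients; the dimension $d=7$ and the sample data $u$, $Z$ below are arbitrary choices for testing, not quantities from the paper:

```python
from fractions import Fraction

# Laurent polynomials in y, encoded as {exponent: coefficient} dicts
def padd(*ps):
    r = {}
    for p in ps:
        for e, c in p.items():
            r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def pmul(a, b):
    r = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            r[ea + eb] = r.get(ea + eb, 0) + ca * cb
    return {e: c for e, c in r.items() if c != 0}

def pscale(a, k):
    return {e: k * c for e, c in a.items() if k * c != 0}

def shift(a, k):  # multiplication by y^k
    return {e + k: c for e, c in a.items()}

def pdiff(a):
    return {e - 1: e * c for e, c in a.items() if e != 0}

d = 7  # sample dimension

def L(u, Z):
    # L u = -u'' - (d-3)/y u' + Z/y^2 u
    return padd(pscale(pdiff(pdiff(u)), -1),
                pscale(shift(pdiff(u), -1), -(d - 3)),
                shift(pmul(Z, u), -2))

def Lam(u):
    # Lambda u = y u'
    return {e: e * c for e, c in u.items() if e != 0}

u = {4: Fraction(3), 1: Fraction(-2), -2: Fraction(5)}
Z = {2: Fraction(1), 0: Fraction(4), -1: Fraction(-1)}

commutator = padd(L(Lam(u), Z), pscale(Lam(L(u, Z)), -1))  # [L, Lambda] u
claimed = padd(pscale(L(u, Z), 2),
               pscale(shift(pmul(Lam(Z), u), -2), -1))      # (2L - Lambda Z / y^2) u
```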
\begin{rk}
Denote $\widetilde{\mathscr{L}}:=\mathscr{A}\mathscr{A^*}.$ By Lemma \ref{factorization of L},
\begin{equation}\label{exp of tilde L}
\widetilde{\mathscr{L}}=-\partial_y^2-\frac{(d-3)}{y}\partial_y+\frac{\widetilde{Z}(y)}{y^2},
\end{equation}
where
\begin{equation}\label{def: def of tilde Z}
\widetilde{Z}(y):=(V+1)^2+(d-4)(V+1)-\Lambda V.
\end{equation}
\end{rk}
Next we find the kernel of $\mathscr{L}.$ If $\mathscr{L}\Gamma=0,$ then $\mathscr{A}\Gamma$ lies in the kernel of $\mathscr{A^*}.$ By (\ref{def: def of A*}), $\mathscr{A}\Gamma\in Span\{\frac{1}{y^{d-3}\Lambda Q}\}.$ By definition (\ref{def: def of A}), we can impose that
\[-\partial_y \Gamma+\frac{V(y)}{y}\Gamma=\frac{-1}{y^{d-3}\Lambda Q}.\]
It has a solution of the form
\[\Gamma(y)=\Lambda Q(y)\int_{1}^{y}\frac{\mathrm{d}\xi}{\xi^{d-3}(\Lambda Q(\xi))^2},\]
and the asymptotics
\begin{equation}\label{asymp: Gamma}
\Gamma(y)\simeq\left\{\begin{aligned}
&\frac{c}{y^{d-2}}\,\,\,\text{as}\,\,\,y\rightarrow 0\\
&\frac{c}{y^{d-4-\gamma}}\,\,\,\text{as}\,\,\,y\rightarrow \infty
\end{aligned}\right. .
\end{equation}\par
Next we turn to the inverse of $\mathscr{L}.$ By standard ODE theory, for any radial function $g$,
\begin{equation}\label{standard inverse of L}
\mathscr{L}^{-1}g=-\Gamma(y)\int_{0}^{y} g(x)\Lambda Q(x)x^{d-3}\,\mathrm{d}x+\Lambda Q(y)\int_{0}^{y} g(x)\Gamma(x)x^{d-3}\,\mathrm{d}x.
\end{equation}
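Let us record the normalization underlying (\ref{standard inverse of L}): since $\frac{V(y)}{y}=\frac{(\Lambda Q)'(y)}{\Lambda Q(y)}$ by the definition of $V$, the defining relation $\mathscr{A}\Gamma=-\frac{1}{y^{d-3}\Lambda Q}$ rewrites as the Wronskian identity
\begin{equation*}
\Gamma'\Lambda Q-\Gamma(\Lambda Q)'=\frac{1}{y^{d-3}},
\end{equation*}
which is exactly the normalization making the variation-of-parameters formula (\ref{standard inverse of L}) satisfy $\mathscr{L}(\mathscr{L}^{-1}g)=g.$\par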
For the convenience of calculation, the following two-step method computes $\mathscr{L}^{-1}$.
\begin{lem}\label{two step method to calculate the inverse of L}
Let $g\in \mathcal{C}_{rad}^{\infty},$ then $\mathscr{L}\omega=g$ can be solved by
\begin{align*}
\mathscr{A}\omega&=\frac{1}{y^{d-3}\Lambda Q}\int_{0}^{y} g(x)\Lambda Q(x)x^{d-3}\,\mathrm{d}x,\\
\omega&=-\Lambda Q\int_{0}^{y} \frac{\mathscr{A}\omega(x)}{\Lambda Q(x)}\,\mathrm{d}x.
\end{align*}
\end{lem}
\begin{pf}
\normalfont
Apply $\mathscr{A}$ to (\ref{standard inverse of L}) and use (\ref{def: def of A}).
\end{pf}
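As a toy illustration of Lemma \ref{two step method to calculate the inverse of L} (a sketch only: we replace $\Lambda Q$ by its leading-order model $y^2$ near the origin, so that $V\equiv 2$ and $\mathscr{A^*}\omega=\partial_y\omega+\frac{d-1}{y}\omega$; the choices $d=7$ and $g=y^4$ are arbitrary test data):

```python
from fractions import Fraction

d = 7  # sample dimension; any value works for this algebraic check

# monomials c*y^e encoded as (c, e); toy model Lambda Q = y^2, hence V = 2
def integrate0(c, e):
    # int_0^y c x^e dx = c/(e+1) y^{e+1}   (valid for e > -1)
    return (Fraction(c) / (e + 1), e + 1)

g = (Fraction(1), 4)  # right-hand side g(y) = y^4

# Step 1: A w = (y^{d-3} Lambda Q)^{-1} int_0^y g(x) Lambda Q(x) x^{d-3} dx
c, e = integrate0(g[0], g[1] + 2 + (d - 3))
Aw = (c, e - (d - 3) - 2)

# Step 2: w = -Lambda Q(y) int_0^y A w(x) / Lambda Q(x) dx
c, e = integrate0(Aw[0], Aw[1] - 2)
w = (-c, e + 2)

# Check: A*(A w) = g, where A* v = v' + (d-1)/y v in this toy model
AstarAw = (Aw[0] * Aw[1] + (d - 1) * Aw[0], Aw[1] - 1)
```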
\subsection{Coercivity of $\mathscr{L}$}
The proof of the following Hardy inequality is similar to Lemma B.1 in \cite{merle2015type}.
\begin{lem}\label{hardy ineq}
Let $\alpha>0,$ $\alpha\neq \frac{d-4}{2}$ and $u\in \mathcal{D}_{rad}:=\{u\in \mathcal{C}_{c}^{\infty}\,\text{with radial symmetry}\},$ then
\begin{equation*}
\int_{1}^{\infty} \frac{|\partial_y u|^2}{y^{2\alpha}}\ge \Big(\frac{d-(2\alpha+4)}{2}\Big)^2 \int_{1}^{\infty} \frac{u^2}{y^{2+2\alpha}}-C_{\alpha,d}u^2(1).
\end{equation*}
\end{lem}
Then one can follow the road map of Appendix A in \cite{ghoul2018stability}. First, we establish the coercivity of $\mathscr{A^*}$ as follows.
\begin{lem}\label{coercivity of A*}
Let $\alpha\ge 0.$ There exists $C_{\alpha}>0$ such that for all $u\in \mathcal{D}_{rad},$ $i=0,$ $1,$ $2,$
\begin{equation*}
\int \frac{|\mathscr{A^*}u|^2}{y^{2i}(1+y^{2\alpha})}\ge C_{\alpha} \left(\int \frac{|\partial_y u|^2}{y^{2i}(1+y^{2\alpha})}+\int \frac{u^2}{y^{2i+2}(1+y^{2\alpha})}\right).
\end{equation*}
\end{lem}
Denote \begin{equation}\label{def: def of PhiM}
\Phi_M:=\sum\limits_{k=0}^L c_{k,M}\mathscr{L}^k(\chi_M \Lambda Q),
\end{equation}
where
\begin{equation}\label{def: def of ckM}
c_{0,M}:=1,\,\,c_{k,M}:=(-1)^{k+1}\cdot \frac{\sum\limits_{j=0}^{k-1}c_{j,M}\langle \mathscr{L}^j(\chi_M \Lambda Q),T_k\rangle}{\langle \chi_M \Lambda Q,\Lambda Q\rangle},\,\,1\le k\le L.
\end{equation}
Note that the choices of $c_{k,M}$ are equivalent to
\begin{equation}\label{property of Phi}
\left\{\begin{aligned}
\langle \Phi_M,\Lambda Q\rangle&=\langle \chi_M \Lambda Q,\Lambda Q\rangle\\
\langle \Phi_M,T_k\rangle&=0\,\,\,\text{for}\,\,\,1\le k\le L
\end{aligned}\right. .
\end{equation}
In particular,
\begin{equation}\label{more property of Phi}
\langle \mathscr{L}^i T_k,\Phi_M\rangle=(-1)^k \langle \chi_M \Lambda Q,\Lambda Q\rangle \delta_{i,k},\,\,\,\text{for}\,\,\,0\le i,k\le L,
\end{equation}
where \begin{equation*}
\delta_{i,k}:=\left\{\begin{aligned}
&1\,\,\, \text{if}\,\,\, i=k\\
&0\,\,\,\text{if}\,\,\, i\neq k
\end{aligned}\right. .
\end{equation*}
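Indeed, (\ref{more property of Phi}) follows from (\ref{property of Phi}) in one line: since $\mathscr{L}T_k=-T_{k-1}$ for $1\le k\le L$ and $\mathscr{L}T_0=\mathscr{L}\Lambda Q=0,$ we have $\mathscr{L}^iT_k=(-1)^iT_{k-i}$ for $i\le k$ and $\mathscr{L}^iT_k=0$ for $i>k,$ hence
\begin{equation*}
\langle \mathscr{L}^i T_k,\Phi_M\rangle=(-1)^i\langle T_{k-i},\Phi_M\rangle\,1_{i\le k}=(-1)^k\langle \chi_M\Lambda Q,\Lambda Q\rangle\,\delta_{i,k}.
\end{equation*}\par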
Next we establish the coercivity of $\mathscr{A}$ as follows.
\begin{lem}\label{coercivity of A}
Let $p\ge 0,$ $i=0,$ $1,$ $2$ and $2i+2p-(d-2\gamma-4)\neq 0.$ Assume in addition $\langle u,\Phi_M\rangle=0,$ if $2i+2p>d-2\gamma-4.$ Then we have
\begin{equation*}
\int \frac{|\mathscr{A}u|^2}{y^{2i}(1+y^{2p})}\gtrsim \int \frac{|\partial_y u|^2}{y^{2i}(1+y^{2p})}+\int \frac{u^2}{y^{2i+2}(1+y^{2p})}.
\end{equation*}
\end{lem}
We are now in a position to establish the coercivity of $\mathscr{L}$ as follows.
\begin{lem}\label{coercivity of L}
Let $k\in \mathbb{N},$ $i=0,$ $1,$ $2$ and $M=M(k)$ large enough. Then there exists $c_{M,k}>0$ such that for all $u\in \mathcal{D}_{rad}$ with $\langle u,\Phi_M\rangle=0$ if $2i+2k>d-2\gamma-6,$ we have
\begin{equation*}
\int \frac{|\mathscr{L}u|^2}{y^{2i}(1+y^{2k})}\ge c_{M,k}\int \left(\frac{|\partial_y^2 u|^2}{y^{2i}(1+y^{2k})}+\frac{|\partial_y u|^2}{y^{2i}(1+y^{2k+2})}+\frac{u^2}{y^{2i+2}(1+y^{2k+2})}\right),
\end{equation*}
and
\begin{equation*}
\int \frac{|\mathscr{L}u|^2}{y^{2i}(1+y^{2k})}\ge c_{M,k}\int \left(\frac{|\mathscr{A}u|^2}{y^{2i+2}(1+y^{2k})}+\frac{u^2}{y^{2i}(1+y^{2k+4})}\right).
\end{equation*}
\end{lem}
Finally, we give the coercivity property of the iterates of $\mathscr{L}$ as follows.
\begin{lem}\label{coercivity of iterate of L}
Let $k\in \mathbb{N},$ $M=M(k)$ large enough. Then there exists $c_{M,k}>0$ such that for any $u\in \mathcal{D}_{rad}$ with $\langle u,\mathscr{L}^m \Phi_M\rangle=0,$ $0\le m\le k-\hbar,$ we have
\begin{align*}
\mathscr{E}_{2k+2}(u):&=\int |\mathscr{L}^{k+1} u|^2\\ &\ge c_{M,k}\left(\sum\limits_{j=0}^k\int \frac{|\mathscr{L}^j u|^2}{y^4(1+y^{4(k-j)})}+\int \frac{|\mathscr{A}(\mathscr{L}^k u)|^2}{y^2}+\sum\limits_{j=0}^{k-1}\int \frac{|\mathscr{A}(\mathscr{L}^j u)|^2}{y^6(1+y^{4(k-j-1)})}\right).
\end{align*}
\end{lem}
\begin{rk}
We point out that in Lemma \ref{coercivity of iterate of L}, for the case $k=0$ no orthogonality conditions are needed: one simply applies Lemma \ref{coercivity of A*} and Lemma \ref{coercivity of A} to get
\begin{equation*}
\int |\mathscr{L} u|^2\gtrsim \int \frac{|\mathscr{A}u|^2}{y^2}\gtrsim \int \frac{u^2}{y^4}+\int \frac{|\mathscr{A}u|^2}{y^2}.
\end{equation*}
Note that when $d>10,$ $d-2\gamma-4>2$ holds, thus the assumption in Lemma \ref{coercivity of A} is met.
\end{rk}
\subsection{Leibniz rule for the iteration of $\mathscr{L}$}
We introduce Leibniz rules for $\mathscr{L}^k$ and $\mathscr{A}\mathscr{L}^k$ as follows. They can be proved by induction on $k$; a similar detailed proof is given in Lemma C.1 of \cite{ghoul2018stability}.
\begin{lem}\label{leibniz rule}
For any smooth radial function $\phi,$ $g$ and any $k\in \mathbb{N},$ we have
\begin{align}
\mathscr{L}^{k+1}(\phi g)&=\sum\limits_{m=0}^{k+1} g_{2m}\phi_{2k+2,2m}+\sum\limits_{m=0}^k g_{2m+1}\phi_{2k+2,2m+1},\label{leibniz for iterate of L}\\
\mathscr{A}\mathscr{L}^k(\phi g)&=\sum\limits_{m=0}^k g_{2m+1}\phi_{2k+1,2m+1}+\sum\limits_{m=0}^k g_{2m}\phi_{2k+1,2m},\label{leibniz for A composite iterate of L}
\end{align}
where for $k=0,$
\begin{align*}
\phi_{1,0}:&=-\partial_y \phi,\,\,\,\phi_{1,1}:=\phi,\\
\phi_{2,0}:&=-\partial_y^2 \phi-\frac{(d-3+2V)}{y}\partial_y \phi,\,\,\,\phi_{2,1}:=2\partial_y \phi,\,\,\,\phi_{2,2}:=\phi,
\end{align*}
for $k\ge 1,$
\begin{align*}
\phi_{2k+1,0}:&=-\partial_y \phi_{2k,0},\\
\phi_{2k+1,2i}:&=-\partial_y \phi_{2k,2i}-\phi_{2k,2i-1},\,\,\,1\le i\le k,\\
\phi_{2k+1,2i+1}:&=\phi_{2k,2i}+\frac{(d-3+2V)}{y}\phi_{2k,2i+1}-\partial_y \phi_{2k,2i+1},\,\,\,0\le i\le k-1,\\
\phi_{2k+1,2k+1}:&=\phi_{2k,2k}=\phi,\\
\phi_{2k+2,0}:&=\partial_y \phi_{2k+1,0}+\frac{(d-3+2V)}{y}\phi_{2k+1,0},\\
\phi_{2k+2,2i}:&=\phi_{2k+1,2i-1}+\partial_y \phi_{2k+1,2i}+\frac{(d-3+2V)}{y}\phi_{2k+1,2i},\,\,\,1\le i\le k,\\
\phi_{2k+2,2i+1}:&=\partial_y\phi_{2k+1,2i+1}-\phi_{2k+1,2i},\,\,\,0\le i\le k,\\
\phi_{2k+2,2k+2}:&=\phi_{2k+1,2k+1}=\phi.
\end{align*}
\end{lem}
\section{Construction of an approximate solution}\label{Construction of an approximate solution}
Introduce the change of variables
\begin{equation}\label{change of variable}
\omega(s,y):=u(t,r),\,\,\,y:=\frac{r}{\lambda(t)},\,\,\,s:=s_0+\int_{0}^{t} \frac{\mathrm{d}\tau}{\lambda^2(\tau)}.
\end{equation}
We shall derive more information on the parameter $\lambda(t)$ later. Substituting (\ref{change of variable}) into (\ref{eq:Y-M heat}), we get the renormalized flow
\begin{equation}\label{eq: renormalized flow}
\partial_s \omega-\partial_y^2 \omega-\frac{(d-3)}{y}\partial_y \omega-\frac{\lambda_s}{\lambda}\Lambda \omega+\frac{(d-2)}{y^2}\omega(1-\omega)(2-\omega)=0.
\end{equation}
In this section, we construct approximate solutions to (\ref{eq: renormalized flow}).
\subsection{Definition and properties of degree and homogeneous admissible functions}
We first collect the function properties that will recur throughout and express them in a systematic and unified way.
\begin{definition}
Admissible function:
we say $g\in\mathcal{C}_{rad}^{\infty}$ is admissible of degree $(p_1,p_2)\in \mathbb{N}\times \mathbb{Z}$ if\\
\textnormal{(\romannumeral1)} For $y$ close to $0$, $g(y)=\sum\limits_{k=p_1}^p c_k y^{2k+2}+O(y^{2p+4}).$\\
\textnormal{(\romannumeral2)} For $y\ge 1,$ for all $k\in \mathbb{N},$ $|\partial_y^k g(y)|\lesssim y^{2p_2-\gamma-k}.$\\
We abbreviate it as $g\sim (p_1,p_2).$
\end{definition}
Under natural operations, the degree has the following properties; they follow from Lemma \ref{two step method to calculate the inverse of L} and induction, and we omit the details.
\begin{lem}\label{property of degree}
Let $g$ be an admissible function of degree $(p_1,p_2)
\in \mathbb{N}\times \mathbb{Z},$ then\\
\textnormal{(\romannumeral1)} $\Lambda g\sim (p_1,p_2).$\\
\textnormal{(\romannumeral2)} $\mathscr{L}g\sim (p_1-1,p_2-1),$ for $p_1\ge 1.$\\
\textnormal{(\romannumeral3)} $\mathscr{L}^{-1}g\sim (p_1+1,p_2+1).$\\
\textnormal{(\romannumeral4)} $T_k\sim (k,k),$ for all $k\in \mathbb{N}.$\\
\textnormal{(\romannumeral5)} $\Lambda T_k-(2k-\gamma)T_k\sim (k,k-1),$ for all $k\in \mathbb{N}_{+}.$
\end{lem}
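As an illustration of how these rules combine: by (\ref{asymp: Lambda Q}), $\Lambda Q\sim(0,0),$ and then \textnormal{(\romannumeral3)} applied inductively gives
\begin{equation*}
T_0=\Lambda Q\sim (0,0),\qquad T_{k+1}=-\mathscr{L}^{-1}T_k\sim (k+1,k+1),
\end{equation*}
which is precisely \textnormal{(\romannumeral4)}.\par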
\begin{definition}
Homogeneous admissible function: Let $L\gg 1$ be an integer and $m:=(m_1,\cdots,m_L)\in \mathbb{N}^L,$ $b:=(b_1,\cdots,b_L).$ We say that a radial function $g(b,y)$ is homogeneous of degree $(p_1,p_2,p_3)\in \mathbb{N}\times\mathbb{Z}\times\mathbb{N}$ if it is a finite linear combination of monomials $\widetilde{g}(y)\prod\limits_{k=1}^L b_k^{m_k}$ with $\widetilde{g}(y)\sim (p_1,p_2)$ and $\sum\limits_{k=1}^L km_k=p_3.$ We abbreviate it as $g\sim (p_1,p_2,p_3).$
\end{definition}
\subsection{Estimate of the approximate profile and error terms}
\begin{prop}\label{first approximation}
Let $d>10$ and $L\gg 1$ be an integer. Then there exists a small enough universal constant $b^{*}>0$ such that the following holds true. Let $b=(b_1,\cdots,b_L)\,\colon\,[s_0,s_1]\rightarrow (-b^*,b^*)^L$ be a $\mathcal{C}^1$ map with a priori bound on $[s_0,s_1]:$
\begin{equation}\label{assump on b_k}
0<b_1<b^*,\,\,\,|b_k|\lesssim b_1^k,\,\,\,\text{for}\,\,\,2\le k\le L.
\end{equation}
Then there exist profiles $S_1=0,$ $S_k=S_k(b,y),$ $2\le k\le L+2$ such that
\begin{equation}\label{form of first approxiamtion}
Q_{b(s)}(y):=Q(y)+\sum\limits_{k=1}^L b_k(s)T_k(y)+\sum\limits_{k=2}^{L+2} S_k(b,y)=:Q(y)+\Theta_{b(s)}(y)
\end{equation}
as an approximation to (\ref{eq: renormalized flow}) satisfies
\begin{equation}\label{eq: eq of first approximation}
\partial_s Q_b-\partial_y^2 Q_b-\frac{(d-3)}{y}\partial_y Q_b+b_1 \Lambda Q_b+\frac{(d-2)}{y^2}f(Q_b)=Mod(t)+\Psi_b
\end{equation}
with the following properties.\\
\textnormal{(\romannumeral1)} $Mod(t)=\sum\limits_{k=1}^L [(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}][T_k+\sum\limits_{j=k+1}^{L+2} \frac{\partial S_j}{\partial b_k}].$\\
\textnormal{(\romannumeral2)} \begin{align}
&S_k\sim (k,k-1,k),\,\,\, \text{for}\,\,\,2\le k\le L+2.\label{degree of S_k}\\ &\frac{\partial S_k}{\partial b_m}=0,\,\,\,\text{for}\,\,\,2\le k\le m\le L+2.\label{dependence for S_k with respect to b_m}
\end{align}
\textnormal{(\romannumeral3)} For all $0\le m\le L,$
\begin{equation}\label{esti of Psib in the scale B1}
\int_{y\le 2B_1} |\mathscr{L}^{\hbar+m+1}\Psi_b|^2+\int_{y\le 2B_1} \frac{|\Psi_b|^2}{1+y^{4(\hbar+m+1)}}\lesssim b_1^{2m+4+2(1-\delta)-C_L \eta}.
\end{equation}
For all $M\ge 1,$
\begin{equation}\label{esti of Psib in the scale M}
\int_{y\le 2M} |\mathscr{L}^{\hbar+m+1} \Psi_b|^2\lesssim M^{C} b_1^{2L+6}.
\end{equation}
\end{prop}
\begin{pf}
\normalfont
Define the approximate solution to (\ref{eq: renormalized flow}) as (\ref{form of first approxiamtion}). In addition, assume (\ref{assump on b_k}), $S_1=0$ and (\ref{dependence for S_k with respect to b_m}). Then we shall construct $S_k$ and verify that (\ref{dependence for S_k with respect to b_m}) indeed holds. Applying (\ref{eq:Y-M heat ground state}) we get
\begin{align*}
&\partial_s Q_b-\partial_y^2 Q_b-\frac{(d-3)}{y}\partial_y Q_b+b_1 \Lambda Q_b+\frac{(d-2)}{y^2}f(Q_b)\\=&\partial_s \Theta_{b}+\mathscr{L}\Theta_{b}+b_1\Lambda Q+b_1\Lambda \Theta_{b}\\&+\frac{(d-2)}{y^2}[f(Q+\Theta_{b})-f(Q)-f'(Q)\Theta_{b}]\\ =:&A_1+A_2.
\end{align*}
Direct computation gives
\begin{align*}
A_1&=\sum\limits_{k=1}^L [(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}]T_k\\ &+\sum\limits_{k=1}^L [b_1b_k(\Lambda T_k-(2k-\gamma)T_k)+b_1\Lambda S_k]+b_1\Lambda S_{L+1}+b_1\Lambda S_{L+2}\\ &+\sum\limits_{k=1}^{L+1} \mathscr{L}S_{k+1}+\sum\limits_{k=2}^{L+2}\partial_s S_k.
\end{align*}
Note that
\begin{equation*}
\partial_s S_k=\sum\limits_{j=1}^L (b_j)_s\frac{\partial S_k}{\partial b_j}=\sum\limits_{j=1}^L [(b_j)_s+(2j-\gamma)b_1b_j-b_{j+1}]\frac{\partial S_k}{\partial b_j}-\sum\limits_{j=1}^L [(2j-\gamma)b_1b_j-b_{j+1}]\frac{\partial S_k}{\partial b_j},
\end{equation*}
hence
\begin{equation*}
A_1=Mod(t)+\sum\limits_{k=1}^{L+1}[\mathscr{L}S_{k+1}+E_k]+E_{L+2},
\end{equation*}
where for $k=1,\cdots,L,$ \begin{equation*}
E_k:=b_1b_k[\Lambda T_k-(2k-\gamma)T_k]+b_1\Lambda S_k-\sum\limits_{j=1}^{k-1}[(2j-\gamma)b_1b_j-b_{j+1}]\frac{\partial S_k}{\partial b_j},
\end{equation*}
for $k=L+1,$ $L+2,$
\begin{equation*}
E_k:=b_1\Lambda S_k-\sum\limits_{j=1}^L[(2j-\gamma)b_1b_j-b_{j+1}]\frac{\partial S_k}{\partial b_j}.
\end{equation*}
For the expansion of $A_2,$ by Taylor expansion with integral remainder, one gets
\begin{equation*}
A_2=\frac{(d-2)}{y^2}\left[\sum\limits_{i=2}^{L+2}P_i+R_1+R_2\right],
\end{equation*}
where
\begin{align*}
P_i&:=\sum\limits_{j=2}^{L+2}\frac{f^{(j)}(Q)}{j!}\sum_{\substack{|J|_1=j\\|J|_2=i}} c_J \prod_{k=1}^{L}b_k^{i_k}T_k^{i_k}\prod_{k=2}^{L+2}S_k^{j_k},\\
R_1&:=\sum_{j=2}^{L+2}\frac{f^{(j)}(Q)}{j!}\sum_{\substack{|J|_1=j\\|J|_2\ge L+3}}c_J \prod_{k=1}^{L}b_k^{i_k}T_k^{i_k}\prod_{k=2}^{L+2}S_k^{j_k},\\
R_2&:=\frac{\Theta_{b}^{L+3}}{(L+2)!}\int_{0}^{1}(1-\tau)^{L+2}f^{(L+3)}(Q+\tau \Theta_{b})\,\mathrm{d}\tau,
\end{align*}
with $J=(i_1,\cdots,i_L,j_2,\cdots,j_{L+2})\in \mathbb{N}^{2L+1}$ and
\begin{equation*}
|J|_1=\sum_{k=1}^{L}i_k+\sum_{k=2}^{L+2}j_k,\,\,\,|J|_2=\sum_{k=1}^{L}ki_k+\sum_{k=2}^{L+2}kj_k.
\end{equation*}
Note that $R_2=0,$ since $f$ is a cubic polynomial and hence $f^{(L+3)}\equiv 0$ for $L\ge 1.$
Thus
\begin{equation}\label{zheng he bi jin jie yu renormalized flow}
\partial_s Q_b-\partial_y^2 Q_b-\frac{(d-3)}{y}\partial_y Q_b+b_1\Lambda Q_b+\frac{(d-2)}{y^2}f(Q_b)=Mod(t)+\Psi_b,
\end{equation}
with
\begin{equation}\label{expression of Psi_b}
\Psi_b:=\sum_{k=1}^{L+1}[\mathscr{L}S_{k+1}+E_k+\frac{(d-2)}{y^2}P_{k+1}]+E_{L+2}+\frac{(d-2)}{y^2}R_1.
\end{equation}\par
Motivated by (\ref{expression of Psi_b}), we define $\{S_k\}_{k=1}^{L+2}$ as
\begin{equation*}
\left\{\begin{aligned}
S_1&=0\\ S_k&=-\mathscr{L}^{-1}F_k
\end{aligned}\right.
\end{equation*}
with
\begin{equation*}
F_k:=E_{k-1}+\frac{(d-2)}{y^2}P_k,\,\,\,2\le k\le L+2.
\end{equation*}
We now prove (\ref{degree of S_k}) and (\ref{dependence for S_k with respect to b_m}). We claim:
\begin{equation}\label{degree of F_k}
F_k\sim (k-1,k-2,k)\,\,\,\text{and}\,\,\,\frac{\partial F_k}{\partial b_m}=0,\,\,\,2\le k\le m\le L+2.
\end{equation}
We prove it by induction.\par
When $k=2,$ note that by (\ref{asymp: ground state}) and \textnormal{(\romannumeral4)} of Lemma \ref{property of degree}, \begin{equation*}
\frac{f^{(2)}(Q)}{y^2}T_1^2\lesssim \left\{\begin{aligned}
&y^6\ll y^4\,\,\,\text{as}\,\,\,y\rightarrow 0\\
&y^{2-3\gamma}\ll y^{-\gamma}\,\,\,\text{as}\,\,\,y\rightarrow \infty
\end{aligned}\right. .
\end{equation*}
Combined with \textnormal{(\romannumeral5)} of Lemma \ref{property of degree}, we have
\begin{equation*}
F_2=b_1^2\left(\Lambda T_1-(2-\gamma)T_1+c\frac{f^{(2)}(Q)}{y^2}T_1^2\right)\sim (1,0,2).
\end{equation*}\par
Then we carry out the induction step from $\le k$ to $k+1;$ specifically, we need to prove
\begin{equation}\label{need to prove on F_k}
F_{k+1}\sim (k,k-1,k+1)\,\,\,\text{and}\,\,\,\frac{\partial F_{k+1}}{\partial b_m}=0,\,\,\,k+1\le m.
\end{equation}
By induction hypothesis, $F_j\sim (j-1,j-2,j)$ and $\frac{\partial F_j}{\partial b_m}=0,$ $j\le m,$ then by \textnormal{(\romannumeral3)} of Lemma \ref{property of degree},
\begin{equation}\label{under induction hypo, esti of S_k}
S_j\sim (j,j-1,j)\,\,\,\text{and}\,\,\,\frac{\partial S_j}{\partial b_m}=0,\,\,\,j\le m,\,\,\,\text{for any}\,\,\,2\le j\le k.
\end{equation}
Let us estimate $E_k$ and $\frac{P_{k+1}}{y^2}$ separately. On $E_k,$ by \textnormal{(\romannumeral5)} of Lemma \ref{property of degree}, (\ref{assump on b_k}), (\ref{under induction hypo, esti of S_k}) and \textnormal{(\romannumeral1)} of Lemma \ref{property of degree}, we have for the components of $E_k,$
\begin{align*}
b_1b_k(\Lambda T_k-(2k-\gamma)T_k)&\sim (k,k-1,k+1),\\
b_1\Lambda S_k&\sim (k,k-1,k+1),\\
\left[(2j-\gamma)b_1-\frac{b_{j+1}}{b_j}\right]\left(b_j\frac{\partial S_k}{\partial b_j}\right)&\sim (k,k-1,k+1).
\end{align*}
Hence $E_k\sim (k,k-1,k+1).$ On $\frac{P_{k+1}}{y^2},$ it is the finite linear combinations of terms like
\begin{equation*}
M_J:=\frac{f^{(j)}(Q)}{y^2}\prod_{m=1}^{L}b_m^{i_m}T_m^{i_m}\prod_{m=2}^{L+2}S_m^{j_m},
\end{equation*}
where $J=(i_1,\cdots,i_L,j_2,\cdots,j_{L+2}),$ $|J|_1=j,$ $|J|_2=k+1,$ $2\le j\le \min \{k+1,L+2\}.$
Note that by (\ref{asymp: ground state}),
\begin{align*}
&\text{when}\,\,\,y\rightarrow 0,\,\,\,f^{(j)}(Q)\lesssim 1.\\
&\text{when}\,\,\,y\rightarrow \infty,\,\,\,f^{(j)}(Q)\lesssim \left\{\begin{aligned}
&y^{-\gamma}\,\,\,\text{for}\,\,\,j\,\,\,\text{even}\\ &1\,\,\,\text{for}\,\,\,j\,\,\,\text{odd}
\end{aligned}\right. .
\end{align*}
Combined with (\ref{assump on b_k}), \textnormal{(\romannumeral4)} of Lemma \ref{property of degree} and (\ref{under induction hypo, esti of S_k}), we get
\begin{align*}
&\text{when}\,\,\,y\rightarrow 0,\,\,\,M_J\lesssim b_1^{k+1}y^{\sum_{m=1}^{L}(2m+2)i_m+\sum_{m=2}^{L+2}(2m+2)j_m-2}\ll b_1^{k+1}y^{2k+2}.\\
&\text{when}\,\,\,y\rightarrow \infty,\,\,\,M_J\lesssim\left\{\begin{aligned}
&b_1^{k+1}y^{2k-\gamma(j+1)-2\sum_{m=2}^{L+2}j_m}\,\,\,\text{for}\,\,\,\text{even}\,\,\,j\\ &b_1^{k+1}y^{2k-\gamma j-2\sum_{m=2}^{L+2}j_m}\,\,\,\text{for}\,\,\,\text{odd}\,\,\,j
\end{aligned}\right. \ll b_1^{k+1}y^{2(k-1)-\gamma}.
\end{align*}
Hence $M_J\sim (k,k-1,k+1),$ and the same holds for $F_{k+1}.$ By the definition of $F_{k+1}$ and the induction hypothesis, $\frac{\partial F_k}{\partial b_m}=0$ for $2\le k\le m\le L+2$ is easily verified. This closes the proof of (\ref{degree of F_k}). Then, again by \textnormal{(\romannumeral3)} of Lemma \ref{property of degree}, (\ref{degree of S_k}) and (\ref{dependence for S_k with respect to b_m}) hold true.\par
We omit the details of the proof of (\ref{esti of Psib in the scale B1}) and (\ref{esti of Psib in the scale M}), since they only use the degrees of the components of $\Psi_b$, which are already known. This concludes the proof.
\end{pf}
\subsection{Localized approximation}
\begin{prop}\label{localized approximation}
Consider the setting of Proposition \ref{first approximation} and assume in addition that $|(b_1)_s|\lesssim b_1^2.$ Define the localized approximation of (\ref{eq: renormalized flow}) as
\begin{equation}\label{form of approximation}
\widetilde{Q}_{b(s)}(y):=Q(y)+\sum\limits_{k=1}^L b_k \widetilde{T}_k+\sum\limits_{k=2}^{L+2}\widetilde{S}_k\,\,\,\text{with}\,\,\,\widetilde{T}_k:=\chi_{B_1}T_k,\,\,\,\widetilde{S}_k:=\chi_{B_1}S_k.
\end{equation}
Then
\begin{equation}\label{eq: eq of approximation}
\partial_s \widetilde{Q}_b-\partial_y^2 \widetilde{Q}_b-\frac{(d-3)}{y}\partial_y \widetilde{Q}_b+b_1 \Lambda \widetilde{Q}_b+\frac{(d-2)}{y^2}f(\widetilde{Q}_b)=\widetilde{\Psi}_b+\chi_{B_1}Mod(t)
\end{equation}
with $\widetilde{\Psi}_b$ satisfying the following properties.\\
\textnormal{(\romannumeral1)} For all $0\le m\le L-1,$
\begin{equation}\label{1 esti of tilde Psib}
\int |\mathscr{L}^{\hbar+m+1}\widetilde{\Psi}_b|^2+\int \frac{|\mathscr{A}\mathscr{L}^{\hbar+m}\widetilde{\Psi}_b|^2}{1+y^2}+\int \frac{|\mathscr{L}^{\hbar+m}\widetilde{\Psi}_b|^2}{1+y^4}+\int \frac{|\widetilde{\Psi}_b|^2}{1+y^{4(\hbar+m+1)}}\lesssim b_1^{2m+2+2(1-\delta)-C_L \eta},
\end{equation}
and
\begin{equation}\label{2 esti of tilde Psib}
\int |\mathscr{L}^{\hbar+L+1}\widetilde{\Psi}_b|^2+\int \frac{|\mathscr{A}\mathscr{L}^{\hbar+L}\widetilde{\Psi}_b|^2}{1+y^2}+\int \frac{|\mathscr{L}^{\hbar+L}\widetilde{\Psi}_b|^2}{1+y^4}+\int \frac{|\widetilde{\Psi}_b|^2}{1+y^{4(\hbar+L+1)}}\lesssim b_1^{2L+2+2(1-\delta)(1+\eta)}.
\end{equation}
\textnormal{(\romannumeral2)} For all $M\le \frac{B_1}{2}$ and $0\le m\le L,$
\begin{equation}\label{3 esti of tilde Psib}
\int_{y\le 2M} |\mathscr{L}^{\hbar+m+1}\widetilde{\Psi}_b|^2\lesssim M^{C}b_1^{2L+6}.
\end{equation}
\textnormal{(\romannumeral3)} For all $0\le m\le L,$
\begin{equation}\label{4 esti of tilde Psib}
\int_{y\le 2 B_0}|\mathscr{L}^{\hbar+m+1}\widetilde{\Psi}_b|^2+\int_{y\le 2 B_0} \frac{|\widetilde{\Psi}_b|^2}{1+y^{4(\hbar+m+1)}}\lesssim b_1^{2m+4+2(1-\delta)-C_L \eta}.
\end{equation}
\end{prop}
\begin{pf}
\normalfont
Direct computation gives
\begin{align*}
&\partial_s \widetilde{Q}_b-\partial_y^2 \widetilde{Q}_b-\frac{(d-3)}{y}\partial_y \widetilde{Q}_b+b_1 \Lambda \widetilde{Q}_b+\frac{(d-2)}{y^2}f(\widetilde{Q}_b)\\=&\chi_{B_1}\left[\partial_s {Q}_b-\partial_y^2 {Q}_b-\frac{(d-3)}{y}\partial_y {Q}_b+b_1 \Lambda {Q}_b+\frac{(d-2)}{y^2}f({Q}_b)\right]\\&+\Theta_{b}\left[\partial_s \chi_{B_1}-\left(\partial_y^2 \chi_{B_1}+\frac{(d-3)}{y}\partial_y \chi_{B_1}\right)+b_1\Lambda \chi_{B_1}\right]-2\partial_y\chi_{B_1}\partial_y \Theta_{b}+b_1(1-\chi_{B_1})\Lambda Q\\ &+\frac{(d-2)}{y^2}\left[f(\widetilde{Q}_b)-f(Q)-\chi_{B_1}\left(f(Q_b)-f(Q)\right)\right].
\end{align*}
Then by (\ref{eq: eq of first approximation}),
\begin{equation*}
\partial_s \widetilde{Q}_b-\partial_y^2 \widetilde{Q}_b-\frac{(d-3)}{y}\partial_y \widetilde{Q}_b+b_1 \Lambda \widetilde{Q}_b+\frac{(d-2)}{y^2}f(\widetilde{Q}_b)\\=:\chi_{B_1}Mod(t)+\widetilde{\Psi}_b,
\end{equation*}
where
\begin{align*}
\widetilde{\Psi}_b&:=\chi_{B_1}\Psi_b+\widetilde{\Psi}_b^{(1)}+\widetilde{\Psi}_b^{(2)}+\widetilde{\Psi}_b^{(3)},\\ \widetilde{\Psi}_b^{(1)}&:=b_1(1-\chi_{B_1})\Lambda Q,\\ \widetilde{\Psi}_b^{(2)}&:=\frac{(d-2)}{y^2}\left[f(\widetilde{Q}_b)-f(Q)-\chi_{B_1}\left(f(Q_b)-f(Q)\right)\right],\\ \widetilde{\Psi}_b^{(3)}&:=\Theta_{b}\left[\partial_s \chi_{B_1}-\left(\partial_y^2 \chi_{B_1}+\frac{(d-3)}{y}\partial_y \chi_{B_1}\right)+b_1\Lambda \chi_{B_1}\right]-2\partial_y\chi_{B_1}\partial_y \Theta_{b}.
\end{align*}
We only estimate the contribution of $\widetilde{\Psi}_b^{(2)}$ in (\ref{1 esti of tilde Psib})-(\ref{4 esti of tilde Psib}). Note that $\widetilde{\Psi}_b^{(2)}$ is supported in $B_1\le y\le 2B_1,$ so its contributions to (\ref{3 esti of tilde Psib}) and (\ref{4 esti of tilde Psib}) vanish, since $2M\le B_1$ and $2B_0\ll B_1.$ Let us now estimate its contribution to (\ref{1 esti of tilde Psib}) and (\ref{2 esti of tilde Psib}). By Taylor expansion,
\begin{equation}\label{taylor exp of fQb-fQ}
f(Q_b)-f(Q)=f'(Q)\Theta_{b}+\frac{f''(Q)}{2}\Theta_{b}^2+\Theta_{b}^3,\,\,\,\text{with}\,\,\,B_1\le y\le 2B_1.
\end{equation}
Note that for $2\le k\le L,$ by (\ref{assump on b_k}) and (\ref{degree of S_k}) we see that $|S_k|\lesssim b_1^ky^{2(k-1)-\gamma}.$ In comparison, $|b_kT_k|\lesssim b_1^ky^{2k-\gamma},$ which follows from (\ref{assump on b_k}) and \textnormal{(\romannumeral4)} of Lemma \ref{property of degree}. Similarly, $|S_{L+1}|\lesssim b_1^{L+1}y^{2L-\gamma}$ and $|S_{L+2}|\lesssim b_1^{L+2}y^{2(L+1)-\gamma},$ in comparison with $|b_LT_L|\lesssim b_1^Ly^{2L-\gamma}.$ Therefore, the main order term of $\Theta_{b}$ is $\sum\limits_{k=1}^L b_kT_k,$ that is,
\begin{equation}\label{esti of Thetab for y sim B1}
|\Theta_b|\lesssim \sum_{k=1}^{L}b_1^ky^{2k-\gamma}1_{B_1\le y\le 2B_1}.
\end{equation}
In particular, since $b_1^ky^{2k-\gamma}1_{B_1\le y\le 2B_1}\lesssim b_1^{\frac{\gamma}{2}+\eta(\frac{\gamma}{2}-k)},$ we have $|\Theta_{b}|\ll 1.$
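The exponent in the last bound is obtained from $B_1=B_0^{1+\eta}=b_1^{-\frac{1+\eta}{2}}:$ for $B_1\le y\le 2B_1,$
\begin{equation*}
b_1^ky^{2k-\gamma}\lesssim b_1^{k-\frac{(1+\eta)(2k-\gamma)}{2}}=b_1^{\frac{\gamma}{2}+\eta\left(\frac{\gamma}{2}-k\right)},
\end{equation*}
and this exponent is positive for $\eta$ small enough depending on $L.$\par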
Substituting (\ref{esti of Thetab for y sim B1}) into (\ref{taylor exp of fQb-fQ}) and making use of (\ref{asymp: ground state}), we get
\begin{equation*}
\left\{\begin{aligned}
|f(Q_b)-f(Q)|&\lesssim |\Theta_{b}|\lesssim \sum_{k=1}^{L}b_1^ky^{2k-\gamma}1_{B_1\le y\le 2B_1}\\ |f(\widetilde{Q}_b)-f(Q)|&\lesssim \chi_{B_1}|\Theta_{b}|\lesssim \sum_{k=1}^{L}b_1^ky^{2k-\gamma}1_{B_1\le y\le 2B_1}
\end{aligned}\right. \Longrightarrow |\widetilde{\Psi}_b^{(2)}|\lesssim \sum_{k=1}^{L}b_1^ky^{2(k-1)-\gamma}1_{B_1\le y\le 2B_1}.
\end{equation*}
We further estimate that $|\widetilde{\Psi}_b^{(2)}|\lesssim \sum\limits_{k=1}^L b_1^kB_1^{2(k-1)}y^{-\gamma}1_{B_1\le y\le 2B_1}=b_1y^{-\gamma}\sum\limits_{k=1}^L b_1^{-\eta (k-1)}1_{B_1\le y\le 2B_1},$ then
\begin{align*}
\int |\mathscr{L}^{\hbar+m+1}\widetilde{\Psi}_b^{(2)}|^2&\lesssim b_1^2\sum\limits_{k=1}^L b_1^{-2(k-1)\eta}\int_{B_1\le y\le 2B_1} |y^{-\gamma-2(\hbar+m+1)}|^2y^{d-3}\,\mathrm{d}y\\ &\lesssim b_1^{2m+2+2(1+\eta)(1-\delta)}\sum\limits_{k=1}^L b_1^{(2m-2k+2)\eta},\,\,\,\text{for all}\,\,\,0\le m\le L.
\end{align*}
This concludes the proof.
\end{pf}
\section{Linearization of $b_k$ for $1\le k\le l$}\label{Linearization bk for k from 1 to l}
Denote $\{b_k^e\}_{k=1}^L$ as the solution of
\begin{equation}\label{eq: eq for bke}
\left\{\begin{aligned}
&(b_k^e)_s+(2k-\gamma)b_1^e b_k^e-b_{k+1}^e=0\\
&b_{l+1}^e=b_{l+2}^e=\cdots=b_L^e=0
\end{aligned}\right. ,
\end{equation}
where $b_{L+1}^e:=0$ and \begin{equation}\label{def of l}
\frac{\gamma}{2}<l\ll L
\end{equation} is an integer to be chosen later. One can find a set of solution explicitly in the form $b_k^e=c_k s^{-k},$ with
\begin{equation}\label{coefficient of bke}
\left\{\begin{aligned}
c_1&=\frac{l}{2l-\gamma},\\
c_{k+1}&=-\frac{\gamma(l-k)}{2l-\gamma}c_k,\,\,\,1\le k\le l-1,\\
c_{l+1}&=c_{l+2}=\cdots=c_L=0.
\end{aligned}\right.
\end{equation}
We still denote this set of solutions by $\{b_k^e\}_{k=1}^L,$ since we will not use any other solutions of (\ref{eq: eq for bke}). One can then compute perturbations near $\{b_k^e\}_{k=1}^L$ as in the following lemma; the details of the proof are the same as in Lemma 3.7 of \cite{merle2015type}.
\begin{lem}\label{linearization of bk from 1 to l}
Let $b_k(s)=b_k^e(s)+\frac{\mathcal{U}_k(s)}{s^k},$ $1\le k\le l,$ denote $\mathcal{U}:=(\mathcal{U}_1,\cdots,\mathcal{U}_l).$ Then for $1\le k\le l-1,$
\begin{align}
(b_k)_s+(2k-\gamma)b_1 b_k-b_{k+1}&=\frac{1}{s^{k+1}}[s(\mathcal{U}_k)_s-(A_l\mathcal{U})_k+O(|\mathcal{U}|^2)],\label{linearization for bk form 1 to l-1}\\
(b_l)_s-(2l-\gamma)b_1 b_l&=\frac{1}{s^{l+1}}[s(\mathcal{U}_l)_s-(A_l\mathcal{U})_l+O(|\mathcal{U}|^2)],\label{linearization for bl}
\end{align}
where $A_l=(a_{i,j})_{l\times l}$ with
\begin{equation*}
\left\{\begin{aligned}
a_{1,1}&=\frac{\gamma(l-1)}{2l-\gamma}-(2-\gamma)c_1,\\
a_{i,i}&=\frac{\gamma(l-i)}{2l-\gamma},\,\,\,2\le i\le l,\\
a_{i,i+1}&=1,\,\,\,1\le i\le l-1,\\
a_{i,1}&=-(2i-\gamma)c_i,\,\,\,2\le i\le l,\\
a_{i,j}&=0,\,\,\,\text{otherwise}.
\end{aligned}\right.
\end{equation*}
Moreover, $A_l$ is diagonalizable with
\begin{equation}\label{dig of Al}
A_l=P_l^{-1}D_l P_l,\,\,\,D_l=diag\left\{-1,\frac{2\gamma}{2l-\gamma},\frac{3\gamma}{2l-\gamma},\cdots,\frac{l\gamma}{2l-\gamma}\right\}.
\end{equation}
\end{lem}
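The diagonalization (\ref{dig of Al}) can be spot-checked in exact arithmetic. The following sketch builds $A_l$ entrywise as in the lemma for the sample values $\gamma=\frac{3}{2},$ $l\in\{2,3\}$ (in the paper, $\gamma$ depends on $d$) and verifies that each claimed eigenvalue makes $A_l-\lambda I$ singular:

```python
from fractions import Fraction

def det(M):
    # exact determinant by Laplace expansion along the first row
    # (fine for the small matrices considered here)
    n = len(M)
    if n == 1:
        return M[0][0]
    s = Fraction(0)
    for j in range(n):
        if M[0][j] == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        s += (-1) ** j * M[0][j] * det(minor)
    return s

def A_matrix(gamma, l):
    # c_k from the explicit solution, then A_l entry by entry as in the lemma
    c = {1: Fraction(l) / (2 * l - gamma)}
    for k in range(1, l):
        c[k + 1] = -gamma * (l - k) / (2 * l - gamma) * c[k]
    A = [[Fraction(0)] * l for _ in range(l)]
    A[0][0] = gamma * (l - 1) / (2 * l - gamma) - (2 - gamma) * c[1]
    for i in range(2, l + 1):
        A[i - 1][i - 1] = gamma * (l - i) / (2 * l - gamma)
        A[i - 1][0] = -(2 * i - gamma) * c[i]
    for i in range(1, l):
        A[i - 1][i] = Fraction(1)
    return A

def spectrum_checks(gamma, l):
    # claimed spectrum: -1 and k*gamma/(2l-gamma), 2 <= k <= l
    A = A_matrix(gamma, l)
    claimed = [Fraction(-1)] + [k * gamma / (2 * l - gamma)
                                for k in range(2, l + 1)]
    return [det([[A[i][j] - (lam if i == j else Fraction(0))
                  for j in range(l)] for i in range(l)]) == 0
            for lam in claimed]

checks = spectrum_checks(Fraction(3, 2), 2) + spectrum_checks(Fraction(3, 2), 3)
```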
\section{Decomposition of the solution and coercivity-determined estimates on the remainder term}\label{Decomposition of the solution and coercivity-determined estimates on the remainder term}
\subsection{Decomposition of the solution}
In this section, we shall use the implicit function theorem to show the existence of the decomposition
\begin{equation}\label{decomposition}
u(t,r)=(\widetilde{Q}_b+q)\Big(t,\frac{r}{\lambda(t)}\Big)
\end{equation}
with
\begin{equation}\label{orthogonal condition}
\langle q(s,y),\mathscr{L}^i \Phi_M\rangle=0,\,\,\,\text{for}\,\,\,0\le i\le L.
\end{equation}\par
Indeed, let $u_0$ be close to $Q$ in a suitable sense; this closeness is then propagated on a small time interval $[0,t_1).$ Define the map
\begin{equation*}
\mathcal{T}\,\colon\,(t,\lambda,b_1,\cdots,b_L)\longmapsto \Big(\langle u(t)-(\widetilde{Q}_b)_{\lambda},(\mathscr{L}^i \Phi_M)_{\lambda}\rangle\Big)_{i=0,\cdots,L}.
\end{equation*}
Then we choose $u_0$ such that $\mathcal{T}$ maps $(0,\lambda^*,b_1^*,\cdots,b_L^*)$ to the zero vector, for some $\lambda^*$ close to $1$ and $b_i^*$ close to $0$ for all $1\le i\le L.$ Note that by direct computation, the Jacobian of $\mathcal{T}$ at $t=0,$ $\lambda=1,$ $b=0$ is
\begin{equation*}
(-1)^{\frac{(1+L)L}{2}}\langle \chi_M \Lambda Q,\Lambda Q\rangle^{L+1}+\text{small correction},
\end{equation*}
which is nonzero. Then by the implicit function theorem, there exist unique functions $\lambda=\lambda(t),$ $b=b(t)$ such that $\left(\big\langle u(t)-(\widetilde{Q}_{b(t)})_{\lambda(t)},(\mathscr{L}^i \Phi_M)_{\lambda(t)}\big\rangle \right)_{i=0,\cdots,L}\equiv 0$ on some time interval $[0,t_1^*).$
\subsection{Coercivity-determined estimates on $q$}
Substituting (\ref{decomposition}) into (\ref{eq: renormalized flow}) and making use of (\ref{orthogonal condition}), we get the equation for the remainder term $q$ as
\begin{equation}\label{eq: eq of q}
\partial_s q-\frac{\lambda_s}{\lambda}\Lambda q+\mathscr{L}q=-\widetilde{\Psi}_b-\widehat{Mod}+\mathcal{H}(q)-\mathcal{N}(q)=:\mathcal{F},
\end{equation}
where
\begin{align}
\widehat{Mod}:&=-\Big(\frac{\lambda_s}{\lambda}+b_1\Big)\Lambda \widetilde{Q}_b+\chi_{B_1}Mod(t),\label{def of widehat Mod}\\
\mathcal{H}(q):&=\frac{(d-2)}{y^2}[f'(Q)-f'(\widetilde{Q}_b)]q,\label{def of Hq}\\
\mathcal{N}(q):&=\frac{(d-2)}{y^2}[f(\widetilde{Q}_b+q)-f(\widetilde{Q}_b)-f'(\widetilde{Q}_b)q].\label{def of Nq}
\end{align}\par
In the original variables, set $v(t,r):=q(s,y);$ then (\ref{eq: eq of q}) becomes
\begin{equation}\label{eq: eq of v}
\partial_t v+\mathscr{L}_{\lambda}v=\frac{1}{\lambda^2}\mathcal{F}_{\lambda}.
\end{equation}
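For completeness, we record the change of variables leading to (\ref{eq: eq of v}); here we assume the standard conventions of the renormalized flow (\ref{eq: renormalized flow}), namely $y=\frac{r}{\lambda(t)},$ $\frac{\mathrm{d}s}{\mathrm{d}t}=\frac{1}{\lambda^2}$ and $\Lambda=y\partial_y.$ By the chain rule,
\begin{equation*}
\partial_t v(t,r)=\partial_t\big[q(s,y)\big]=\frac{1}{\lambda^2}\partial_s q-\frac{\lambda_t r}{\lambda^2}\partial_y q=\frac{1}{\lambda^2}\Big(\partial_s q-\frac{\lambda_s}{\lambda}\Lambda q\Big)\Big(s,\frac{r}{\lambda}\Big),
\end{equation*}
while the rescaling of the linearized operator gives $\mathscr{L}_{\lambda}v=\frac{1}{\lambda^2}(\mathscr{L}q)_{\lambda}.$ Substituting these two identities into (\ref{eq: eq of q}) evaluated at $y=\frac{r}{\lambda}$ yields (\ref{eq: eq of v}).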
Before digging further into (\ref{eq: eq of q}) or (\ref{eq: eq of v}) for the dynamics-determined estimates, which we shall later call modulation estimates and energy estimates, we make some preparations concerning estimates on $q$ that are determined by coercivity. For simplicity, denote $\mathscr{E}_{2i}:=\mathscr{E}_{2i}(q)=\int |\mathscr{L}^i q|^2$ for $i\in \mathbb{N}.$
\begin{lem}\label{coercivity-determined esti on q}
\textnormal{(\romannumeral1)} Near the origin $q$ has the Taylor expansion in the form
\begin{equation}\label{taylor expansion of q near zero}
q=\sum\limits_{i=1}^{\Bbbk} c_i T_{\Bbbk-i}+r_q,
\end{equation}
with bounds
\begin{align}
|c_i|&\lesssim \sqrt{\mathscr{E}_{2\Bbbk}},\label{esti for ci}\\
|\partial_y^j r_q|&\lesssim y^{2\Bbbk+1-\frac{d}{2}-j}|\ln y|^{\Bbbk}\sqrt{\mathscr{E}_{2\Bbbk}},\,\,\,0\le j\le 2\Bbbk-1,\,\,\,y<1.\label{esti for rq}
\end{align}
\textnormal{(\romannumeral2)} Pointwise bound near the origin: for $y<1,$
\begin{align}
|q_{2i}|+|\partial_y^{2i} q|&\lesssim y^{-\frac{d}{2}+3}|\ln y|^{\Bbbk-i}\sqrt{\mathscr{E}_{2\Bbbk}}\,\,\,\text{for}\,\,\,0\le i\le \Bbbk-1,\label{esti for even deri of q near 0}\\
|q_{2i-1}|+|\partial_y^{2i-1} q|&\lesssim y^{-\frac{d}{2}+2}|\ln y|^{\Bbbk-i}\sqrt{\mathscr{E}_{2\Bbbk}}\,\,\,\text{for}\,\,\,1\le i\le \Bbbk.\label{esti for odd deri of q near 0}
\end{align}
\textnormal{(\romannumeral3)} Weighted bounds: for $1\le m\le \Bbbk,$
\begin{equation}\label{weighted bd sum version}
\sum\limits_{i=0}^{2m} \int \frac{|\partial_y^i q|^2}{1+y^{4m-2i}}\lesssim \mathscr{E}_{2m}.
\end{equation}
Moreover, let $(i,j)\in \mathbb{N}\times \mathbb{N}_{+}$ with $2\le i+j\le 2\Bbbk,$ then
\begin{equation}\label{weighted bd}
\int \frac{|\partial_y^i q|^2}{1+y^{2j}}\lesssim \left\{\begin{aligned}
&\mathscr{E}_{2m},\,\,\,\text{for}\,\,\, i+j=2m,\,\,\, 1\le m\le \Bbbk.\\
&\sqrt{\mathscr{E}_{2m}}\sqrt{\mathscr{E}_{2(m+1)}},\,\,\,\text{for}\,\,\, i+j=2m+1,\,\,\,1\le m\le \Bbbk-1.
\end{aligned}\right.
\end{equation}
\textnormal{(\romannumeral4)} Pointwise bound far away: let $(i,j)\in \mathbb{N}\times\mathbb{N}$ with $1\le i+j\le 2\Bbbk-1,$ then for $y\ge 1,$
\begin{equation}\label{pointwise bd away from origin}
\left|\frac{\partial_y^i q}{y^j}\right|^2\lesssim \frac{1}{y^{d-4}}\left\{\begin{aligned}
&\mathscr{E}_{2m},\,\,\,\text{for}\,\,\, i+j+1=2m,\,\,\,1\le m\le \Bbbk.\\
&\sqrt{\mathscr{E}_{2m}}\sqrt{\mathscr{E}_{2(m+1)}},\,\,\,\text{for}\,\,\, i+j=2m,\,\,\,1\le m\le \Bbbk-1.
\end{aligned}\right.
\end{equation}
\end{lem}
\begin{pf}
\normalfont
This can be proved in a similar way to Appendix B of \cite{ghoul2018stability}; although different asymptotics are involved, the argument carries over, and we omit the details.
\end{pf}
\section{Description of initial data and bootstrap assumption}\label{Description of initial data and bootstrap assumption}
The assumptions on the initial data are the following.
\begin{definition}\label{def of initial data}
Denote $\mathcal{V}:=P_l \mathcal{U}.$ Let $s_0\ge 1.$ Assume that initially:\\
\textnormal{(\romannumeral1)}
\begin{equation}\label{assump on V2 to Vl initially}
s_0^{\frac{\eta}{2}(1-\delta)}(\mathcal{V}_2(s_0),\cdots,\mathcal{V}_l(s_0))\in \mathcal{B}_{l-1}(0,1).
\end{equation}
\textnormal{(\romannumeral2)}
\begin{equation}\label{assump on V1 and bk from l+1 to L initially}
|s_0^{\frac{\eta}{2}(1-\delta)}\mathcal{V}_1(s_0)|<1,\,\,\, |b_k(s_0)|<s_0^{-k-\frac{5l(2k-\gamma)}{2l-\gamma}}\,\,\,\text{for}\,\,\,l+1\le k
\le L.
\end{equation}
\textnormal{(\romannumeral3)}
\begin{equation}\label{assump on E2k initially}
\sum\limits_{k=\hbar+2}^{\Bbbk} \mathscr{E}_{2k}(s_0)<s_0^{-\frac{10Ll}{2l-\gamma}}.
\end{equation}
\textnormal{(\romannumeral4)} Up to a fixed rescaling, we may assume
\begin{equation}\label{assump on lamba initially}
\lambda(s_0)=1.
\end{equation}
\end{definition}
We set up the bootstrap assumptions as follows.
\begin{definition}\label{bootstrap assump}
Let $K\gg 1$ denote some large enough universal constant to be chosen later and let $s\ge 1.$ Define $\mathcal{S}_K(s)$ as the set of all $(b_1(s),\cdots,b_L(s),q(s))$ such that\\
\textnormal{(\romannumeral1)}
\begin{equation}\label{bootstrap assump on V2 to Vl}
s^{\frac{\eta}{2}(1-\delta)}(\mathcal{V}_2(s),\cdots,\mathcal{V}_l(s))\in \mathcal{B}_{l-1}(0,1).
\end{equation}
\textnormal{(\romannumeral2)}
\begin{equation}\label{bootstrap assump on V1 and bk from l+1 to L}
|s^{\frac{\eta}{2}(1-\delta)}\mathcal{V}_1(s)|\le 10,\,\,\, |b_k(s)|\le \frac{10}{s^k}\,\,\,\text{for}\,\,\, l+1\le k\le L.
\end{equation}
\textnormal{(\romannumeral3)}
\begin{equation}\label{boootstrap assump on E2Bbbk}
\mathscr{E}_{2\Bbbk}(s)\le Ks^{-[2L+2(1-\delta)(1+\eta)]}.
\end{equation}
\textnormal{(\romannumeral4)}
\begin{equation}\label{boootstrap assump on E2k lower}
\mathscr{E}_{2m}(s)\le\left\{\begin{aligned}
&Ks^{-\frac{l}{2l-\gamma}(4m-d+2)},\,\,\, \hbar+2\le m\le l+\hbar.\\
&Ks^{-[2(m-\hbar-1)+2(1-\delta)-K\eta]},\,\,\, l+\hbar+1\le m\le \Bbbk-1.
\end{aligned}\right.
\end{equation}
\end{definition}
We remark that \\
\textnormal{(\romannumeral1)} If $s_0$ is large enough, the initial data satisfy $(b(s_0),q(s_0))\in \mathcal{S}_K(s_0).$\\
\textnormal{(\romannumeral2)} If $(b(s),q(s))\in \mathcal{S}_K(s),$ then
\begin{equation*}
b_k(s)\simeq \frac{c_k}{s^k},\,\,\, 1\le k\le l.
\end{equation*}
In particular,
\begin{equation*}
b_1(s)\simeq \frac{c_1}{s},\,\,\, |b_k(s)|\lesssim b_1^k(s),\,\,\, 1\le k\le L.
\end{equation*}
(Thus the assumptions in Proposition \ref{first approximation} are justified.)
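As a formal consistency check (the precise values of the constants $c_k$ are determined earlier in the construction), the ansatz $b_k(s)=\frac{c_k}{s^k}$ is compatible with the leading-order system $(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}=0$: substituting gives
\begin{equation*}
-\frac{kc_k}{s^{k+1}}+(2k-\gamma)\frac{c_1c_k}{s^{k+1}}-\frac{c_{k+1}}{s^{k+1}}=0,\,\,\,\text{i.e.}\,\,\,c_{k+1}=[(2k-\gamma)c_1-k]c_k.
\end{equation*}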
\section{Modulation estimates and improved bounds}
\begin{prop}\label{modulation estimates}
For $K\gg 1$ some universal large constant, assume there is $s_0(K)\gg 1$ such that $(b(s),q(s))\in \mathcal{S}_K(s)$ on $s\in [s_0,s_1]$ for some $s_1\ge s_0.$ Then for $s\in [s_0,s_1],$
\begin{equation}\label{modu esti for lambda and bk from 1 to L-1}
\sum\limits_{k=1}^{L-1}|(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}|+\left|b_1+\frac{\lambda_s}{\lambda}\right|\lesssim b_1^{L+1+(1-\delta)(1+\eta)},
\end{equation}
and
\begin{equation}\label{modu esti for bL}
|(b_L)_s+(2L-\gamma)b_1b_L|\lesssim \frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+1+(1-\delta)(1+\eta)}.
\end{equation}
\end{prop}
\begin{pf}
\normalfont
The idea is to take the inner product of (\ref{eq: eq of q}) with $\mathscr{L}^k \Phi_M$ for $1\le k\le L,$ make use of the orthogonality condition (\ref{orthogonal condition}), and apply H\"older together with the coercivity properties of $\mathscr{L}.$\par
Denote $D(t):=|b_1+\frac{\lambda_s}{\lambda}|+\sum\limits_{k=1}^L |(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}|.$ Now we take the inner product of (\ref{eq: eq of q}) with $\mathscr{L}^L\Phi_M;$ in view of (\ref{orthogonal condition}) we have
\begin{equation}\label{innner product of eq of q with highest order of L}
\langle \widehat{Mod}(t),\mathscr{L}^L\Phi_M\rangle=-\langle \mathscr{L}^L \widetilde{\Psi}_b,\Phi_M\rangle-\langle \mathscr{L}^{L+1}q,\Phi_M\rangle-\langle -\frac{\lambda_s}{\lambda}\Lambda q-\mathcal{H}(q)+\mathcal{N}(q),\mathscr{L}^L\Phi_M\rangle.
\end{equation}
Then we estimate every term in (\ref{innner product of eq of q with highest order of L}). By (\ref{def of widehat Mod}),
\begin{equation}\label{inner prod of widehatmod and LLPhiM}
\langle \widehat{Mod}(t),\mathscr{L}^L \Phi_M\rangle=O(D(t))\langle \Lambda \widetilde{Q}_b,\mathscr{L}^L \Phi_M\rangle+\langle Mod(t),\mathscr{L}^L \Phi_M\rangle.
\end{equation}
By the degrees of $T_k$ and $S_k$, we see that
\begin{align*}
\langle \Lambda \widetilde{Q}_b,\mathscr{L}^L \Phi_M\rangle&=\sum\limits_{k=1}^L \langle b_k\Lambda T_k,\mathscr{L}^L\Phi_M\rangle+\sum\limits_{k=2}^{L+2}\langle \Lambda S_k,\mathscr{L}^L\Phi_M\rangle\\ &\lesssim \sum\limits_{k=1}^L b_1^kM^{d-2-2\gamma-2(L-k)}+\sum\limits_{k=2}^{L+2} b_1^kM^{d-2\gamma-2-2(L+2-k)}\lesssim b_1M^C,
\end{align*}
where we also have used the fact that $|c_{j,M}|\lesssim M^{2j},$ $0\le j\le L,$ which can be verified by induction on $j.$ By \textnormal{(\romannumeral1)} of Proposition \ref{first approximation} and (\ref{more property of Phi}),
\begin{equation*}
\langle Mod(t),\mathscr{L}^L\Phi_M\rangle=(-1)^L\langle \chi_M\Lambda Q,\Lambda Q\rangle[(b_L)_s+(2L-\gamma)b_1b_L]+O\left(D(t)\sum\limits_{k=1}^L\sum\limits_{j=k+1}^{L+2}\left\langle \frac{\partial S_j}{\partial b_k},\mathscr{L}^L\Phi_M\right\rangle\right),
\end{equation*}
where $\left\langle \frac{\partial S_j}{\partial b_k},\mathscr{L}^L\Phi_M\right\rangle\lesssim b_1^{j-k}M^{d-2\gamma-2(L+2-j)}\lesssim b_1M^C,$ for $k+1\le j\le L+2,$ $1\le k\le L.$ Applying above estimates into (\ref{inner prod of widehatmod and LLPhiM}), we get
\begin{equation}\label{esti of inner prod of widehatmod and LLPhiM}
\langle \widehat{Mod}(t),\mathscr{L}^L \Phi_M\rangle=(-1)^L\langle \chi_M\Lambda Q,\Lambda Q\rangle[(b_L)_s+(2L-\gamma)b_1b_L]+O(M^Cb_1D(t)).
\end{equation}
Using (\ref{3 esti of tilde Psib}) with $m=L-\hbar-1$ and H\"older, we have
\begin{equation}\label{inner prod of LLwidetildePsib and PhiM}
\langle \mathscr{L}^L \widetilde{\Psi}_b,\Phi_M\rangle\lesssim \left(\int_{y\le 2M}|\mathscr{L}^L\widetilde{\Psi}_b|^2\right)^{\frac{1}{2}}\left(\int_{y\le 2M} |\Phi_M|^2\right)^{\frac{1}{2}}\lesssim M^Cb_1^{L+3}.
\end{equation}
For the term $\langle \mathscr{L}^{L+1}q,\Phi_M\rangle,$ by Lemma \ref{coercivity of iterate of L},\begin{equation*}
\mathscr{E}_{2\Bbbk}(q)\gtrsim \int \frac{|\mathscr{L}^{L+1}q|^2}{y^4(1+y^{4(\hbar-1)})}\gtrsim \int_{y\le 2M}\frac{|\mathscr{L}^{L+1}q|^2}{(1+y^{4\hbar})},
\end{equation*}
then by H\"older we see that
\begin{equation}\label{inner prod of LL+1q and PhiM}
|\langle \mathscr{L}^{L+1}q,\Phi_M\rangle|\lesssim M^{2\hbar}\left(\int_{y\le 2M}\frac{|\mathscr{L}^{L+1}q|^2}{(1+y^{4\hbar})}\right)^{\frac{1}{2}}\left(\int_{y\le 2M}|\Phi_M|^2\right)^{\frac{1}{2}}\lesssim M^{2\hbar+\frac{d-2}{2}-\gamma}\sqrt{\mathscr{E}_{2\Bbbk}(q)}.
\end{equation}
By the triangle inequality,
\begin{equation*}
\left\langle -\frac{\lambda_s}{\lambda}\Lambda q,\mathscr{L}^L\Phi_M\right\rangle\lesssim (D(t)+b_1)\langle \Lambda q,\mathscr{L}^L\Phi_M\rangle.
\end{equation*}
Note that by H\"older,\begin{equation*}
\langle \Lambda q,\mathscr{L}^L\Phi_M\rangle\lesssim \left\| \frac{\partial_y q}{y^2(1+y^{2(\Bbbk-2)+1})}\right\|_{L^2(y\le 2M)} \| y^3(1+y^{2(\Bbbk-2)+1})\mathscr{L}^L\Phi_M\|_{L^2(y\le 2M)}\lesssim M^C\sqrt{\mathscr{E}_{2\Bbbk}},
\end{equation*}
where we also have used the fact that by Lemma \ref{coercivity of iterate of L} and Lemma \ref{coercivity of L},
\begin{equation}\label{coercivity used in modulation estimates}
\mathscr{E}_{2\Bbbk}(q)\gtrsim \int \frac{|\mathscr{L}q|^2}{y^4(1+y^{4(\Bbbk-2)})}\gtrsim \int \frac{|\partial_y q|^2}{y^4(1+y^{4(\Bbbk-2)+2})}+\int \frac{q^2}{y^6(1+y^{4(\Bbbk-2)+2})}.
\end{equation}
Therefore,\begin{equation}\label{inner prod of lamdaslambdaLambdaq and LLPhiM}
\left\langle -\frac{\lambda_s}{\lambda}\Lambda q,\mathscr{L}^L\Phi_M\right\rangle\lesssim (D(t)+b_1)M^C\sqrt{\mathscr{E}_{2\Bbbk}}.
\end{equation}
Again by H\"older and (\ref{coercivity used in modulation estimates}), we see that
\begin{align}
\langle -\mathcal{H}(q),\mathscr{L}^L \Phi_M\rangle&\lesssim \langle \frac{q}{y^2}|Q\Theta_{b}+\Theta_{b}+(\Theta_{b})^2|,\mathscr{L}^L \Phi_M\rangle\notag \\ &\lesssim \left\|\frac{q}{y^3(1+y^{2(\Bbbk-2)+1})}\right\|_{L^2}\|y(1+y^{2(\Bbbk-2)+1})(\Theta_{b}+\Theta_{b}Q+\Theta_{b}^2)\mathscr{L}^L\Phi_M\|_{L^2(y\le 2M)}\notag \\&\lesssim M^C b_1\sqrt{\mathscr{E}_{2\Bbbk}}.\label{inner prod of Hq and LLPhiM}
\end{align}
By (\ref{coercivity used in modulation estimates}), we get
\begin{align}
\langle \mathcal{N}(q),\mathscr{L}^L \Phi_M\rangle&\lesssim \langle \frac{q^2}{y^2}(|Q-1|+|\Theta_{b}|+|q|),\mathscr{L}^L \Phi_M\rangle\notag\\ &\lesssim \int \frac{q^2}{y^6(1+y^{4(\Bbbk-2)+2})}\left\|y^4(1+y^{4(\Bbbk-2)+2})(|Q-1|+|\Theta_{b}|+|q|)\mathscr{L}^L \Phi_M\right\|_{L^{\infty}(y\le 2M)}\notag\\ &\lesssim M^C \mathscr{E}_{2\Bbbk}.\label{inner prod of Nq and LLPhiM}
\end{align}
Substituting (\ref{esti of inner prod of widehatmod and LLPhiM}), (\ref{inner prod of LLwidetildePsib and PhiM}), (\ref{inner prod of LL+1q and PhiM}), (\ref{inner prod of lamdaslambdaLambdaq and LLPhiM}), (\ref{inner prod of Hq and LLPhiM}) and (\ref{inner prod of Nq and LLPhiM}) into (\ref{innner product of eq of q with highest order of L}), we have
\begin{equation}\label{in pf modu esti on bL}
|(b_L)_s+(2L-\gamma)b_1b_L|\lesssim \frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+3}+M^C b_1D(t).
\end{equation}\par
Next we take the inner product of (\ref{eq: eq of q}) with $\mathscr{L}^k\Phi_M$ for $1\le k\le L-1,$ which gives
\begin{equation*}
\langle \widehat{Mod}(t),\mathscr{L}^k \Phi_M\rangle=-\langle \mathscr{L}^k\widetilde{\Psi}_b,\Phi_M\rangle-\langle -\frac{\lambda_s}{\lambda}\Lambda q-\mathcal{H}(q)+\mathcal{N}(q),\mathscr{L}^k \Phi_M\rangle,
\end{equation*}
then similar to the derivation of (\ref{in pf modu esti on bL}), we have
\begin{equation}\label{in pf modu esti on bk}
|(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}|\lesssim b_1^{L+3}+M^Cb_1(\sqrt{\mathscr{E}_{2\Bbbk}}+D(t)).
\end{equation}\par
Next we take the inner product of (\ref{eq: eq of q}) with $\Phi_M,$ which gives
\begin{equation*}
\langle \widehat{Mod}(t),\Phi_M\rangle=-\langle \widetilde{\Psi}_b,\Phi_M\rangle-\langle -\frac{\lambda_s}{\lambda}\Lambda q-\mathcal{H}(q)+\mathcal{N}(q),\Phi_M\rangle.
\end{equation*}
Note that by the above estimates, \begin{equation*}
\langle \widehat{Mod}(t),\Phi_M\rangle=-\Big(\frac{\lambda_s}{\lambda}+b_1\Big)\langle \Lambda Q,\chi_M\Lambda Q\rangle+O(M^Cb_1D(t)).
\end{equation*}
Then again similar to the derivation of (\ref{in pf modu esti on bL}), we have
\begin{equation}\label{in pf mod esti on lambdaslambda+b1}
\left|\frac{\lambda_s}{\lambda}+b_1\right|\lesssim b_1^{L+3}+M^C b_1(\sqrt{\mathscr{E}_{2\Bbbk}}+D(t)).
\end{equation}\par
Now we sum up (\ref{in pf modu esti on bL})-(\ref{in pf mod esti on lambdaslambda+b1}) and apply (\ref{boootstrap assump on E2Bbbk}) to get
\begin{equation}\label{esti of Dt}
D(t)\lesssim \frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+1+(1-\delta)(1+\eta)},
\end{equation}
then substituting (\ref{esti of Dt}) back into (\ref{in pf modu esti on bL})-(\ref{in pf mod esti on lambdaslambda+b1}), we obtain (\ref{modu esti for lambda and bk from 1 to L-1}) and (\ref{modu esti for bL}), which concludes the proof.
\end{pf}
We remark that by (\ref{modu esti for lambda and bk from 1 to L-1}) and (\ref{assump on b_k}), we have $|(b_1)_s|\lesssim b_1^2,$ which justifies the additional assumption in Proposition \ref{localized approximation}.
\begin{prop}\label{improved bound for bL prop}
Under the assumptions in Proposition \ref{modulation estimates}, we have for all $s\in [s_0,s_1],$
\begin{equation}\label{improved estimate for b_L}
\left|(b_L)_s+(2L-\gamma)b_1b_L+(-1)^L \partial_s \frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right|\lesssim \frac{1}{B_0^{2\delta}}\Big(\sqrt{\mathscr{E}_{2\Bbbk}}+b_1^{L+1+(1-\delta)-c_L\eta}\Big).
\end{equation}
For definiteness, we assume that $L\gg 1$ is an even integer.
\end{prop}
\begin{pf}
\normalfont
We take the inner product of (\ref{eq: eq of q}) with $\mathscr{L}^L(\chi_{B_0}\Lambda Q)$ and get
\begin{align}
&\quad\,\, \langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle\left\{\frac{\mathrm{d}}{\mathrm{d}s}\left[\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle\chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right]-\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle \frac{\mathrm{d}}{\mathrm{d}s}\left[\frac{1}{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}\right]\right\}\notag \\ &=\langle \mathscr{L}^L q,\Lambda Q\partial_s \chi_{B_0}\rangle-\langle \mathscr{L}^{L+1}q,\chi_{B_0}\Lambda Q\rangle+\frac{\lambda_s}{\lambda}\langle \mathscr{L}^L\Lambda q,\chi_{B_0}\Lambda Q\rangle\notag \\ &\quad\,\, -\langle \mathscr{L}^L\widetilde{\Psi}_b,\chi_{B_0}\Lambda Q\rangle-\langle \mathscr{L}^L\widehat{Mod}(t),\chi_{B_0}\Lambda Q\rangle+\langle \mathscr{L}^L(\mathcal{H}(q)-\mathcal{N}(q)),\chi_{B_0}\Lambda Q\rangle.\label{eq of inner prod of eq of q and LLchiB0LambdaQ}
\end{align}
By H\"older and Lemma \ref{coercivity of iterate of L},\begin{equation*}
|\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle|\lesssim \left(\int_{y\le 2B_0} \frac{|\mathscr{L}^L q|^2}{y^4+y^{4\hbar+4}}\right)^{\frac{1}{2}}\|\chi_{B_0}\Lambda Q\|_{L^2}B_0^{2\hbar+2}\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar+2}\sqrt{\mathscr{E}_{2\Bbbk}},
\end{equation*}
where we also used the fact that $B_0^{d-2-2\gamma}\lesssim \langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle\lesssim B_0^{d-2-2\gamma}.$ Note also that $|\partial_s \chi_{B_0}|=\left|\frac{y\partial_s b_1}{2B_0b_1}\chi'(\frac{y}{B_0})\right|\lesssim 1_{B_0\le y\le 2B_0}b_1,$ then
\begin{align}
\left|\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle \frac{\mathrm{d}}{\mathrm{d}s}\left[\frac{1}{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}\right]\right|&=\left|-\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}^2}\langle \Lambda Q,\Lambda Q\partial_s \chi_{B_0}\rangle\right| \notag \\ &\lesssim \frac{B_0^{\frac{d-2}{2}-\gamma+2\hbar+2}\sqrt{\mathscr{E}_{2\Bbbk}}}{B_0^{2(d-2)-4\gamma}} b_1\int_{B_0\le y\le 2B_0} y^{-2\gamma+d-3}\,\mathrm{d}y \notag\\&\lesssim \frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{B_0^{2\delta}}.\label{esti for the lhs in the pf of improved bd}
\end{align}
Again by H\"older and Lemma \ref{coercivity of iterate of L}, we get
\begin{align}
|\langle \mathscr{L}^L q,\Lambda Q\partial_s \chi_{B_0}\rangle|&\lesssim \left(\int \frac{|\mathscr{L}^L q|^2}{y^4+y^{4\hbar+4}}\right)^{\frac{1}{2}}\left(\int_{B_0\le y\le 2B_0} (y^4+y^{4\hbar+4})|\Lambda Q|^2\right)^{\frac{1}{2}}\left|\frac{\partial_s b_1}{b_1}\right|\notag\\ &\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar}\sqrt{\mathscr{E}_{2\Bbbk}},\label{esti of inner product of LLq and LambdaQpartialschiB0}\end{align}
and
\begin{align}
|\langle \mathscr{L}^{L+1}q,\chi_{B_0}\Lambda Q\rangle|&\lesssim \left(\int \frac{|\mathscr{L}^{L+1}q|^2}{y^4+y^{4\hbar}}\right)^{\frac{1}{2}}\left(\int_{y\le 2 B_0} (y^4+y^{4\hbar})|\chi_{B_0}\Lambda Q|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar}\sqrt{\mathscr{E}_{2\Bbbk}}.\label{esti of inner product of LL+1q and chiB0LambdaQ}
\end{align}
By H\"older, (\ref{modu esti for lambda and bk from 1 to L-1}) and (\ref{coercivity used in modulation estimates}), we see that
\begin{align}
\left|\frac{\lambda_s}{\lambda}\langle \mathscr{L}^L\Lambda q,\chi_{B_0}\Lambda Q\rangle\right|&\lesssim b_1\left(\int \frac{|\partial_y q|^2}{y^4+y^{4(L+\hbar)+2}}\right)^{\frac{1}{2}}\left(\int (y^6+y^{4(L+\hbar)+4})|\mathscr{L}^L(\chi_{B_0}\Lambda Q)|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar}\sqrt{\mathscr{E}_{2\Bbbk}}.\label{esti of inner product of lambdaslambdaLLLambdaq and chiB0LambdaQ}
\end{align}
By H\"older and (\ref{4 esti of tilde Psib}) with $m=L,$ we have
\begin{align}
|\langle \mathscr{L}^L\widetilde{\Psi}_b,\chi_{B_0}\Lambda Q\rangle|&\lesssim \left(\int_{y\le 2 B_0} \frac{|\widetilde{\Psi}_b|^2}{1+y^{4(\hbar+L+1)}}\right)^{\frac{1}{2}}\left(\int (1+y^{4(\hbar+L+1)})|\mathscr{L}^L (\chi_{B_0}\Lambda Q)|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar}b_1^{L+1+(1-\delta)-c_L \eta}.\label{esti of inner product of LLwidetildePsib and chiB0LambdaQ}
\end{align}
Next we estimate the term $\langle \mathscr{L}^L(\mathcal{H}(q)-\mathcal{N}(q)),\chi_{B_0}\Lambda Q\rangle.$ For $y\le 2B_0,$ we have $|f'(\widetilde{Q}_b)-f'(Q)|\lesssim |Q\Theta_{b}+\Theta_{b}+\Theta_{b}^2|,$ where $|\Theta_{b}|\lesssim \sum\limits_{k=1}^L b_1^ky^{2k-\gamma}+\sum\limits_{k=2}^{L+2}b_1^ky^{2(k-1)-\gamma}\lesssim b_1^{\frac{\gamma}{2}}.$ Then $|f'(\widetilde{Q}_b)-f'(Q)|\lesssim b_1^{\frac{\gamma}{2}}\ll 1,$ and hence \begin{equation}\label{rougn bd on Hq when yle2B0}
|\mathcal{H}(q)|\lesssim \frac{|q|}{y^2}.\,\,\,\text{(a rough bound is enough here)}
\end{equation}
Note that by \textnormal{(\romannumeral4)} of Lemma \ref{coercivity-determined esti on q}, when $1\le y\le 2B_0,$ $|q|\lesssim y^{2L+2-2\delta-\gamma}\mathscr{E}_{2\Bbbk}^{\frac{1}{2}}\lesssim b_1^{\frac{\gamma}{2}+(1-\delta)\eta}\ll 1,$ then
\begin{align*}
|f(\widetilde{Q}_b+q)-f(\widetilde{Q}_b)-f'(\widetilde{Q}_b)q|&=\left|q^2[3(\widetilde{Q}_b-1)+q]\right|\\ &\lesssim q^2\left(|Q-1|+|\Theta_{b}|+|q|\right)\lesssim |q|^2.
\end{align*}
Therefore, \begin{equation}\label{rough bd on Nq when yle2B0}
|\mathcal{N}(q)|\lesssim \frac{|q|^2}{y^2}\lesssim \frac{|q|}{y^2},\,\,\,\text{for}\,\,\,y\le 2B_0.\,\,\,\text{(a rough bound is enough here)}
\end{equation}
By (\ref{rougn bd on Hq when yle2B0}), (\ref{rough bd on Nq when yle2B0}), H\"older and (\ref{coercivity used in modulation estimates}), we see that
\begin{align}
|\langle \mathscr{L}^L(\mathcal{H}(q)-\mathcal{N}(q)),\chi_{B_0}\Lambda Q\rangle|&\lesssim \int \frac{|q|}{y^2}\left|\mathscr{L}^L(\chi_{B_0}\Lambda Q)\right|\notag\\ &\lesssim \left(\int \frac{|q|^2}{y^6+y^{4\Bbbk}}\right)^{\frac{1}{2}}\left(\int (y^2+y^{4\Bbbk-4})\left|\mathscr{L}^L(\chi_{B_0}\Lambda Q)\right|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim B_0^{\frac{d-2}{2}-\gamma+2\hbar}\sqrt{\mathscr{E}_{2\Bbbk}}.\label{esti of inner product of LLHq-Nq and chiB0LambdaQ}
\end{align}
It remains to estimate $\langle \mathscr{L}^L\widehat{Mod}(t),\chi_{B_0}\Lambda Q\rangle;$ direct computation gives
\begin{equation}\label{expression of inner product of LLwidehatMod and chiB0LambdaQ}
\begin{aligned}
\langle \mathscr{L}^L\widehat{Mod}(t),\chi_{B_0}\Lambda Q\rangle&=(-1)^L\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle[(b_L)_s+(2L-\gamma)b_1b_L]\\ &\quad\,-\left(\frac{\lambda_s}{\lambda}+b_1\right)\langle \Lambda \Theta_{b},\mathscr{L}^L(\chi_{B_0}\Lambda Q)\rangle\\ &\quad\,+\left\langle \sum_{k=1}^L[(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}]\sum\limits_{j=k+1}^{L+2}\frac{\partial S_j}{\partial b_k},\mathscr{L}^L(\chi_{B_0}\Lambda Q)\right\rangle.
\end{aligned}
\end{equation}
When $y\le 2B_0,$ we have $b_1y^2\lesssim 1,$ then
\begin{equation*}
\left\{\begin{aligned}
\sum_{k=1}^L|b_k\Lambda T_k|&\lesssim \sum_{k=1}^Lb_1^ky^{2k-\gamma}\lesssim b_1y^{2-\gamma}\\ \sum_{k=2}^{L+2}|\Lambda S_k|&\lesssim \sum_{k=2}^{L+2}b_1^ky^{2(k-1)-\gamma}\lesssim b_1^2y^{2-\gamma}
\end{aligned}\right. \Longrightarrow |\Lambda \Theta_{b}|\lesssim b_1y^{2-\gamma},
\end{equation*}
and\begin{equation*}
\sum\limits_{j=k+1}^{L+2}\frac{\partial S_j}{\partial b_k}\lesssim \sum\limits_{j=k+1}^{L+2}b_1^{j-k}y^{2(j-1)-\gamma}\lesssim b_1y^{2k-\gamma}.
\end{equation*}
Then by Proposition \ref{modulation estimates},
\begin{align}
\left|\frac{\lambda_s}{\lambda}+b_1\right|\left|\langle \Lambda \Theta_{b},\mathscr{L}^L(\chi_{B_0}\Lambda Q)\rangle\right|&\lesssim b_1^{L+1+(1-\delta)(1+\eta)}\int_{0}^{2B_0}b_1y^{2-\gamma}\cdot y^{-\gamma-2L+d-3}\,\mathrm{d}y\notag\\ &\lesssim b_1^{2L+1+(1-\delta)(1+\eta)}B_0^{d-2-2\gamma},\label{esti of inner product of lambdaslambdab1LambdaThetab and LLchiB0LambdaQ}
\end{align}
and
\begin{align}
&\quad\,\sum_{k=1}^L\left|(b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}\right|\left|\left\langle\sum\limits_{j=k+1}^{L+2}\frac{\partial S_j}{\partial b_k},\mathscr{L}^L(\chi_{B_0}\Lambda Q)\right\rangle\right|\notag\\&\lesssim \left(\frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+1+(1-\delta)(1+\eta)}\right)\int_{0}^{2B_0}b_1y^{2L-\gamma}\cdot y^{-\gamma-2L+d-3}\,\mathrm{d}y\notag\\ &\lesssim \left(\frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+1+(1-\delta)(1+\eta)}\right)b_1B_0^{d-2-2\gamma}.\label{esti of inner product of bks+2k-gammab1bk-bk+1partialSjpartialbk and LLchiB0LambdaQ}
\end{align}
Now substituting (\ref{esti of inner product of lambdaslambdab1LambdaThetab and LLchiB0LambdaQ}) and (\ref{esti of inner product of bks+2k-gammab1bk-bk+1partialSjpartialbk and LLchiB0LambdaQ}) into (\ref{expression of inner product of LLwidehatMod and chiB0LambdaQ}), then gathering the estimates (\ref{esti for the lhs in the pf of improved bd})-(\ref{esti of inner product of LLwidetildePsib and chiB0LambdaQ}), (\ref{esti of inner product of LLHq-Nq and chiB0LambdaQ}) and (\ref{expression of inner product of LLwidehatMod and chiB0LambdaQ}) into (\ref{eq of inner prod of eq of q and LLchiB0LambdaQ}) and dividing by $(-1)^L\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle,$ we obtain (\ref{improved estimate for b_L}), which concludes the proof.
\end{pf}
\section{Energy estimates}
\begin{prop}\label{energy estimates}
Under the assumptions in Proposition \ref{modulation estimates}, we have the following monotonicity formulas:
\begin{equation}\label{monotonicity for E2Bbbk}
\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+2}}[1+O(b_1^{\eta(1-\delta)})]\right\}\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+4}}\left[b_1^{L+(1-\delta)(1+\eta)}\sqrt{\mathscr{E}_{2\Bbbk}}+\frac{\mathscr{E}_{2\Bbbk}}{M^{2\delta}}+b_1^{2L+2(1-\delta)(1+\eta)}\right],
\end{equation}
for $\hbar+2\le m\le \Bbbk-1,$
\begin{equation}\label{monotoncity for E2m}
\frac{\mathrm{d}}{\mathrm{d}t} \left\{\frac{\mathscr{E}_{2m}}{\lambda^{4m-d+2}}[1+O(b_1)]\right\}\lesssim \frac{b_1}{\lambda^{4m-d+4}}\left[b_1^{m-\hbar-1+(1-\delta)-C\eta}\sqrt{\mathscr{E}_{2m}}+b_1^{2(m-\hbar-1)+2(1-\delta)-C\eta}\right].
\end{equation}
\end{prop}
\begin{pf}
\normalfont
Some aspects of this proof parallel the proof of Proposition 4.4 in \cite{ghoul2018stability}, so we shall omit some details. For simplicity, we shall only prove (\ref{monotonicity for E2Bbbk}). For convenience, we abuse notation by abbreviating $v_k^*$ as $v_k$ for $k\in \mathbb{N}.$\par
First, we set up the energy identity. Applying $\mathscr{L}_{\lambda}^{\Bbbk-1}$ to equation (\ref{eq: eq of v}), we have
\begin{equation}\label{eq:act LlambdaBbbk-1 on eq of v}
\partial_t v_{2\Bbbk-2}+\mathscr{L}_{\lambda}v_{2\Bbbk-2}=[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v+\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right).
\end{equation}
Then applying $\mathscr{A}_{\lambda}$ to (\ref{eq:act LlambdaBbbk-1 on eq of v}), we get
\begin{equation}\label{eq:act AlambdaLlambdaBbbk-1 on eq of v}
\partial_t v_{2\Bbbk-1}+\widetilde{\mathscr{L}}v_{2\Bbbk-1}=\frac{\partial_t V_{\lambda}}{r}v_{2\Bbbk-2}+\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v+\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right).
\end{equation}
Making use of (\ref{eq:act LlambdaBbbk-1 on eq of v}) and (\ref{eq:act AlambdaLlambdaBbbk-1 on eq of v}), we obtain the energy identity
\begin{align}
&\quad\,\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+2}}+2\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-1}v_{2\Bbbk-2}\right\}\notag\\ &=-\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2-\left(\frac{\lambda_s}{\lambda}+b_1\right)\int \frac{(\Lambda \widetilde{Z})_{\lambda}}{2\lambda^2 r^2}v_{2\Bbbk-1}^2-\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}\notag\\ &\quad\,\, +\int \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}\right)v_{2\Bbbk-1}v_{2\Bbbk-2}+\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-1}\left([\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v+\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right)\right)\notag\\ &\quad\,\, +\int \left(\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}+\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\right)\left(\frac{\partial_t V_{\lambda}}{r}v_{2\Bbbk-2}+\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v+\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right)\right).\label{energy identity}
\end{align}\par
Now we estimate terms in (\ref{energy identity}). Note that by Lemma \ref{coercivity of iterate of L}, we get
\begin{equation}\label{coercivity used in energy estimate}
\mathscr{E}_{2\Bbbk}(q)\gtrsim \int \frac{|q_{2\Bbbk-1}|^2}{y^2}+\sum\limits_{j=0}^{\Bbbk-1}\int \frac{|q_{2j}|^2}{y^4(1+y^{4(\Bbbk-1-j)})}+\sum\limits_{j=0}^{\Bbbk-2}\int \frac{|q_{2j+1}|^2}{y^6(1+y^{4(\Bbbk-2-j)})}.
\end{equation}\par
We first treat the second term on the LHS of (\ref{energy identity}). Note that by (\ref{def: def of V}),
\begin{equation*}
|\Lambda V(y)|\lesssim \left\{\begin{aligned}
&y^2\,\,\,\text{as}\,\,\,y\rightarrow 0\\ &y^{-2\gamma}\lesssim y^{-2}\,\,\,\text{as}\,\,\,y\rightarrow \infty
\end{aligned}\right. \Longrightarrow |\Lambda V(y)|\lesssim \frac{y^2}{1+y^4}.
\end{equation*}
Then by H\"older and (\ref{coercivity used in energy estimate}), we see that
\begin{align}
\left|\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-1}v_{2\Bbbk-2}\right|&=\frac{1}{\lambda^{4\Bbbk-d+2}}\left|\int \frac{b_1\Lambda V}{y}q_{2\Bbbk-1}q_{2\Bbbk-2}\right|\notag\\ &\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+2}}\left(\int \frac{|q_{2\Bbbk-1}|^2}{y^2} \right)^{\frac{1}{2}}\left(\int \frac{|q_{2\Bbbk-2}|^2}{1+y^4}\right)^{\frac{1}{2}}\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+2}}\mathscr{E}_{2\Bbbk}(q).\label{in energy identity lhs esti}
\end{align}\par
Note that by (\ref{def: def of V}),\begin{equation*}
\Lambda \widetilde{Z}=2V\Lambda V+(d-2)\Lambda V-\Lambda^2 V\lesssim \left\{\begin{aligned}
&y^2\,\,\,\text{as}\,\,\,y\rightarrow 0\\ &y^{-2\gamma}\,\,\,\text{as}\,\,\,y\rightarrow \infty
\end{aligned}\right. \lesssim \frac{y^2}{1+y^4}.
\end{equation*}
Then by (\ref{modu esti for lambda and bk from 1 to L-1}) and (\ref{coercivity used in energy estimate}), we have
\begin{align}
\left|\left(\frac{\lambda_s}{\lambda}+b_1\right)\int \frac{(\Lambda \widetilde{Z})_{\lambda}}{\lambda^2 r^2}v_{2\Bbbk-1}^2\right|&=\left|\left(\frac{\lambda_s}{\lambda}+b_1\right)\frac{1}{\lambda^{4\Bbbk-d+4}}\int \frac{\Lambda \widetilde{Z}}{y^2}q_{2\Bbbk-1}^2\right|\notag\\ &\lesssim \frac{b_1^{L+1+(1-\delta)(1+\eta)}}{\lambda^{4\Bbbk-d+4}}\int \frac{q_{2\Bbbk-1}^2}{y^2}\lesssim \frac{b_1^{L+1+(1-\delta)(1+\eta)}}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}(q).\label{in energy identity rhs 1 esti}
\end{align}\par
Again by (\ref{def: def of V}) and (\ref{coercivity used in energy estimate}), we get
\begin{align}
\left|\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}\right|&\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+\int \frac{b_1^2|(\Lambda V)_{\lambda}|^2}{\lambda^4r^2}v_{2\Bbbk-2}^2\notag\\ &=\frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+\frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\int \frac{|\Lambda V(y)|^2}{y^2}q_{2\Bbbk-2}^2\notag\\ &\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+\frac{Cb_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}.\label{in energy identity rhs 2 esti}
\end{align}\par
Note that by (\ref{modu esti for lambda and bk from 1 to L-1}), we see that
\begin{align*}
\left|\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2}\right)\right|&=\left|\frac{(b_1)_s}{\lambda}(\Lambda V)_{\lambda}-\frac{b_1}{\lambda^4}\frac{\lambda_s}{\lambda}(\Lambda^2 V)_{\lambda}-\frac{2b_1(\Lambda V)_{\lambda}}{\lambda^4}\frac{\lambda_s}{\lambda}\right|\\ &\lesssim \frac{b_1^2}{\lambda^4}\left(|(\Lambda V)_{\lambda}|+|(\Lambda^2 V)_{\lambda}|\right).
\end{align*}
Then again by (\ref{def: def of V}), H\"older and (\ref{coercivity used in energy estimate}), we have
\begin{align}
\left|\int \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}\right)v_{2\Bbbk-1}v_{2\Bbbk-2}\right|&\lesssim \frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\int \frac{|\Lambda V|+|\Lambda^2 V|}{y}q_{2\Bbbk-1}q_{2\Bbbk-2}\notag\\ &\lesssim \frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\left(\frac{|q_{2\Bbbk-1}|^2}{y^2}\right)^{\frac{1}{2}}\left(\frac{|q_{2\Bbbk-2}|^2}{y^4}\right)^{\frac{1}{2}}\lesssim \frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}.\label{in energy identity rhs 3 esti}
\end{align}\par
Note that by (\ref{modu esti for lambda and bk from 1 to L-1}),
\begin{equation*}
\partial_t V_{\lambda}=-\frac{\lambda_s}{\lambda}\frac{1}{\lambda^2}(\Lambda V)_{\lambda}\Longrightarrow \left|\frac{\partial_t V_{\lambda}}{r}\right|\lesssim \frac{b_1|(\Lambda V)_{\lambda}|}{\lambda^2r}.
\end{equation*}
Then again using (\ref{def: def of V}) and (\ref{coercivity used in energy estimate}), we see that
\begin{align}
&\quad\,\,\left|\int \left(\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}+\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\right)\frac{\partial_t V_{\lambda}}{r}v_{2\Bbbk-2}\right|\notag\\&\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+C\int \left(\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2r}\right)^2v_{2\Bbbk-2}^2\notag\\ &=\frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+C\frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\int \frac{|\Lambda V|^2}{y^2}q_{2\Bbbk-2}^2\notag\\ &\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+C\frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\int \frac{q_{2\Bbbk-2}^2}{y^4}\notag\\ &\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+C\frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}(q).\label{in energy identity rhs 4 esti}
\end{align}\par
Similar to the estimate of (\ref{in energy identity rhs 4 esti}), we have
\begin{align}
&\quad\,\,\left|\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-1}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v\right|+\left|\int \left(\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}+\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\right)\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v\right|\notag\\ &\le C\Bigg[\int \frac{b_1^2}{\lambda^2}\frac{v_{2\Bbbk-1}^2}{r^2}+\int \frac{|[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2|(\Lambda V)_{\lambda}|^2}{\lambda^2}+\int |\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2\notag\\ &\quad\,\,+\int \left(\frac{b_1(\Lambda V)_{\lambda}}{\lambda^2r}\right)^2v_{2\Bbbk-2}^2\Bigg]+\frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2\notag\\ &\le \frac{1}{4}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+C\left(\frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}+\int \frac{|[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2}{\lambda^2(1+y^4)}+\int |\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2\right).\label{in energy identity rhs 5 esti}
\end{align}
In (\ref{in energy identity rhs 5 esti}), we claim that
\begin{equation}\label{commutator related estimate}
\int \frac{|[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2}{\lambda^2(1+y^2)}+\int |\mathscr{A}_{\lambda}[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2\lesssim \frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}.
\end{equation}
Here we have replaced $y^4$ by $y^2$ so that the two integrals in (\ref{commutator related estimate}) are of the same order; since $\frac{1}{1+y^4}\le \frac{1}{1+y^2},$ this only strengthens the bound needed in (\ref{in energy identity rhs 5 esti}). Let us prove this claim. Note that
\begin{equation}\label{reduction of the commutator}
[\partial_t,\mathscr{L}_{\lambda}^{k-1}]g=\sum_{m=0}^{k-2}\mathscr{L}_{\lambda}^m[\partial_t,\mathscr{L}_{\lambda}]\mathscr{L}_{\lambda}^{k-2-m}g,
\end{equation}
for any $k\ge 2$ and any smooth radial function $g,$ which can be proved by induction on $k.$ Then
\begin{equation*}
[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v=\sum_{m=0}^{\Bbbk-2}\mathscr{L}_{\lambda}^m\left(\frac{\partial_t Z_{\lambda}}{r^2}\mathscr{L}_{\lambda}^{\Bbbk-2-m}v\right),\,\,\,\text{with}\,\,\,\frac{\partial_t Z_{\lambda}}{r^2}=-\frac{\lambda_s}{\lambda}\frac{(\Lambda Z)_{\lambda}}{\lambda^2 r^2}.
\end{equation*}
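For completeness, we record the one-step expansion behind the induction for (\ref{reduction of the commutator}): for $k\ge 3,$
\begin{equation*}
[\partial_t,\mathscr{L}_{\lambda}^{k-1}]g=\partial_t\mathscr{L}_{\lambda}\left(\mathscr{L}_{\lambda}^{k-2}g\right)-\mathscr{L}_{\lambda}^{k-1}\partial_t g=[\partial_t,\mathscr{L}_{\lambda}]\mathscr{L}_{\lambda}^{k-2}g+\mathscr{L}_{\lambda}\left([\partial_t,\mathscr{L}_{\lambda}^{k-2}]g\right),
\end{equation*}
so applying the induction hypothesis to the last term and reindexing the resulting sum yields (\ref{reduction of the commutator}).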
By (\ref{modu esti for lambda and bk from 1 to L-1}), we deduce that
\begin{equation*}
\int \frac{|[\partial_t,\mathscr{L}_{\lambda}^{\Bbbk-1}]v|^2}{\lambda^2(1+y^2)}\lesssim \frac{b_1^2}{\lambda^{4\Bbbk-d+4}}\sum_{m=0}^{\Bbbk-2}\int \frac{1}{1+y^2}\left|\mathscr{L}^m\left(\frac{\Lambda Z}{y^2}\mathscr{L}^{\Bbbk-2-m}q\right)\right|^2.
\end{equation*}
When $m=0,$ note that
\begin{equation*}
\left|\frac{\Lambda Z}{y^2}\right|=\left|\frac{(d-2)f''(Q)\Lambda Q}{y^2}\right|\lesssim \frac{1}{y^{2\gamma+2}}\lesssim \frac{1}{1+y^4}.
\end{equation*}
Then by (\ref{coercivity used in energy estimate}),
\begin{equation*}
\int \frac{1}{1+y^2}\left|\frac{\Lambda Z}{y^2}\mathscr{L}^{\Bbbk-2}q\right|^2\lesssim \int \frac{q_{2\Bbbk-4}^2}{1+y^{10}}\lesssim \mathscr{E}_{2\Bbbk}(q).
\end{equation*}
When $1\le m\le \Bbbk-2,$ by (\ref{leibniz for iterate of L}) with $\phi=\frac{\Lambda Z}{y^2},$ $g=\mathscr{L}^{\Bbbk-2-m}q,$ we have
\begin{align*}
\mathscr{L}^m\left(\frac{\Lambda Z}{y^2}\mathscr{L}^{\Bbbk-2-m}q\right)&=\sum\limits_{i=0}^m \mathscr{L}^{\Bbbk-2-(m-i)}q\left(\frac{\Lambda Z}{y^2}\right)_{2m,2i}+\sum\limits_{i=0}^{m-1}\mathscr{A}\mathscr{L}^{\Bbbk-2-(m-i)}q\left(\frac{\Lambda Z}{y^2}\right)_{2m,2i+1}\\&\lesssim \sum\limits_{i=0}^m \frac{q_{2(\Bbbk-2-(m-i))}}{1+y^{2\gamma+2+2(m-i)}}+\sum\limits_{i=0}^{m-1}\frac{q_{2(\Bbbk-2-(m-i))+1}}{1+y^{2\gamma+2+2(m-i)-1}},
\end{align*}
where we also used the fact that \begin{equation*}\left|\left(\frac{\Lambda Z}{y^2}\right)_{2m,i}\right|\lesssim \frac{1}{y^{2\gamma+2+2m-i}},\,\,\, \text{for}\,\,\, 0\le i\le 2m.\end{equation*}
Then by (\ref{coercivity used in energy estimate}), we see that
\begin{align*}
\int \frac{1}{1+y^2}\left|\mathscr{L}^m\left(\frac{\Lambda Z}{y^2}\mathscr{L}^{\Bbbk-2-m}q\right)\right|^2&\lesssim \sum\limits_{i=0}^m \int \frac{q_{2(\Bbbk-2-(m-i))}^2}{1+y^{4\gamma+6+4(m-i)}}+\sum\limits_{i=0}^{m-1} \int \frac{q_{2(\Bbbk-2-(m-i))+1}^2}{1+y^{4\gamma+4+4(m-i)}}\\ &\lesssim \sum\limits_{i=0}^m \int \frac{q_{2(\Bbbk-2-(m-i))}^2}{y^4(1+y^{4(1+m-i)})}+\sum\limits_{i=0}^{m-1}\int \frac{q_{2(\Bbbk-2-(m-i))+1}^2}{y^6(1+y^{4(m-i)})}\lesssim \mathscr{E}_{2\Bbbk}(q).
\end{align*}
This concludes the proof of (\ref{commutator related estimate}).\par
Again by (\ref{def: def of V}), H\"older and (\ref{coercivity used in energy estimate}), we get
\begin{align}
\left|\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-1}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right)\right|&=\frac{b_1}{\lambda^{4\Bbbk-d+4}}\left|\int \frac{\Lambda V}{y}q_{2\Bbbk-1}\mathscr{L}^{\Bbbk-1}\mathcal{F}\right|\notag\\ &\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+4}}\left(\int \frac{q_{2\Bbbk-1}^2}{y^2}\right)^{\frac{1}{2}}\left(\int \frac{|\mathscr{L}^{\Bbbk-1}\mathcal{F}|^2}{1+y^4}\right)^{\frac{1}{2}}\notag\\ &\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+4}}\sqrt{\mathscr{E}_{2\Bbbk}}\left(\int \frac{|\mathscr{L}^{\Bbbk-1}\mathcal{F}|^2}{1+y^4}\right)^{\frac{1}{2}}.\label{in energy identity rhs 6 esti}
\end{align}\par
Similar to the estimate (\ref{in energy identity rhs 6 esti}), we have
\begin{equation}\label{in energy identity rhs 7 esti}
\left|\int \frac{b_1(\Lambda V)_{\lambda}}{\lambda^2 r}v_{2\Bbbk-2}\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right)\right|\lesssim \frac{b_1}{\lambda^{4\Bbbk-d+4}}\sqrt{\mathscr{E}_{2\Bbbk}}\left(\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}|^2}{1+y^2}\right)^{\frac{1}{2}}.
\end{equation}\par
We now turn to the term $\int \widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right).$ Denote\begin{equation*}
\xi_L:=\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\widetilde{T}_L,
\end{equation*}
we have the decomposition
\begin{equation*}
\mathcal{F}=:\partial_s \xi_L+\mathcal{F}_0+\mathcal{F}_1,\,\,\,\text{where}\,\,\,\mathcal{F}_0:=-\widetilde{\Psi}_b-\widehat{Mod}-\partial_s \xi_L\,\,\,\text{and}\,\,\,\mathcal{F}_1:=\mathcal{H}(q)-\mathcal{N}(q).
\end{equation*}
Then by H\"older, we get
\begin{align}
&\quad\,\,\int \widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}\mathscr{A}_{\lambda}\mathscr{L}_{\lambda}^{\Bbbk-1}\left(\frac{1}{\lambda^2}\mathcal{F}_{\lambda}\right)\notag\\ &=\frac{1}{\lambda^{4\Bbbk-d+4}}\int \widetilde{\mathscr{L}}q_{2\Bbbk-1}\mathscr{A}\mathscr{L}^{\Bbbk-1}(\partial_s \xi_L+\mathcal{F}_0+\mathcal{F}_1)\notag\\ &=\frac{1}{\lambda^{4\Bbbk-d+4}}\left(\int \mathscr{A}^*q_{2\Bbbk-1}\mathscr{L}^{\Bbbk}(\partial_s \xi_L)+\int \mathscr{A}^*q_{2\Bbbk-1}\mathscr{L}^{\Bbbk}\mathcal{F}_0+\int \widetilde{\mathscr{L}}q_{2\Bbbk-1}\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}_1\right)\notag\\ &\le \frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s \xi_L)+\frac{1}{\lambda^{4\Bbbk-d+4}}\left(\int |\mathscr{L}^{\Bbbk}q|^2\right)^{\frac{1}{2}}\left(\int |\mathscr{L}^{\Bbbk}\mathcal{F}_0|^2\right)^{\frac{1}{2}}\notag\\&\quad\,\,+\frac{1}{8}\frac{1}{\lambda^{4\Bbbk-d+4}}\int |\widetilde{\mathscr{L}}q_{2\Bbbk-1}|^2+\frac{2}{\lambda^{4\Bbbk-d+4}}\int |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}_1|^2\notag\\ &\le \frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s \xi_L)+\frac{1}{8}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+\frac{C}{\lambda^{4\Bbbk-d+4}}\left(\sqrt{\mathscr{E}_{2\Bbbk}}\|\mathscr{L}^{\Bbbk}\mathcal{F}_0\|_{L^2}+\|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}_1\|_{L^2}^2\right).\label{in energy identity rhs 8 esti}
\end{align}\par
Now substituting (\ref{commutator related estimate}) into (\ref{in energy identity rhs 5 esti}), then gathering estimates (\ref{in energy identity rhs 1 esti})-(\ref{in energy identity rhs 5 esti}) and (\ref{in energy identity rhs 6 esti})-(\ref{in energy identity rhs 8 esti}) into (\ref{energy identity}), we have
\begin{align}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+2}}(1+O(b_1))\right\}&\le -\frac{1}{8}\int |\widetilde{\mathscr{L}}_{\lambda}v_{2\Bbbk-1}|^2+\frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s \xi_L)+\frac{Cb_1^2}{\lambda^{4\Bbbk-d+4}}\mathscr{E}_{2\Bbbk}\notag\\ &\quad\,\, +\frac{Cb_1}{\lambda^{4\Bbbk-d+4}}\sqrt{\mathscr{E}_{2\Bbbk}}\left[\left(\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}|^2}{1+y^2}\right)^{\frac{1}{2}}+\left(\int \frac{|\mathscr{L}^{\Bbbk-1}\mathcal{F}|^2}{1+y^4}\right)^{\frac{1}{2}}\right]\notag\\ &\quad\,\,+\frac{C}{\lambda^{4\Bbbk-d+4}}\left(\sqrt{\mathscr{E}_{2\Bbbk}}\|\mathscr{L}^{\Bbbk}\mathcal{F}_0\|_{L^2}+\|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{F}_1\|_{L^2}^2\right).\label{energy identity first sorted}
\end{align}
Note that in (\ref{energy identity first sorted}), the integrals containing $\mathcal{F}$ can be controlled by the corresponding integrals containing $\widetilde{\Psi}_b,$ $\widehat{Mod},$ $\mathcal{H}(q)$ and $\mathcal{N}(q).$ More precisely, the integral containing $\mathcal{F}_0$ is controlled by the corresponding integrals containing $\widetilde{\Psi}_b$ and $\widetilde{Mod}:=\widehat{Mod}+\partial_s \xi_L,$ while the integral containing $\mathcal{F}_1$ is controlled by the corresponding integrals containing $\mathcal{H}(q)$ and $\mathcal{N}(q).$ We shall now estimate the terms in (\ref{energy identity first sorted}) using these further decompositions.\par
Next we estimate the $\widetilde{\Psi}_b$ term in (\ref{energy identity first sorted}). Applying (\ref{2 esti of tilde Psib}), we see that
\begin{equation}\label{in energy id esti of widetildePsib term}
\left(\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\widetilde{\Psi}_b|^2}{1+y^2}\right)^{\frac{1}{2}}+\left(\int \frac{|\mathscr{L}^{\Bbbk-1}\widetilde{\Psi}_b|^2}{1+y^4}\right)^{\frac{1}{2}}+\|\mathscr{L}^{\Bbbk}\widetilde{\Psi}_b\|_{L^2}\lesssim b_1^{L+1+(1-\delta)(1+\eta)}.
\end{equation}\par
Next we estimate the $\widehat{Mod}$ term in (\ref{energy identity first sorted}). We claim that
\begin{equation}\label{in energy id esti of widehatMod term}
\left(\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\widehat{Mod}|^2}{1+y^2}\right)^{\frac{1}{2}}+\left(\int \frac{|\mathscr{L}^{\Bbbk-1}\widehat{Mod}|^2}{1+y^4}\right)^{\frac{1}{2}}\lesssim b_1^{(1-\delta)(1+\eta)}\left(\frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{L+1+(1-\delta)(1+\eta)}\right).
\end{equation}
It suffices to estimate the second term on the LHS; we omit the proof since it is a direct consequence of Proposition \ref{modulation estimates} and the degrees of $T_i$ and $S_i.$\par
Next we estimate the $\widetilde{Mod}$ term in (\ref{energy identity first sorted}). We claim that
\begin{equation}\label{in energy id esti of widetildeMod term}
\left(\int |\mathscr{L}^{\Bbbk}\widetilde{Mod}|^2\right)^{\frac{1}{2}}\lesssim b_1\left(\frac{\sqrt{\mathscr{E}_{2\Bbbk}}}{M^{2\delta}}+b_1^{\eta (1-\delta)}\sqrt{\mathscr{E}_{2\Bbbk}}+b_1^{L+1+(1-\delta)(1+\eta)-c_L\eta}\right).
\end{equation}
We further write
\begin{align*}
\widetilde{Mod}&=-\left(\frac{\lambda_s}{\lambda}+b_1\right)\Lambda \widetilde{Q}_b+\sum\limits_{i=1}^{L-1}[(b_i)_s+(2i-\gamma)b_1b_i-b_{i+1}]\widetilde{T}_i\\ &\quad\,\,+\sum\limits_{i=1}^{L}[(b_i)_s+(2i-\gamma)b_1b_i-b_{i+1}]\sum\limits_{j=i+1}^{L+2}\chi_{B_1}\frac{\partial S_j}{\partial b_i}\\ &\quad\,\,+\left[(b_L)_s+(2L-\gamma)b_1b_L+\partial_s\left\{\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}\right\}\right]\widetilde{T}_L+\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}\partial_s \widetilde{T}_L.
\end{align*}
Note that a direct calculation yields
\begin{align*}
&\int |\mathscr{L}^{\Bbbk}\Lambda \widetilde{Q}_b|^2+\sum\limits_{i=1}^L\sum\limits_{j=i+1}^{L+2}\int \left|\mathscr{L}^{\Bbbk}\left(\chi_{B_1}\frac{\partial S_j}{\partial b_i}\right)\right|^2\lesssim b_1^2,\\ &\sum\limits_{i=1}^{L-1}\int |\mathscr{L}^{\Bbbk}\widetilde{T}_i|^2\lesssim b_1^{2(2-\delta)(1+\eta)},\,\,\,\int |\mathscr{L}^{\Bbbk}\widetilde{T}_L|^2\lesssim b_1^{2(1-\delta)(1+\eta)}.
\end{align*}
By the proof of (\ref{esti for the lhs in the pf of improved bd}), we know that
\begin{equation}\label{estimate of the fraction part}
\left|\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \Lambda Q,\chi_{B_0}\Lambda Q\rangle}\right|\lesssim b_1^{-(1-\delta)}\sqrt{\mathscr{E}_{2\Bbbk}}.
\end{equation}
Also, in view of $|\partial_s \chi_{B_1}|=\left|\frac{(1+\eta)}{2}\frac{y}{B_1}\frac{\partial_s b_1}{b_1}\chi'\left(\frac{y}{B_1}\right)\right|\lesssim 1_{B_1\le y\le 2B_1}b_1,$ we have
\begin{equation*}
\int |\mathscr{L}^{\Bbbk}(\partial_s \widetilde{T}_L)|^2\lesssim b_1^2\int_{B_1\le y\le 2B_1} \frac{y^{d-3}}{y^{4(\Bbbk-L)+2\gamma}}\,\mathrm{d}y\lesssim b_1^{2+2(1-\delta)(1+\eta)}.
\end{equation*}
The above estimates, combined with Proposition \ref{modulation estimates} and Proposition \ref{improved bound for bL prop}, give (\ref{in energy id esti of widetildeMod term}).\par
Next we estimate the $\mathcal{H}(q)$ term in (\ref{energy identity first sorted}). We claim that
\begin{equation}\label{in energy id esti of Hq term}
\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{H}(q)|^2}{1+y^2}+\int \frac{|\mathscr{L}^{\Bbbk-1}\mathcal{H}(q)|^2}{1+y^4}+\int |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{H}(q)|^2\lesssim b_1^2\mathscr{E}_{2\Bbbk}.
\end{equation}
It suffices to estimate the third term on the LHS. Denote \begin{equation*}
\mathcal{H}(q)=:\phi q,\,\,\,\text{where}\,\,\,\phi:=\frac{-3(d-2)}{y^2}\widetilde{\Theta}_b[2(Q-1)+\widetilde{\Theta}_b]\,\,\,\text{and}\,\,\,\widetilde{\Theta}_b:=\chi_{B_1}\Theta_{b}.
\end{equation*}
Using (\ref{leibniz for A composite iterate of L}), we get
\begin{equation*}
\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{H}(q)=\sum\limits_{m=0}^{\Bbbk-1}q_{2m+1}\phi_{2\Bbbk-1,2m+1}+q_{2m}\phi_{2\Bbbk-1,2m}.
\end{equation*}
Note that by direct computation, \begin{align*}
|(Q-1)\widetilde{\Theta}_b|&=\left|\left(\sum\limits_{i=1}^L\chi_{B_1}b_iT_i+\sum\limits_{i=2}^{L+2}\chi_{B_1}S_i\right)(Q-1)\right|\\ &\lesssim 1_{y\le 2B_1}\left(\sum\limits_{i=1}^Lb_1^iy^{2i-\gamma}+\sum\limits_{i=2}^{L+2}b_1^iy^{2(i-1)-\gamma}\right)y^{\gamma}\ll 1_{y\le 2B_1}b_1y^{2-\gamma},
\end{align*}
where we also used the fact that
\begin{equation*}
(b_1y^2)^Ny^{-\gamma}\lesssim b_1^{\frac{\gamma}{2}-\eta(N-\frac{\gamma}{2})}\ll 1,\,\,\,\text{for any integer}\,\,\,N\ge 1\,\,\,\text{and any}\,\,\,y\le 2B_1.
\end{equation*}
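Let us briefly verify this fact: since $2N>\gamma,$ the function $y\mapsto b_1^Ny^{2N-\gamma}$ is increasing, so its supremum over $y\le 2B_1$ is attained at $y=2B_1;$ with $B_1=b_1^{-\frac{1+\eta}{2}}$ as in the definition of $\chi_{B_1},$ this gives
\begin{equation*}
\sup_{y\le 2B_1}(b_1y^2)^Ny^{-\gamma}\lesssim b_1^NB_1^{2N-\gamma}=b_1^{N-\frac{(1+\eta)(2N-\gamma)}{2}}=b_1^{\frac{\gamma}{2}-\eta\left(N-\frac{\gamma}{2}\right)}\ll 1.
\end{equation*}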
Then in general,\begin{equation*}
|\phi_{k,i}|\lesssim 1_{y\le 2B_1}\frac{b_1}{1+y^{\gamma+k-i}}.
\end{equation*}
Hence combined with (\ref{coercivity used in energy estimate}), we see that
\begin{align*}
\int |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{H}(q)|^2&\lesssim \sum\limits_{m=0}^{\Bbbk-1}b_1^2\left(\int_{y\le 2B_1}\frac{|q_{2m+1}|^2}{1+y^{2\gamma+4(\Bbbk-m)-4}}+\int \frac{|q_{2m}|^2}{1+y^{2\gamma+4(\Bbbk-m)-2}}\right)\\ &\lesssim b_1^2\left(\sum\limits_{m=0}^{\Bbbk-2}\int \frac{|q_{2m+1}|^2}{y^6(1+y^{4(\Bbbk-2-m)})}+\int \frac{|q_{2\Bbbk-1}|^2}{y^2}+\sum\limits_{m=0}^{\Bbbk-1}\int \frac{|q_{2m}|^2}{y^4(1+y^{4(\Bbbk-1-m)})}\right)\\ &\lesssim b_1^2\mathscr{E}_{2\Bbbk}(q).
\end{align*}
This concludes the proof of (\ref{in energy id esti of Hq term}).\par
Next we estimate the $\mathcal{N}(q)$ term in (\ref{energy identity first sorted}). We claim that
\begin{equation}\label{in energy id esti of Nq term 1}
\int |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|^2\lesssim b_1^{2L+1+2(1-\delta)(1+\eta)}
\end{equation}
and \begin{equation}\label{in energy id esti of Nq term 2}
\int \frac{|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|^2}{1+y^2}+\int \frac{|\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|^2}{1+y^4}\lesssim b_1^{2L+2+2(1-\delta)(1+\eta)}.
\end{equation}
We shall only prove (\ref{in energy id esti of Nq term 1}) since the proof of (\ref{in energy id esti of Nq term 2}) is similar. Let us estimate the integral over $y<1$ and over $y\ge 1$ separately.\par
When $y< 1.$ Rewrite \begin{equation*}
\mathcal{N}(q)=\frac{q^2}{y^2}\phi,\,\,\,\text{where}\,\,\,\phi:=\phi_1+\phi_2,\,\,\,\phi_1:=(d-2)(3\widetilde{Q}_b+q),\,\,\,\phi_2:=-3(d-2).
\end{equation*}
By \textnormal{(\romannumeral1)} of Lemma \ref{coercivity-determined esti on q}, we get
\begin{equation*}
\frac{q^2}{y^2}=\frac{1}{y^2}\left(\sum\limits_{i=0}^{\Bbbk-1}c_iT_i(y)+r_q(y)\right)^2=\sum\limits_{i=0}^{\Bbbk-1}\widetilde{c}_iy^{4i+2}+\widetilde{r}_q
\end{equation*}
with\begin{equation*}
|\widetilde{c}_i|\lesssim \mathscr{E}_{2\Bbbk},\,\,\,|\partial_y^j \widetilde{r}_q|\lesssim y^{2\Bbbk+1-\frac{d}{2}-j}|\ln y|^{\Bbbk}\mathscr{E}_{2\Bbbk}.
\end{equation*}
By (\ref{asymp: ground state}), Proposition \ref{first approximation}, \textnormal{(\romannumeral1)} of Lemma \ref{coercivity-determined esti on q} and \textnormal{(\romannumeral3)} of Definition \ref{bootstrap assump}, we have
\begin{equation*}
\phi_1=\sum\limits_{i=0}^{\Bbbk-1}\widehat{c}_iy^{2i+2}+\widehat{r}_q
\end{equation*}
with\begin{equation*}
|\widehat{c}_i|\lesssim 1,\,\,\,|\partial_y^j \widehat{r}_q|\lesssim y^{2\Bbbk+1-\frac{d}{2}-j}|\ln y|^{\Bbbk}.
\end{equation*}
Thus \begin{equation*}
\mathcal{N}(q)=\sum\limits_{i=0}^{\Bbbk-1}\widehat{\widetilde{c}}_iy^{2i+2}+\widehat{\widetilde{r}}_q
\end{equation*}
with\begin{equation*}
|\widehat{\widetilde{c}}_i|\lesssim \mathscr{E}_{2\Bbbk}\,\,\,\text{and}\,\,\,|\partial_y^j \widehat{\widetilde{r}}_q|\lesssim y^{2\Bbbk+1-\frac{d}{2}-j}|\ln y|^{\Bbbk}\mathscr{E}_{2\Bbbk}.
\end{equation*}
Therefore,\begin{align*}
|\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|&=|\mathscr{A}\mathscr{L}^{\Bbbk-1}(\sum\limits_{i=0}^{\Bbbk-1}\widehat{\widetilde{c}}_iy^{2i+2})+\mathscr{A}\mathscr{L}^{\Bbbk-1}\widehat{\widetilde{r}}_q|\\ &\lesssim \sum\limits_{i=0}^{\Bbbk-1}|\widehat{\widetilde{c}}_i|y^3+\sum\limits_{j=0}^{2\Bbbk-1}\frac{|\partial_y^j \widehat{\widetilde{r}}_q|}{y^{2\Bbbk-1-j}} \lesssim y^{-\frac{d}{2}+2}|\ln y|^{\Bbbk}\mathscr{E}_{2\Bbbk}.
\end{align*}
Then by \textnormal{(\romannumeral3)} of Definition \ref{bootstrap assump}, we see that \begin{equation}\label{esti of ALBbbk-1Nq2 integral for y less than 1}
\int_{y<1} |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|^2\lesssim \mathscr{E}_{2\Bbbk}^2 \int_{y<1} y|\ln y|^{2\Bbbk}\,\mathrm{d}y\lesssim \mathscr{E}_{2\Bbbk}^2\lesssim b_1^{4[L+(1-\delta)(1+\eta)]}.
\end{equation}\par
When $y\ge 1,$ rewrite
\begin{equation*}
\mathcal{N}(q)=Z^2\phi,\,\,\,\text{where}\,\,\,Z:=\frac{q}{y},\,\,\,\phi:=(d-2)[3(\widetilde{Q}_b-1)+q].
\end{equation*}
By Leibniz rule, we get \begin{align*}
\int_{y\ge 1} |\mathscr{A}\mathscr{L}^{\Bbbk-1}\mathcal{N}(q)|^2&\lesssim \sum\limits_{k=0}^{2\Bbbk-1}\int_{y\ge 1}\frac{|\partial_y^k\mathcal{N}(q)|^2}{y^{4\Bbbk-2k-2}}\\ &\lesssim \sum\limits_{k=0}^{2\Bbbk-1}\sum\limits_{i=0}^k\int_{y\ge 1}\frac{|\partial_y^iZ^2|^2|\partial_y^{k-i}\phi|^2}{y^{4\Bbbk-2k-2}}\\ &\lesssim \sum\limits_{k=0}^{2\Bbbk-1}\sum\limits_{i=0}^k\sum\limits_{m=0}^i\int_{y\ge 1}\frac{|\partial_y^mZ|^2|\partial_y^{i-m}Z|^2|\partial_y^{k-i}\phi|^2}{y^{4\Bbbk-2k-2}}.
\end{align*}
Then we focus on proving that for $0\le k\le 2\Bbbk-1,$ $0\le i\le k,$ $0\le m\le i,$ \begin{equation}\label{most thought esti in energy estimate}
A_{k,i,m}:=\int_{y\ge 1}\frac{|\partial_y^mZ|^2|\partial_y^{i-m}Z|^2|\partial_y^{k-i}\phi|^2}{y^{4\Bbbk-2k-2}}\lesssim b_1^{2L+1+2(1-\delta)(1+\eta)},
\end{equation}
which would conclude the proof of (\ref{in energy id esti of Nq term 1}). We split the proof into the three cases treated in the following three paragraphs.\par
When $k=0.$ In this case, $k=i=m=0.$ Note that $\phi$ is bounded as $y\rightarrow \infty,$ then
\begin{equation*}
A_{0,0,0}=\int_{y\ge 1} \frac{|q|^4|\phi|^2}{y^{4\Bbbk+2}}y^{d-3}\,\mathrm{d}y\lesssim \int_{1\le y\le B_0} \frac{|q|^4}{y^{4\Bbbk+5-d}}\,\mathrm{d}y+\int_{y>B_0} \frac{|q|^4}{y^{4\Bbbk+5-d}}\,\mathrm{d}y.
\end{equation*}
By \textnormal{(\romannumeral4)} of Lemma \ref{coercivity-determined esti on q}, Definition \ref{bootstrap assump} and recall that $d=4\hbar+4\delta+2\gamma+2,$ we have
\begin{align}
\int_{1\le y\le B_0} \frac{|q|^4}{y^{4\Bbbk+5-d}}\,\mathrm{d}y&\lesssim \left\|\frac{y^{d-4}|q|^2}{y^{2(2\Bbbk-1)}}\right\|_{L^{\infty}(y\ge 1)}\left\|\frac{y^{d-4}|q|^2}{y^{2(2l+2\hbar+3)}}\right\|_{L^{\infty}(y\ge 1)}\int_{1\le y\le B_0} y^{4l-4\delta-2\gamma+5}\,\mathrm{d}y\notag\\ &\lesssim \mathscr{E}_{2\Bbbk}\mathscr{E}_{2(l+\hbar+2)}B_0^{4l-4\delta-2\gamma+6}\lesssim b_1^{(1+\gamma-K\eta)+2L+2(1-\delta)(1+\eta)}.\label{esti of A000 1}
\end{align}
Similarly,
\begin{align}
\int_{y>B_0}\frac{|q|^4}{y^{4\Bbbk+5-d}}\,\mathrm{d}y&\lesssim \left\|\frac{y^{d-4}|q|^2}{y^{2(2\Bbbk-2l-1)}}\right\|_{L^{\infty}(y\ge1)}\left\|\frac{y^{d-4}|q|^2}{y^{2(2l+2\hbar+1)}}\right\|_{L^{\infty}(y\ge1)}\int_{y>B_0}y^{-4\delta-2\gamma+1}\,\mathrm{d}y\notag\\ &\lesssim \mathscr{E}_{2(\Bbbk-l)}\mathscr{E}_{2(l+\hbar+1)}B_0^{-4\delta-2\gamma+2}\lesssim b_1^{2L+2(1-\delta)(1+\eta)+(1+\gamma)-2(K+1-\delta)\eta}.\label{esti of A000 2}
\end{align}\par
When $k\ge 1$ and $i=k.$ By Leibniz rule,
\begin{equation*}
|\partial_y^n Z|^2\lesssim \sum\limits_{j=0}^n\frac{|\partial_y^j q|^2}{y^{2+2n-2j}},\,\,\,\text{for all}\,\,\,n\in \mathbb{N},\,\,\,\text{which implies}\,\,\,A_{k,k,m}\lesssim \sum\limits_{j=0}^m\sum\limits_{l=0}^{k-m}\int_{y\ge 1}\frac{|\partial_y^j q|^2|\partial_y^l q|^2}{y^{4\Bbbk-2j-2l+2}}.
\end{equation*}
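Here the pointwise bound on the derivatives of $Z=\frac{q}{y}$ indeed follows from the Leibniz rule:
\begin{equation*}
\partial_y^n Z=\sum\limits_{j=0}^n\binom{n}{j}\partial_y^j q\,\partial_y^{n-j}\left(\frac{1}{y}\right)=\sum\limits_{j=0}^n\binom{n}{j}\frac{(-1)^{n-j}(n-j)!}{y^{1+n-j}}\partial_y^j q,
\end{equation*}
so that $|\partial_y^n Z|^2\lesssim \sum_{j=0}^n\frac{|\partial_y^j q|^2}{y^{2+2n-2j}}$ for $y\ge 1$ by the Cauchy--Schwarz inequality.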
Direct computation implies
\begin{align*}
B_{j,l}:&=\int_{y\ge 1}\frac{|\partial_y^j q|^2|\partial_y^l q|^2}{y^{4\Bbbk-2j-2l+2}}y^{d-3}\,\mathrm{d}y\\&=\int_{1\le y\le B_0}\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{4\Bbbk-2j-2l+4\hbar+6}}y^{7-4\delta-2\gamma}\,\mathrm{d}y+\int_{y>B_0}\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{4\Bbbk-2j-2l+4\hbar}}y^{-(4\delta+2\gamma-1)}\,\mathrm{d}y\\ &\lesssim \left\|\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{4\Bbbk-2j-2l+4\hbar+6}}\right\|_{L^{\infty}(y\ge 1)}b_1^{2\delta+\gamma-4}+\left\|\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{4\Bbbk-2j-2l+4\hbar}}\right\|_{L^{\infty}(y\ge 1)}b_1^{2\delta+\gamma-1}\\ &=\left\|\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{2J_1+2J_2-2j-2l}}\right\|_{L^{\infty}(y\ge 1)}b_1^{2\delta+\gamma-4}+\left\|\frac{(y^{d-4}|\partial_y^j q|^2)(y^{d-4}|\partial_y^l q|^2)}{y^{2J_3+2J_4-2j-2l}}\right\|_{L^{\infty}(y\ge 1)}b_1^{2\delta+\gamma-1}\\ &=:B_{j,l,J_1,J_2}b_1^{2\delta+\gamma-4}+B_{j,l,J_3,J_4}b_1^{2\delta+\gamma-1},
\end{align*}
where $J_1+J_2=2\Bbbk+2\hbar+3,$ $J_3+J_4=2\Bbbk+2\hbar,$ and the exponents $J_1,J_2,J_3,J_4$ are to be chosen. In fact, we choose $J_1=2\Bbbk-2l+3,$ $J_2=2\hbar+2l.$ Then by \textnormal{(\romannumeral4)} of Lemma \ref{coercivity-determined esti on q} and Definition \ref{bootstrap assump}, we see that
\begin{align*}
B_{j,l,J_1,J_2}&\lesssim \left\|\frac{y^{d-4}|\partial_y^j q|^2}{y^{2J_1-2j}}\right\|_{L^{\infty}(y\ge 1)}\left\|\frac{y^{d-4}|\partial_y^l q|^2}{y^{2J_2-2l}}\right\|_{L^{\infty}(y\ge 1)}\\ &\lesssim \mathscr{E}_{J_1+1}\sqrt{\mathscr{E}_{J_2}\mathscr{E}_{J_2+2}}\\ &\lesssim b_1^{2L-l+3(1-\delta)+4-\frac{3}{2}K\eta+\frac{l}{2l-\gamma}(2l-2\delta-\gamma)}\lesssim b_1^{2L+4(1-\delta)+3-\frac{\gamma}{2}-\frac{3}{2}K\eta},
\end{align*}
where in the last inequality we used the fact that $\frac{l}{2l-\gamma}>\frac{1}{2}.$ Similarly, we have
\begin{equation*}
B_{j,l,J_3,J_4}\lesssim \sqrt{\mathscr{E}_{J_3}\mathscr{E}_{J_3+2}\mathscr{E}_{J_4}\mathscr{E}_{J_4+2}}\lesssim b_1^{2L+4(1-\delta)-\frac{\gamma}{2}-\frac{3}{2}K\eta},
\end{equation*}
where we choose $J_3=2\Bbbk-2l,$ $J_4=2\hbar+2l.$ Hence
\begin{equation}\label{esti of B_{j,l}}
B_{j,l}\lesssim b_1^{2L+1+2(1-\delta)(1+\eta)+\frac{\gamma}{2}-\left[\frac{3}{2}K+2(1-\delta)\right]\eta}.
\end{equation}\par
When $k\ge 1$ and $i\le k-1.$ Again by Leibniz rule, we further write
\begin{equation*}
A_{k,i,m}\lesssim \sum\limits_{j=0}^m\sum\limits_{l=0}^{i-m}\int_{y\ge 1}\frac{|\partial_y^j q|^2|\partial_y^l q|^2|\partial_y^{k-i} \phi|^2}{y^{4\Bbbk-2j-2l+2-2(k-i)}}.
\end{equation*}
We shall need pointwise estimates of $\partial_y^n \phi$ for $n\in \mathbb{N}_{+}.$ Note that by the degrees of $T_k$ and $S_k$, we get
\begin{align*}
|\partial_y^n \widetilde{Q}_b|&=\left|\partial_y^n\left(Q+\sum\limits_{k=1}^L\chi_{B_1}b_kT_k+\sum\limits_{k=2}^{L+2}\chi_{B_1}S_k\right)\right|\\ &\lesssim \frac{1}{y^{\gamma+n}}+\sum\limits_{k=1}^L\frac{b_1^ky^{2k}}{y^{\gamma+n}}1_{y\le 2B_1}\lesssim \frac{b_1^{-\eta(L+1)}}{y^{\gamma+n}}.
\end{align*}
By \textnormal{(\romannumeral4)} of Lemma \ref{coercivity-determined esti on q} and Definition \ref{bootstrap assump}, we have
for $1\le y\le B_0,$ \begin{equation*}
|\partial_y^n q|^2\lesssim y^{2(2\Bbbk-1-n)}\left|\frac{\partial_y^n q}{y^{2\Bbbk-1-n}}\right|^2\lesssim y^{2(2\Bbbk-1-n)-(d-4)}\mathscr{E}_{2\Bbbk}\lesssim b_1^{\eta+\gamma+2(1-\delta)\eta},
\end{equation*}
for $y\ge B_0,$ \begin{align*}
|\partial_y^n q|^2&\lesssim y^{2(2\hbar+2l+1-n)}\left|\frac{\partial_y^n q}{y^{2\hbar+2l+1-n}}\right|^2\\&\lesssim y^{2(2\hbar+2l+1-n)-(d-4)}\mathscr{E}_{2\hbar+2l+2}\lesssim y^{4l+4(1-\delta)}b_1^{n+\gamma+2l+2(1-\delta)-K\eta}.
\end{align*}
Thus \begin{equation*}
|\partial_y^n \phi|^2\lesssim |\partial_y^n \widetilde{Q}_b|^2+|\partial_y^n q|^2\lesssim \left\{\begin{aligned}
&\frac{b_1^{-2(L+1)\eta}}{y^{2\gamma+2n}},\,\,\,\text{when}\,\,\,1\le y\le B_0.\\ &b_1^{-C_{L,K}\eta+\gamma+n}(b_1y^2)^{2l+2(1-\delta)},\,\,\,\text{when}\,\,\,y\ge B_0.
\end{aligned}\right.
\end{equation*}
Then similar to the proof of (\ref{esti of B_{j,l}}), we see that
\begin{align}
A_{k,i,m}&\lesssim b_1^{-C_{L,K}\eta}\sum_{j=0}^m\sum_{l=0}^{i-m}\Bigg(\int_{1\le y\le B_0}\frac{|\partial_y^j q|^2|\partial_y^l q|^2}{y^{4\Bbbk-2j-2l+2+2\gamma}}y^{d-3}\,\mathrm{d}y\notag\\&\quad\,\,+b_1^{\gamma+\alpha}\int_{y>B_0}\frac{|\partial_y^j q|^2|\partial_y^l q|^2}{y^{4\Bbbk-2j-2l+2-2\alpha}}y^{d-3}\,\mathrm{d}y\Bigg)\notag\\&\lesssim b_1^{2L+1+2(1-\delta)(1+\eta)+\frac{\gamma}{2}-C_{K,L,\delta}\eta},\label{estimate of Akim}
\end{align}
where $\alpha:=k-i+2l+2(1-\delta).$\par
In view of (\ref{esti of A000 1})-(\ref{estimate of Akim}), we conclude the proof of (\ref{most thought esti in energy estimate}); then by (\ref{esti of ALBbbk-1Nq2 integral for y less than 1}) and (\ref{most thought esti in energy estimate}), we complete the proof of (\ref{in energy id esti of Nq term 1}).\par
It remains to estimate the integral $\frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s \xi_L)$ in (\ref{energy identity first sorted}). Let us further write
\begin{align}
\frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s \xi_L)&=\frac{\mathrm{d}}{\mathrm{d}s}\frac{1}{\lambda^{4\Bbbk-d+4}}\left(\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}\xi_L-\frac{1}{2}\int |\mathscr{L}^{\Bbbk}\xi_L|^2\right)\notag\\ &\quad\,\,+\frac{(4\Bbbk-d+4)}{\lambda^{4\Bbbk-d+4}}\frac{\lambda_s}{\lambda}\left(\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}\xi_L-\frac{1}{2}\int |\mathscr{L}^{\Bbbk}\xi_L|^2\right)\notag\\ &\quad\,\,-\frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}(\partial_s q-\partial_s \xi_L) \mathscr{L}^{\Bbbk}\xi_L.\label{expreesion of the oscilation integral}
\end{align}
Recalling the proof of (\ref{in energy id esti of widetildeMod term}), we deduce that \begin{equation}\label{esti of L2 norm of LBbbkxiL}
\int |\mathscr{L}^{\Bbbk}\xi_L|^2\lesssim b_1^{2(1-\delta)\eta}\mathscr{E}_{2\Bbbk}.
\end{equation}
Then by H\"older, we get \begin{equation*}
\left|\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}\xi_L\right|\lesssim b_1^{(1-\delta)\eta}\mathscr{E}_{2\Bbbk}.
\end{equation*}
Thus \begin{equation}\label{first line on RHS of the expreesion of oscilation integral}
\frac{\mathrm{d}}{\mathrm{d}s}\frac{1}{\lambda^{4\Bbbk-d+4}}\left(\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}\xi_L-\frac{1}{2}\int |\mathscr{L}^{\Bbbk}\xi_L|^2\right)=\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+2}}O(b_1^{(1-\delta)\eta})\right\},
\end{equation}
and \begin{equation}\label{second line on RHS of the expreesion of oscilation integral}
\left|\frac{(4\Bbbk-d+4)}{\lambda^{4\Bbbk-d+4}}\frac{\lambda_s}{\lambda}\left(\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}\xi_L-\frac{1}{2}\int |\mathscr{L}^{\Bbbk}\xi_L|^2\right)\right|\lesssim \frac{b_1^{1+(1-\delta)\eta}\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+4}}.
\end{equation}\par
For the third line on the RHS of (\ref{expreesion of the oscilation integral}): by (\ref{eq: eq of q}), we further write
\begin{align}
&\quad\,\,\int \mathscr{L}^{\Bbbk}(\partial_s q-\partial_s \xi_L) \mathscr{L}^{\Bbbk}\xi_L\notag\\ &=\frac{\lambda_s}{\lambda}\int \Lambda q\mathscr{L}^{2\Bbbk}\xi_L-\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk+1}\xi_L+\int \mathscr{L}^{\Bbbk}(-\widetilde{\Psi}_b-\widetilde{Mod}+\mathcal{H}(q)-\mathcal{N}(q))\mathscr{L}^{\Bbbk}\xi_L.\label{expression of the third line on RHS of oscilation integral}
\end{align}
By H\"older, (\ref{coercivity used in modulation estimates}) and (\ref{estimate of the fraction part}), we have
\begin{align}
&\quad\,\,\left|\frac{\lambda_s}{\lambda}\int \Lambda q\mathscr{L}^{2\Bbbk}\xi_L\right|\notag\\&\lesssim b_1\left(\frac{|\partial_y q|^2}{1+y^{4\Bbbk-2}}\right)^{\frac{1}{2}}\left(\int y^2(1+y^{4\Bbbk-2})\left|\mathscr{L}^{2\Bbbk}\left(\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}(1-\chi_{B_1})T_L\right)\right|^2\right)^{\frac{1}{2}} \lesssim b_1^{1+\eta(1-\delta)}\mathscr{E}_{2\Bbbk}.\label{1 esti of expression of the third line on RHS of oscilation integral}
\end{align}
Again by (\ref{estimate of the fraction part}),
\begin{equation*}
\int |\mathscr{L}^{\Bbbk+1}\xi_L|^2\lesssim \left|\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right|^2\int \left|\mathscr{L}^{\Bbbk+1}\left((1-\chi_{B_1})T_L\right)\right|^2\lesssim b_1^{2+2(2-\delta)\eta}\mathscr{E}_{2\Bbbk},
\end{equation*}
combined with H\"older, we see that
\begin{equation}\label{2 esti of expression of the third line on RHS of oscilation integral}
\left|\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk+1}\xi_L\right|\lesssim b_1^{1+(2-\delta)\eta}\mathscr{E}_{2\Bbbk}.
\end{equation}
By H\"older, (\ref{in energy id esti of widetildePsib term}), (\ref{in energy id esti of widetildeMod term}) and (\ref{esti of L2 norm of LBbbkxiL}), we get \begin{align}
\left|\int \mathscr{L}^{\Bbbk}(-\widetilde{\Psi}_b-\widetilde{Mod})\mathscr{L}^{\Bbbk}\xi_L \right|&\lesssim \left(\int |\mathscr{L}^{\Bbbk}(\widetilde{\Psi}_b+\widetilde{Mod})|^2\right)^{\frac{1}{2}}\left(\int |\mathscr{L}^{\Bbbk}\xi_L|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim b_1^{\eta(1-\delta)+L+1+(1-\delta)(1+\eta)}\sqrt{\mathscr{E}_{2\Bbbk}}+b_1^{1+(1-\delta)\eta}\mathscr{E}_{2\Bbbk}.\label{3 esti of expression of the third line on RHS of oscilation integral}
\end{align}
Similarly, by H\"older, (\ref{in energy id esti of Hq term}), (\ref{in energy id esti of Nq term 2}) and (\ref{esti of L2 norm of LBbbkxiL}), we have
\begin{align}
\left|\int \mathscr{L}^{\Bbbk}(\mathcal{H}(q)-\mathcal{N}(q))\mathscr{L}^{\Bbbk}\xi_L\right|&\lesssim \left(\int \frac{|\mathscr{L}^{\Bbbk-1}(\mathcal{H}(q)-\mathcal{N}(q))|^2}{1+y^4}\right)^{\frac{1}{2}}\left(\int (1+y^4)|\mathscr{L}^{\Bbbk+1}\xi_L|^2\right)^{\frac{1}{2}}\notag\\ &\lesssim b_1^{1+\eta(1-\delta)}\mathscr{E}_{2\Bbbk}+b_1^{\eta(1-\delta)+L+1+(1-\delta)(1+\eta)}\sqrt{\mathscr{E}_{2\Bbbk}}.\label{4 esti of expression of the third line on RHS of oscilation integral}
\end{align}
Substituting (\ref{1 esti of expression of the third line on RHS of oscilation integral})-(\ref{4 esti of expression of the third line on RHS of oscilation integral}) into (\ref{expression of the third line on RHS of oscilation integral}), and then inserting (\ref{first line on RHS of the expreesion of oscilation integral})-(\ref{expression of the third line on RHS of oscilation integral}) into (\ref{expreesion of the oscilation integral}), we see that
\begin{align}
\frac{1}{\lambda^{4\Bbbk-d+4}}\int \mathscr{L}^{\Bbbk}q\mathscr{L}^{\Bbbk}(\partial_s\xi_L)&=\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\mathscr{E}_{2\Bbbk}}{\lambda^{4\Bbbk-d+2}}O(b_1^{\eta(1-\delta)})\right\}\notag\\ &\quad\,\,+\frac{b_1}{\lambda^{4\Bbbk-d+4}}O(b_1^{\eta(1-\delta)}\mathscr{E}_{2\Bbbk}+b_1^{\eta(1-\delta)}b_1^{L+(1-\delta)(1+\eta)}\sqrt{\mathscr{E}_{2\Bbbk}}).\label{estimate of oscilation integral}
\end{align}\par
Now we substitute (\ref{boootstrap assump on E2Bbbk}) into the first term in the second line of (\ref{estimate of oscilation integral}); collecting the estimates (\ref{in energy id esti of widetildePsib term})-(\ref{in energy id esti of widetildeMod term}), (\ref{in energy id esti of Hq term})-(\ref{in energy id esti of Nq term 2}) and (\ref{estimate of oscilation integral}) into (\ref{energy identity first sorted}) then yields the desired estimate (\ref{monotonicity for E2Bbbk}), which concludes the proof.
\end{pf}
\section{Improved bootstrap, transverse crossing and conclusion}
Next we prove the following improved bootstrap estimates.
\begin{prop}\label{improved bootstrap}
Given initial data as in Definition \ref{def of initial data}, assume for some large universal constant $K$ there is $s_0(K)\gg 1$ such that $(b(s),q(s))\in \mathcal{S}_K(s)$ on $s\in [s_0,s_1]$ for some $s_1\ge s_0.$ Then for all $s\in [s_0,s_1],$
\begin{align}
|\mathcal{V}_1(s)|&\le s^{-\frac{\eta}{2}(1-\delta)},\label{improved esti for V1}\\
|b_k(s)|&\lesssim s^{-(k+\eta(1-\delta))},\label{improved esti for bk from l+1 to L}\\
\mathscr{E}_{2m}&\le \left\{\begin{aligned}
&\frac{1}{2}Ks^{-\frac{l(4m-d+2)}{2l-\gamma}},\,\,\, \hbar+2\le m\le l+\hbar,\\
&\frac{1}{2}Ks^{-[2(m-\hbar-1)+2(1-\delta)-K\eta]},\,\,\, l+\hbar+1\le m\le \Bbbk-1,
\end{aligned}\right. \label{improved esti for E2m lower}\\
\mathscr{E}_{2\Bbbk}&\le \frac{1}{2}Ks^{-[2L+2(1-\delta)(1+\eta)]}.\label{improved esti for E2Bbbk}
\end{align}
\end{prop}
\begin{pf}
\normalfont
In order to make use of Proposition \ref{energy estimates}, let us first estimate $\lambda$ in the variable $s.$ By Lemma \ref{linearization of bk from 1 to l} and Definition \ref{bootstrap assump}, we have
\begin{equation*}
b_1(s)=\frac{l}{2l-\gamma}\frac{1}{s}+\frac{\mathcal{U}_1}{s}=\frac{l}{2l-\gamma}\frac{1}{s}+O\left(\frac{1}{s^{1+\frac{\eta(1-\delta)}{2}}}\right),
\end{equation*}
combined with (\ref{modu esti for lambda and bk from 1 to L-1}), we see that
\begin{equation*}
-\frac{\lambda_s}{\lambda}=b_1+O(b_1^{L+1+(1-\delta)(1+\eta)})=\frac{l}{2l-\gamma}\frac{1}{s}+O\left(\frac{1}{s^{1+\frac{\eta(1-\delta)}{2}}}\right).
\end{equation*}
In other words, \begin{equation}\label{dynamic estimate of lambda}
\left|\partial_s\ln\left(s^{\frac{l}{2l-\gamma}}\lambda\right)\right|\lesssim \frac{1}{s^{1+\frac{\eta(1-\delta)}{2}}}.
\end{equation}
Integrating (\ref{dynamic estimate of lambda}) from $s_0$ to $s$ gives
\begin{equation*}
e^{-s_0^{-\frac{\eta(1-\delta)}{2}}}\lesssim \frac{\lambda(s)s^{\frac{l}{2l-\gamma}}}{s_0^{\frac{l}{2l-\gamma}}}\lesssim e^{s_0^{-\frac{\eta(1-\delta)}{2}}},
\end{equation*}
thus\begin{equation}\label{estimate of lambda in s}
\lambda(s)\simeq \left(\frac{s_0}{s}\right)^{\frac{l}{2l-\gamma}}.
\end{equation}\par
Improved bound for $\mathscr{E}_{2\Bbbk}:$ Integrating (\ref{monotonicity for E2Bbbk}) from $s_0$ to $s$ gives
\begin{align*}
\frac{\mathscr{E}_{2\Bbbk}(s)}{\lambda(s)^{4\Bbbk-d+2}}\left(1+O(b_1(s)^{\eta(1-\delta)})\right)&\lesssim \mathscr{E}_{2\Bbbk}(s_0)\left(1+O(b_1(s_0)^{\eta(1-\delta)})\right)\\ &\quad\,\,+\int_{s_0}^s \frac{b_1(\tau)}{\lambda(\tau)^{4\Bbbk-d+4}}\Bigg(b_1(\tau)^{L+(1-\delta)(1+\eta)}\sqrt{\mathscr{E}_{2\Bbbk}(\tau)}\\&\quad\,\,+\frac{\mathscr{E}_{2\Bbbk}(\tau)}{M^{2\delta}}+b_1(\tau)^{2L+2(1-\delta)(1+\eta)}\Bigg)\,\mathrm{d}\tau,
\end{align*}
where we used the assumption $\lambda(s_0)=1.$ Then applying (\ref{estimate of lambda in s}), $b_1(s)\simeq \frac{c_1}{s},$ Definition \ref{def of initial data} and Definition \ref{bootstrap assump}, and for convenience discarding $\frac{1}{\lambda^2}$ in the integral (note that $\frac{1}{\lambda(\tau)^2}\lesssim \left(\frac{s_1}{s_0}\right)^{\frac{l}{2l-\gamma}}\lesssim 1$), we get
\begin{align*}
\mathscr{E}_{2\Bbbk}(s)&\lesssim s_0^{-\frac{l}{2l-\gamma}[6L-4(1-\delta)+2\gamma]}s^{-\frac{l}{2l-\gamma}[4L+4(1-\delta)-2\gamma]}\\&\quad\,\,+\left(\sqrt{K}+\frac{K}{M^{2\delta}}+1\right)s^{-\frac{l}{2l-\gamma}(4\Bbbk-d+2)}\int_{s_0}^s \tau^{-(2L+1+2(1-\delta)(1+\eta))+\frac{l}{2l-\gamma}(4\Bbbk-d+2)}\,\mathrm{d}\tau\\&\lesssim s_0^{-(3L+\gamma)}s^{-(2L-\gamma)}+\left(\sqrt{K}+\frac{K}{M^{2\delta}}+1\right)s^{-(2L+2(1-\delta)(1+\eta))} \le \frac{K}{2}s^{-(2L+2(1-\delta)(1+\eta))},
\end{align*}
where in the second inequality we used the fact that
\begin{align*}
&\quad\,\,\frac{l}{2l-\gamma}(4\Bbbk-d+2)-2L-2(1-\delta)(1+\eta)\\&=2\left(\frac{2l}{2l-\gamma}-1\right)L+2\left(\frac{2l}{2l-\gamma}-(1+\eta)\right)(1-\delta)-\frac{2l}{2l-\gamma}\gamma>0.
\end{align*}\par
Improved bound for $\mathscr{E}_{2m}$ with $\hbar+2\le m\le l+\hbar$ : Similarly, one integrates (\ref{monotoncity for E2m}) from $s_0$ to $s,$ applies (\ref{estimate of lambda in s}), $b_1(s)\simeq \frac{c_1}{s},$ Definition \ref{def of initial data} and Definition \ref{bootstrap assump}, and discards $\frac{1}{\lambda^2}$ in the integral; after some direct computations, which we omit, one gets $\mathscr{E}_{2m}(s)\le \frac{K}{2}s^{-\frac{l}{2l-\gamma}(4m-d+2)}.$ For the same reason, we omit the proof of the improved bound for $\mathscr{E}_{2m}$ with $l+\hbar+1\le m\le \Bbbk-1.$\par
Improved bound for $b_k$ with $l+1\le k\le L$ : We aim to prove (\ref{improved esti for bk from l+1 to L}) by induction on $k.$ When $k=L,$ denote\begin{equation*}
\widetilde{b}_L:=b_L+\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}.
\end{equation*}
Note that by (\ref{estimate of the fraction part}) and (\ref{boootstrap assump on E2Bbbk}), we have \begin{equation}\label{fraction part detailed}
\left|\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right|\lesssim b_1^{L+\eta(1-\delta)}.
\end{equation}
Hence $|\widetilde{b}_L|\lesssim b_1^L.$ A direct calculation gives
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}s}\frac{\widetilde{b}_L(s)}{\lambda(s)^{2L-\gamma}}=\frac{1}{\lambda^{2L-\gamma}}\left[(\widetilde{b}_L)_s+(2L-\gamma)b_1\widetilde{b}_L-(2L-\gamma)\left(\frac{\lambda_s}{\lambda}+b_1\right)\widetilde{b}_L\right].
\end{equation*}
Note that by (\ref{improved estimate for b_L}), (\ref{boootstrap assump on E2Bbbk}) and (\ref{fraction part detailed}), we see that \begin{align*}
|(\widetilde{b}_L)_s+(2L-\gamma)b_1\widetilde{b}_L|&=\left|(b_L)_s+\partial_s \frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}+(2L-\gamma)b_1b_L+(2L-\gamma)b_1\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right|\\ &\lesssim \frac{1}{B_0^{2\delta}}\left(c(M)\sqrt{\mathscr{E}_{2\Bbbk}}+b_1^{L+1+(1-\delta)-C\eta}\right)+b_1^{1+L+\eta(1-\delta)}\lesssim b_1^{L+1+\eta(1-\delta)}.
\end{align*}
Then by (\ref{modu esti for lambda and bk from 1 to L-1}), we get
\begin{equation}\label{dynamic for widetildebL}
\left|\frac{\mathrm{d}}{\mathrm{d}s}\frac{\widetilde{b}_L(s)}{\lambda(s)^{2L-\gamma}}\right|\lesssim \frac{b_1^{L+1+\eta(1-\delta)}}{\lambda^{2L-\gamma}}.
\end{equation}
Then integrating (\ref{dynamic for widetildebL}) from $s_0$ to $s,$ applying (\ref{estimate of lambda in s}), Definition \ref{def of initial data}, $b_1(s)\simeq \frac{c_1}{s}$ and the fact that \begin{equation*}
-(L+1+\eta(1-\delta))+(2L-\gamma)\frac{l}{2l-\gamma}=\frac{l\gamma}{2l-\gamma}\left(L-1-\frac{1}{2}\left(1-\frac{1}{l}\right)\gamma\right)-\eta(1-\delta)-1>-1,
\end{equation*}
we see $|\widetilde{b}_L|\lesssim s^{-L-\eta(1-\delta)}.$ Thus $|b_L|\lesssim |\widetilde{b}_L|+\left|\frac{\langle \mathscr{L}^L q,\chi_{B_0}\Lambda Q\rangle}{\langle \chi_{B_0}\Lambda Q,\Lambda Q\rangle}\right|\lesssim s^{-L-\eta(1-\delta)}.$ Assuming (\ref{improved esti for bk from l+1 to L}) holds for $k+1,$ we aim to show the case for $k.$ By induction hypothesis and (\ref{modu esti for lambda and bk from 1 to L-1}), we have
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}s}\frac{b_k(s)}{\lambda^{2k-\gamma}}&=\frac{1}{\lambda^{2k-\gamma}}\left((b_k)_s+(2k-\gamma)b_1b_k-b_{k+1}-(2k-\gamma)\left(\frac{\lambda_s}{\lambda}+b_1\right)b_k+b_{k+1}\right)\notag\\ &\lesssim \frac{1}{\lambda^{2k-\gamma}}(b_1^{L+1+(1-\delta)(1+\eta)}+b_1^{k+L+1+(1-\delta)(1+\eta)}+b_1^{k+1+\eta(1-\delta)})\lesssim \frac{b_1^{k+1+\eta(1-\delta)}}{\lambda^{2k-\gamma}}.\label{dynamic of b_k for l+1lekleL-1}
\end{align}
Similarly, integrating (\ref{dynamic of b_k for l+1lekleL-1}) from $s_0$ to $s$, then applying (\ref{estimate of lambda in s}), Definition \ref{def of initial data} and the fact that \begin{equation*}
-(k+1+\eta(1-\delta))+\frac{l}{2l-\gamma}(2k-\gamma)=\frac{\gamma}{2l-\gamma}(k-l)-\eta(1-\delta)-1>-1,
\end{equation*}
we see that $|b_k(s)|\lesssim s^{-k-\eta(1-\delta)},$ which concludes the proof of (\ref{improved esti for bk from l+1 to L}).\par
Improved bound for $\mathcal{V}_1(s)$ : By direct computation, for any $1\le k\le l,$
\begin{equation*}
s(\mathcal{V}_k)_s=\sum\limits_{j=1}^{l-1}(P_l)_{k,j}[s(\mathcal{U}_j)_s-(A_l\mathcal{U})_j]+(P_l)_{k,l}[s(\mathcal{U}_l)_s-(A_l\mathcal{U})_l]+(D_l\mathcal{V})_k.
\end{equation*}
Note that by Lemma \ref{linearization of bk from 1 to l}, Proposition \ref{modulation estimates}, Definition \ref{bootstrap assump} and (\ref{improved esti for bk from l+1 to L}), we get
\begin{align*}
|s(\mathcal{U}_j)_s-(A_l\mathcal{U})_j|&\lesssim s^{j+1}[(b_j)_s+(2j-\gamma)b_1b_j-b_{j+1}]+O(|\mathcal{U}|^2)\\&\lesssim s^{-\eta(1-\delta)},\\ |s(\mathcal{U}_l)_s-(A_l\mathcal{U})_l|&\lesssim s^{l+1}[(b_l)_s+(2l-\gamma)b_1b_l-b_{l+1}]+s^{l+1}b_{l+1}+O(|\mathcal{U}|^2)\\ &\lesssim s^{-\eta(1-\delta)}.
\end{align*}
Therefore,
\begin{equation}\label{dynamic for mathcalV}
s\mathcal{V}_s=D_l\mathcal{V}+O(s^{-\eta(1-\delta)}).
\end{equation}
In particular,\begin{equation}\label{dynamic for mathcalV1}
|(s\mathcal{V}_1)_s|\lesssim s^{-\eta(1-\delta)}.
\end{equation}
Integrating (\ref{dynamic for mathcalV1}) from $s_0$ to $s$, then applying Definition \ref{def of initial data}, we see that \begin{equation*}
\left(\frac{s_0}{s}\right)^{1-\frac{\eta(1-\delta)}{2}}-Cs^{-\frac{\eta(1-\delta)}{2}}\le s^{\frac{\eta(1-\delta)}{2}}\mathcal{V}_1(s)\le \left(\frac{s_0}{s}\right)^{1-\frac{\eta(1-\delta)}{2}}+Cs^{-\frac{\eta(1-\delta)}{2}}
\end{equation*}
for $s_0<s\le s_1.$ This concludes the proof of (\ref{improved esti for V1}).
\end{pf}
Next we give the reduction to a finite-dimensional problem and the transverse crossing property.
\begin{prop}\label{reduction to finite dim and transverse crossing}
There exists $K_1\ge 1$ such that for any $K\ge K_1$, there exists $s_{0,1}(K)>1$ such that for all $s_0\ge s_{0,1}(K)$ the following holds. Given initial data at $s=s_0$ as in Definition \ref{def of initial data}, if $(b(s),q(s))\in \mathcal{S}_K(s)$ for all $s\in [s_0,s_1]$ with $(b(s),q(s))\in \partial \mathcal{S}_K(s_1)$ for some $s_1\ge s_0,$ then\\
\textnormal{(\romannumeral1)} Reduction to a finite-dimensional problem: \begin{equation}\label{reduction to a finite dim pb}
s_1^{\frac{\eta}{2}(1-\delta)}(\mathcal{V}_2(s_1),\cdots,\mathcal{V}_l(s_1))\in \partial \mathcal{B}_{l-1}(0,1).
\end{equation}
\textnormal{(\romannumeral2)} Transverse crossing:
\begin{equation}\label{transverse crossing}
\frac{\mathrm{d}}{\mathrm{d}s}\bigg\vert_{s=s_1} \sum\limits_{i=2}^l |s^{\frac{\eta}{2}(1-\delta)}\mathcal{V}_i(s)|^2>0.
\end{equation}
\end{prop}
\begin{pf}
\normalfont
\textnormal{(\romannumeral1)} is a direct consequence of Proposition \ref{improved bootstrap}. Let us prove \textnormal{(\romannumeral2)}. Note that by (\ref{dynamic for mathcalV}),
\begin{equation*}
s(\mathcal{V}_i)_s=\frac{i\gamma}{2l-\gamma}\mathcal{V}_i+O(s^{-\eta(1-\delta)}),\,\,\,\text{for}\,\,\,2\le i\le l.
\end{equation*}
Combined with Definition \ref{bootstrap assump}, we have
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}s}\Bigg|_{s=s_1}\sum\limits_{i=2}^l\left|s^{\frac{\eta(1-\delta)}{2}}\mathcal{V}_i(s)\right|^2&=2s^{\eta(1-\delta)-1}\sum\limits_{i=2}^l\left(\frac{\eta(1-\delta)}{2}\mathcal{V}_i^2+s(\mathcal{V}_i)_s\mathcal{V}_i\right)\\ &=2s^{\eta(1-\delta)-1}\left\{\sum\limits_{i=2}^l\left(\frac{i\gamma}{2l-\gamma}+\frac{\eta(1-\delta)}{2}\right)\mathcal{V}_i^2+O(s^{-\frac{3\eta(1-\delta)}{2}})\right\}\\ &\ge \frac{C}{s}\sum\limits_{i=2}^l\left|s^{\frac{\eta(1-\delta)}{2}}\mathcal{V}_i(s)\right|^2\Bigg|_{s=s_1}+O(s^{-\frac{\eta(1-\delta)}{2}-1})\gtrsim \frac{1}{s}>0.
\end{align*}
\end{pf}
We are now in a position to show the existence of solutions to (\ref{eq: renormalized flow}) that are trapped in $\mathcal{S}_K(s)$ for all large $s.$
\begin{prop}\label{existence od sol trapped for large rescaled time}
There exists $K_2\ge 1$ such that for any $K\ge K_2,$ there exists $s_{0,2}(K)>1$ such that for all $s_0\ge s_{0,2},$ there exists initial data satisfying Definition \ref{def of initial data} such that $(b(s),q(s))\in \mathcal{S}_K(s)$ for all $s\ge s_0.$
\end{prop}
\begin{pf}
\normalfont
We argue by contradiction: suppose that for every choice of initial data satisfying Definition \ref{def of initial data}, the solution exits $\mathcal{S}_K$ in finite time, so that the exit time
\begin{equation*}
s_{*}:=\sup \{s\ge s_0 \mid (b(\tau),q(\tau))\in \mathcal{S}_K(\tau)\,\,\,\text{for all}\,\,\,\tau\in [s_0,s]\}
\end{equation*}
satisfies $s_{*}<+\infty.$ Define the map \begin{align*}
\Xi\,\colon \,\mathcal{B}_{l-1}(0,1)&\longrightarrow \partial \mathcal{B}_{l-1}(0,1)\\ s_0^{\frac{\eta(1-\delta)}{2}}(\mathcal{V}_2(s_0),\cdots,\mathcal{V}_l(s_0))&\longmapsto s_{*}^{\frac{\eta(1-\delta)}{2}}(\mathcal{V}_2(s_{*}),\cdots,\mathcal{V}_l(s_{*})).
\end{align*}
By \textnormal{(\romannumeral1)} of Proposition \ref{reduction to finite dim and transverse crossing}, $\Xi$ is well defined. By \textnormal{(\romannumeral2)} of Proposition \ref{reduction to finite dim and transverse crossing}, the restriction of $\Xi$ to ${\partial \mathcal{B}_{l-1}(0,1)}$ is the identity map. Thus $\Xi$ would be a continuous map from the closed ball onto its boundary fixing the boundary, which is impossible by the no-retraction theorem; this concludes the proof.
\end{pf}
Now we finish the proof of Theorem \ref{main thm}.\par
Expression of $\lambda$ in the original time variable: recalling the proof of (\ref{estimate of lambda in s}), we have
\begin{equation*}
-\lambda_t\lambda=c(u_0)\lambda^{\frac{2l-\gamma}{l}}(1+o(1)),
\end{equation*}
that is,
\begin{equation*}
\partial_t\left(\lambda^{\frac{\gamma}{l}}\right)=-c(u_0)(1+o(1)).
\end{equation*}
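The cancellation of exponents behind the last display can be checked directly:
\begin{equation*}
\partial_t\left(\lambda^{\frac{\gamma}{l}}\right)=\frac{\gamma}{l}\lambda^{\frac{\gamma}{l}-1}\lambda_t=-\frac{\gamma}{l}c(u_0)\lambda^{\frac{\gamma}{l}-2+\frac{2l-\gamma}{l}}(1+o(1)),\qquad \frac{\gamma}{l}-2+\frac{2l-\gamma}{l}=0,
\end{equation*}
where the harmless factor $\frac{\gamma}{l}$ is absorbed into $c(u_0).$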
Then integrating from $t$ to $T$ and using $\lambda(T)=0$ at the blow-up time, we see that
\begin{equation*}
\lambda(t)=c(u_0)(1+o(1))(T-t)^{\frac{l}{\gamma}}.
\end{equation*}\par
On the smallness of Sobolev norms of $q.$ By \textnormal{(\romannumeral3)} of Lemma \ref{coercivity-determined esti on q} and Definition \ref{bootstrap assump}, we have
\begin{equation*}
\int |\partial_y^{2m}q|^2\lesssim \mathscr{E}_{2m}\rightarrow 0,\,\,\,\text{as}\,\,\,s\rightarrow \infty, \,\,\,\text{for}\,\,\,\hbar+2\le m\le \Bbbk.
\end{equation*}
This concludes the proof of Theorem \ref{main thm}.
\end{document}
Good news: Shahid Kapoor and Deepika Padukone will be seen together once again | Sandhya Pravakta
Maharaja Ratan Singh — Bollywood's own Shahid Kapoor — will soon have a wax statue on display at Madame Tussauds in London. After Deepika Padukone, her Padmaavat co-star Shahid Kapoor will also be seen at Madame Tussauds. Shahid announced this himself by sharing a photo on Instagram in which he is holding a prosthetic eye, captioned: "Keep an eye out. Coming soon." Like Shahid, Deepika has posted a similar picture on her Instagram holding a prosthetic eye. Shahid Kapoor will next be seen in the film Batti Gul Meter Chalu, directed by Shree Narayan, which also stars Shraddha Kapoor and Yami Gautam. Beyond films, Shahid is soon to become a father for the second time. Deepika Padukone's statue will be installed in both the London and Delhi branches of Madame Tussauds, while the location of Shahid's statue has not yet been finalized. Bollywood stalwarts such as Amitabh Bachchan, Shah Rukh Khan, Aishwarya Rai and Salman Khan already have statues at Madame Tussauds London. 2018 has proven an excellent year for actress Deepika Padukone, beginning with the record-breaking earnings of Padmaavat. Her portrayal of Rani Padmini won over audiences, and the film became the year's first to earn a resounding ₹300 crore. With this, Deepika became the only Bollywood actress to enter the ₹300-crore club with a female-led film.
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/. */
// Work around https://github.com/rust-lang/rust/issues/62132
#![recursion_limit = "128"]
//! The layout thread. Performs layout on the DOM, builds display lists and sends them to be
//! painted.
#[macro_use]
extern crate crossbeam_channel;
#[macro_use]
extern crate html5ever;
#[macro_use]
extern crate layout;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate log;
#[macro_use]
extern crate profile_traits;
mod dom_wrapper;
use crate::dom_wrapper::drop_style_and_layout_data;
use crate::dom_wrapper::{ServoLayoutDocument, ServoLayoutElement, ServoLayoutNode};
use app_units::Au;
use crossbeam_channel::{unbounded, Receiver, Sender};
use embedder_traits::resources::{self, Resource};
use euclid::{default::Size2D as UntypedSize2D, Point2D, Rect, Scale, Size2D};
use fnv::FnvHashMap;
use fxhash::{FxHashMap, FxHashSet};
use gfx::font;
use gfx::font_cache_thread::FontCacheThread;
use gfx::font_context;
use gfx_traits::{node_id_from_scroll_id, Epoch};
use histogram::Histogram;
use ipc_channel::ipc::{self, IpcReceiver, IpcSender};
use ipc_channel::router::ROUTER;
use layout::animation;
use layout::construct::ConstructionResult;
use layout::context::malloc_size_of_persistent_local_context;
use layout::context::LayoutContext;
use layout::context::RegisteredPainter;
use layout::context::RegisteredPainters;
use layout::display_list::items::{OpaqueNode, WebRenderImageInfo};
use layout::display_list::{IndexableText, ToLayout, WebRenderDisplayListConverter};
use layout::flow::{Flow, GetBaseFlow, ImmutableFlowUtils, MutableOwnedFlowUtils};
use layout::flow_ref::FlowRef;
use layout::incremental::{RelayoutMode, SpecialRestyleDamage};
use layout::layout_debug;
use layout::parallel;
use layout::query::{
process_content_box_request, process_content_boxes_request, LayoutRPCImpl, LayoutThreadData,
};
use layout::query::{process_element_inner_text_query, process_node_geometry_request};
use layout::query::{process_node_scroll_area_request, process_node_scroll_id_request};
use layout::query::{
process_offset_parent_query, process_resolved_style_request, process_style_query,
};
use layout::sequential;
use layout::traversal::{
ComputeStackingRelativePositions, PreorderFlowTraversal, RecalcStyleAndConstructFlows,
};
use layout::wrapper::LayoutNodeLayoutData;
use layout_traits::LayoutThreadFactory;
use libc::c_void;
use malloc_size_of::{MallocSizeOf, MallocSizeOfOps};
use metrics::{PaintTimeMetrics, ProfilerMetadataFactory, ProgressiveWebMetric};
use msg::constellation_msg::{
BackgroundHangMonitor, BackgroundHangMonitorRegister, HangAnnotation,
};
use msg::constellation_msg::{BrowsingContextId, MonitoredComponentId, TopLevelBrowsingContextId};
use msg::constellation_msg::{LayoutHangAnnotation, MonitoredComponentType, PipelineId};
use net_traits::image_cache::{ImageCache, UsePlaceholder};
use parking_lot::RwLock;
use profile_traits::mem::{self as profile_mem, Report, ReportKind, ReportsChan};
use profile_traits::time::{self as profile_time, profile, TimerMetadata};
use profile_traits::time::{TimerMetadataFrameType, TimerMetadataReflowType};
use script_layout_interface::message::{LayoutThreadInit, Msg, NodesFromPointQueryType, Reflow};
use script_layout_interface::message::{QueryMsg, ReflowComplete, ReflowGoal, ScriptReflow};
use script_layout_interface::rpc::TextIndexResponse;
use script_layout_interface::rpc::{LayoutRPC, OffsetParentResponse, StyleResponse};
use script_layout_interface::wrapper_traits::LayoutNode;
use script_traits::Painter;
use script_traits::{ConstellationControlMsg, LayoutControlMsg, LayoutMsg as ConstellationMsg};
use script_traits::{DrawAPaintImageResult, IFrameSizeMsg, PaintWorkletError, WindowSizeType};
use script_traits::{ScrollState, UntrustedNodeAddress};
use selectors::Element;
use servo_arc::Arc as ServoArc;
use servo_atoms::Atom;
use servo_config::opts;
use servo_config::pref;
use servo_geometry::{DeviceIndependentPixel, MaxRect};
use servo_url::ServoUrl;
use std::borrow::ToOwned;
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::ops::{Deref, DerefMut};
use std::process;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex, MutexGuard};
use std::thread;
use std::time::Duration;
use style::animation::Animation;
use style::context::{QuirksMode, RegisteredSpeculativePainter, RegisteredSpeculativePainters};
use style::context::{SharedStyleContext, ThreadLocalStyleContextCreationInfo};
use style::dom::{ShowSubtree, ShowSubtreeDataAndPrimaryValues, TDocument, TElement, TNode};
use style::driver;
use style::error_reporting::RustLogReporter;
use style::global_style_data::{GLOBAL_STYLE_DATA, STYLE_THREAD_POOL};
use style::invalidation::element::restyle_hints::RestyleHint;
use style::logical_geometry::LogicalPoint;
use style::media_queries::{Device, MediaList, MediaType};
use style::properties::PropertyId;
use style::selector_parser::SnapshotMap;
use style::servo::restyle_damage::ServoRestyleDamage;
use style::shared_lock::{SharedRwLock, SharedRwLockReadGuard, StylesheetGuards};
use style::stylesheets::{
DocumentStyleSheet, Origin, Stylesheet, StylesheetInDocument, UserAgentStylesheets,
};
use style::stylist::Stylist;
use style::thread_state::{self, ThreadState};
use style::timer::Timer;
use style::traversal::DomTraversal;
use style::traversal_flags::TraversalFlags;
use style_traits::CSSPixel;
use style_traits::DevicePixel;
use style_traits::SpeculativePainter;
/// Information needed by the layout thread.
pub struct LayoutThread {
/// The ID of the pipeline that we belong to.
id: PipelineId,
/// The ID of the top-level browsing context that we belong to.
top_level_browsing_context_id: TopLevelBrowsingContextId,
/// The URL of the pipeline that we belong to.
url: ServoUrl,
/// Performs CSS selector matching and style resolution.
stylist: Stylist,
/// Is the current reflow of an iframe, as opposed to a root window?
is_iframe: bool,
/// The port on which we receive messages from the script thread.
port: Receiver<Msg>,
/// The port on which we receive messages from the constellation.
pipeline_port: Receiver<LayoutControlMsg>,
/// The port on which we receive messages from the font cache thread.
font_cache_receiver: Receiver<()>,
/// The channel on which the font cache can send messages to us.
font_cache_sender: IpcSender<()>,
/// A means of communication with the background hang monitor.
background_hang_monitor: Box<dyn BackgroundHangMonitor>,
/// The channel on which messages can be sent to the constellation.
constellation_chan: IpcSender<ConstellationMsg>,
/// The channel on which messages can be sent to the script thread.
script_chan: IpcSender<ConstellationControlMsg>,
/// The channel on which messages can be sent to the time profiler.
time_profiler_chan: profile_time::ProfilerChan,
/// The channel on which messages can be sent to the memory profiler.
mem_profiler_chan: profile_mem::ProfilerChan,
/// Reference to the script thread image cache.
image_cache: Arc<dyn ImageCache>,
/// Public interface to the font cache thread.
font_cache_thread: FontCacheThread,
/// Is this the first reflow in this LayoutThread?
first_reflow: Cell<bool>,
/// Flag to indicate whether to use parallel operations
parallel_flag: bool,
/// Starts at zero, and increased by one every time a layout completes.
/// This can be used to easily check for invalid stale data.
generation: Cell<u32>,
/// A channel on which new animations that have been triggered by style recalculation can be
/// sent.
new_animations_sender: Sender<Animation>,
/// Receives newly-discovered animations.
new_animations_receiver: Receiver<Animation>,
/// The number of Web fonts that have been requested but not yet loaded.
outstanding_web_fonts: Arc<AtomicUsize>,
/// The root of the flow tree.
root_flow: RefCell<Option<FlowRef>>,
/// The document-specific shared lock used for author-origin stylesheets
document_shared_lock: Option<SharedRwLock>,
/// The list of currently-running animations.
running_animations: ServoArc<RwLock<FxHashMap<OpaqueNode, Vec<Animation>>>>,
/// The list of animations that have expired since the last style recalculation.
expired_animations: ServoArc<RwLock<FxHashMap<OpaqueNode, Vec<Animation>>>>,
/// A counter for epoch messages
epoch: Cell<Epoch>,
/// The size of the viewport. This may be different from the size of the screen due to viewport
/// constraints.
viewport_size: UntypedSize2D<Au>,
/// A mutex to allow for fast, read-only RPC of layout's internal data
/// structures, while still letting the LayoutThread modify them.
///
/// All the other elements of this struct are read-only.
rw_data: Arc<Mutex<LayoutThreadData>>,
webrender_image_cache: Arc<RwLock<FnvHashMap<(ServoUrl, UsePlaceholder), WebRenderImageInfo>>>,
/// The executors for paint worklets.
registered_painters: RegisteredPaintersImpl,
/// Webrender interface.
webrender_api: webrender_api::RenderApi,
/// Webrender document.
webrender_document: webrender_api::DocumentId,
/// The timer object to control the timing of the animations. This should
/// only be a test-mode timer during testing for animations.
timer: Timer,
/// Paint time metrics.
paint_time_metrics: PaintTimeMetrics,
/// The time a layout query has waited before serviced by layout thread.
layout_query_waiting_time: Histogram,
/// The sizes of all iframes encountered during the last layout operation.
last_iframe_sizes: RefCell<HashMap<BrowsingContextId, Size2D<f32, CSSPixel>>>,
/// Flag that indicates if LayoutThread is busy handling a request.
busy: Arc<AtomicBool>,
/// Load web fonts synchronously to avoid non-deterministic network-driven reflows.
load_webfonts_synchronously: bool,
/// The initial request size of the window
initial_window_size: Size2D<u32, DeviceIndependentPixel>,
/// The ratio of device pixels per px at the default scale.
/// If unspecified, will use the platform default setting.
device_pixels_per_px: Option<f32>,
/// Dumps the display list after a layout.
dump_display_list: bool,
/// Dumps the display list in JSON form after a layout.
dump_display_list_json: bool,
/// Dumps the DOM after restyle.
dump_style_tree: bool,
/// Dumps the rule tree after a layout.
dump_rule_tree: bool,
/// Emits notifications when there is a relayout.
relayout_event: bool,
/// True to turn off incremental layout.
nonincremental_layout: bool,
/// True if each step of layout is traced to an external JSON file
/// for debugging purposes. Setting this implies sequential layout
/// and paint.
trace_layout: bool,
/// Dumps the flow tree after a layout.
dump_flow_tree: bool,
}
impl LayoutThreadFactory for LayoutThread {
type Message = Msg;
/// Spawns a new layout thread.
fn create(
id: PipelineId,
top_level_browsing_context_id: TopLevelBrowsingContextId,
url: ServoUrl,
is_iframe: bool,
chan: (Sender<Msg>, Receiver<Msg>),
pipeline_port: IpcReceiver<LayoutControlMsg>,
background_hang_monitor_register: Box<dyn BackgroundHangMonitorRegister>,
constellation_chan: IpcSender<ConstellationMsg>,
script_chan: IpcSender<ConstellationControlMsg>,
image_cache: Arc<dyn ImageCache>,
font_cache_thread: FontCacheThread,
time_profiler_chan: profile_time::ProfilerChan,
mem_profiler_chan: profile_mem::ProfilerChan,
webrender_api_sender: webrender_api::RenderApiSender,
webrender_document: webrender_api::DocumentId,
paint_time_metrics: PaintTimeMetrics,
busy: Arc<AtomicBool>,
load_webfonts_synchronously: bool,
initial_window_size: Size2D<u32, DeviceIndependentPixel>,
device_pixels_per_px: Option<f32>,
dump_display_list: bool,
dump_display_list_json: bool,
dump_style_tree: bool,
dump_rule_tree: bool,
relayout_event: bool,
nonincremental_layout: bool,
trace_layout: bool,
dump_flow_tree: bool,
) {
thread::Builder::new()
.name(format!("LayoutThread {:?}", id))
.spawn(move || {
thread_state::initialize(ThreadState::LAYOUT);
// In order to get accurate crash reports, we install the top-level bc id.
TopLevelBrowsingContextId::install(top_level_browsing_context_id);
{
// Ensures layout thread is destroyed before we send shutdown message
let sender = chan.0;
let background_hang_monitor = background_hang_monitor_register
.register_component(
MonitoredComponentId(id, MonitoredComponentType::Layout),
Duration::from_millis(1000),
Duration::from_millis(5000),
);
let layout = LayoutThread::new(
id,
top_level_browsing_context_id,
url,
is_iframe,
chan.1,
pipeline_port,
background_hang_monitor,
constellation_chan,
script_chan,
image_cache.clone(),
font_cache_thread,
time_profiler_chan,
mem_profiler_chan.clone(),
webrender_api_sender,
webrender_document,
paint_time_metrics,
busy,
load_webfonts_synchronously,
initial_window_size,
device_pixels_per_px,
dump_display_list,
dump_display_list_json,
dump_style_tree,
dump_rule_tree,
relayout_event,
nonincremental_layout,
trace_layout,
dump_flow_tree,
);
let reporter_name = format!("layout-reporter-{}", id);
mem_profiler_chan.run_with_memory_reporting(
|| {
layout.start();
},
reporter_name,
sender,
Msg::CollectReports,
);
}
})
.expect("Thread spawning failed");
}
}
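The `create` implementation above follows a common ownership split: the spawning side keeps the `Sender` half of the channel while the named worker thread takes ownership of the `Receiver` and runs its message loop until shutdown. A minimal, self-contained sketch of that pattern (the names `spawn_worker`/`demo` are illustrative, not part of Servo):

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Sketch of the create() pattern: the caller keeps the Sender, while the
// worker thread owns the Receiver and drains messages until the channel
// closes, mirroring LayoutThread's start()/request loop.
fn spawn_worker(name: &str) -> (Sender<i32>, thread::JoinHandle<i32>) {
    let (tx, rx) = channel();
    let handle = thread::Builder::new()
        .name(format!("Worker {}", name))
        .spawn(move || {
            // Consume every message; recv ends when all Senders are dropped.
            rx.iter().sum()
        })
        .expect("Thread spawning failed");
    (tx, handle)
}

fn demo() -> i32 {
    let (tx, handle) = spawn_worker("layout");
    for v in 1..=4 {
        tx.send(v).unwrap();
    }
    drop(tx); // closing the channel ends the worker loop
    handle.join().unwrap()
}

fn main() {
    assert_eq!(demo(), 10);
}
```

As in `create`, naming the thread via `thread::Builder` makes hang reports and profiles attributable to the right pipeline.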
struct ScriptReflowResult {
script_reflow: ScriptReflow,
result: RefCell<Option<ReflowComplete>>,
}
impl Deref for ScriptReflowResult {
type Target = ScriptReflow;
fn deref(&self) -> &ScriptReflow {
&self.script_reflow
}
}
impl DerefMut for ScriptReflowResult {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.script_reflow
}
}
impl ScriptReflowResult {
fn new(script_reflow: ScriptReflow) -> ScriptReflowResult {
ScriptReflowResult {
script_reflow: script_reflow,
result: RefCell::new(Some(Default::default())),
}
}
}
impl Drop for ScriptReflowResult {
fn drop(&mut self) {
self.script_reflow
.script_join_chan
.send(self.result.borrow_mut().take().unwrap())
.unwrap();
}
}
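
// A minimal, self-contained sketch of the send-on-drop RAII pattern used by
// `ScriptReflowResult` above (hypothetical names, simplified types): even an
// early `return` from the reflow path still unblocks the waiting receiver,
// because the reply is sent from `Drop` rather than at an explicit call site.
#[cfg(test)]
mod send_on_drop_sketch {
    use std::sync::mpsc::{channel, Sender};

    struct ReplyOnDrop {
        chan: Sender<u32>,
        result: u32,
    }

    impl Drop for ReplyOnDrop {
        fn drop(&mut self) {
            // Deliver the result no matter how the wrapper goes out of scope.
            self.chan.send(self.result).unwrap();
        }
    }

    #[test]
    fn drop_sends_result() {
        let (tx, rx) = channel();
        drop(ReplyOnDrop { chan: tx, result: 7 });
        assert_eq!(rx.recv().unwrap(), 7);
    }
}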
/// The `LayoutThread` `rw_data` lock must remain locked until the first reflow,
/// as RPC calls don't make sense until then. Use this in combination with
/// `LayoutThread::lock_rw_data` and `LayoutThread::return_rw_data`.
pub enum RWGuard<'a> {
/// If the lock was previously held, from when the thread started.
Held(MutexGuard<'a, LayoutThreadData>),
    /// If the lock was acquired for this use only, because the startup lock
    /// has already been released after the first reflow.
    Used(MutexGuard<'a, LayoutThreadData>),
}
impl<'a> Deref for RWGuard<'a> {
type Target = LayoutThreadData;
fn deref(&self) -> &LayoutThreadData {
match *self {
RWGuard::Held(ref x) => &**x,
RWGuard::Used(ref x) => &**x,
}
}
}
impl<'a> DerefMut for RWGuard<'a> {
fn deref_mut(&mut self) -> &mut LayoutThreadData {
match *self {
RWGuard::Held(ref mut x) => &mut **x,
RWGuard::Used(ref mut x) => &mut **x,
}
}
}
/// Bundles the shared `rw_data` mutex with the guard that may still be held
/// from thread startup (see `RWGuard`).
struct RwData<'a, 'b: 'a> {
rw_data: &'b Arc<Mutex<LayoutThreadData>>,
possibly_locked_rw_data: &'a mut Option<MutexGuard<'b, LayoutThreadData>>,
}
impl<'a, 'b: 'a> RwData<'a, 'b> {
/// If no reflow has happened yet, this will just return the lock in
/// `possibly_locked_rw_data`. Otherwise, it will acquire the `rw_data` lock.
///
/// If you do not wish RPCs to remain blocked, just drop the `RWGuard`
/// returned from this function. If you _do_ wish for them to remain blocked,
/// use `block`.
fn lock(&mut self) -> RWGuard<'b> {
match self.possibly_locked_rw_data.take() {
None => RWGuard::Used(self.rw_data.lock().unwrap()),
Some(x) => RWGuard::Held(x),
}
}
}
/// Finds the `@font-face` rules in `stylesheet` that are effective for
/// `device` and asks the font cache thread to load each web font, either
/// synchronously or in the background (tracked via
/// `outstanding_web_fonts_counter`).
fn add_font_face_rules(
stylesheet: &Stylesheet,
guard: &SharedRwLockReadGuard,
device: &Device,
font_cache_thread: &FontCacheThread,
font_cache_sender: &IpcSender<()>,
outstanding_web_fonts_counter: &Arc<AtomicUsize>,
load_webfonts_synchronously: bool,
) {
if load_webfonts_synchronously {
let (sender, receiver) = ipc::channel().unwrap();
stylesheet.effective_font_face_rules(&device, guard, |rule| {
if let Some(font_face) = rule.font_face() {
let effective_sources = font_face.effective_sources();
font_cache_thread.add_web_font(
font_face.family().clone(),
effective_sources,
sender.clone(),
);
receiver.recv().unwrap();
}
})
} else {
stylesheet.effective_font_face_rules(&device, guard, |rule| {
if let Some(font_face) = rule.font_face() {
let effective_sources = font_face.effective_sources();
outstanding_web_fonts_counter.fetch_add(1, Ordering::SeqCst);
font_cache_thread.add_web_font(
font_face.family().clone(),
effective_sources,
(*font_cache_sender).clone(),
);
}
})
}
}
impl LayoutThread {
/// Creates a new `LayoutThread` structure.
fn new(
id: PipelineId,
top_level_browsing_context_id: TopLevelBrowsingContextId,
url: ServoUrl,
is_iframe: bool,
port: Receiver<Msg>,
pipeline_port: IpcReceiver<LayoutControlMsg>,
background_hang_monitor: Box<dyn BackgroundHangMonitor>,
constellation_chan: IpcSender<ConstellationMsg>,
script_chan: IpcSender<ConstellationControlMsg>,
image_cache: Arc<dyn ImageCache>,
font_cache_thread: FontCacheThread,
time_profiler_chan: profile_time::ProfilerChan,
mem_profiler_chan: profile_mem::ProfilerChan,
webrender_api_sender: webrender_api::RenderApiSender,
webrender_document: webrender_api::DocumentId,
paint_time_metrics: PaintTimeMetrics,
busy: Arc<AtomicBool>,
load_webfonts_synchronously: bool,
initial_window_size: Size2D<u32, DeviceIndependentPixel>,
device_pixels_per_px: Option<f32>,
dump_display_list: bool,
dump_display_list_json: bool,
dump_style_tree: bool,
dump_rule_tree: bool,
relayout_event: bool,
nonincremental_layout: bool,
trace_layout: bool,
dump_flow_tree: bool,
) -> LayoutThread {
// The device pixel ratio is incorrect (it does not have the hidpi value),
// but it will be set correctly when the initial reflow takes place.
let device = Device::new(
MediaType::screen(),
initial_window_size.to_f32() * Scale::new(1.0),
Scale::new(device_pixels_per_px.unwrap_or(1.0)),
);
// Create the channel on which new animations can be sent.
let (new_animations_sender, new_animations_receiver) = unbounded();
// Proxy IPC messages from the pipeline to the layout thread.
let pipeline_receiver = ROUTER.route_ipc_receiver_to_new_crossbeam_receiver(pipeline_port);
// Ask the router to proxy IPC messages from the font cache thread to the layout thread.
let (ipc_font_cache_sender, ipc_font_cache_receiver) = ipc::channel().unwrap();
let font_cache_receiver =
ROUTER.route_ipc_receiver_to_new_crossbeam_receiver(ipc_font_cache_receiver);
LayoutThread {
            id,
            top_level_browsing_context_id,
            url,
            is_iframe,
            port,
            pipeline_port: pipeline_receiver,
            script_chan: script_chan.clone(),
            background_hang_monitor,
            constellation_chan: constellation_chan.clone(),
            time_profiler_chan,
            mem_profiler_chan,
            registered_painters: RegisteredPaintersImpl(Default::default()),
            image_cache: image_cache.clone(),
            font_cache_thread,
            first_reflow: Cell::new(true),
            font_cache_receiver,
            font_cache_sender: ipc_font_cache_sender,
            parallel_flag: true,
            generation: Cell::new(0),
            new_animations_sender,
            new_animations_receiver,
outstanding_web_fonts: Arc::new(AtomicUsize::new(0)),
root_flow: RefCell::new(None),
document_shared_lock: None,
running_animations: ServoArc::new(RwLock::new(Default::default())),
expired_animations: ServoArc::new(RwLock::new(Default::default())),
epoch: Cell::new(Epoch(0)),
viewport_size: Size2D::new(Au(0), Au(0)),
webrender_api: webrender_api_sender.create_api(),
webrender_document,
stylist: Stylist::new(device, QuirksMode::NoQuirks),
rw_data: Arc::new(Mutex::new(LayoutThreadData {
                constellation_chan,
display_list: None,
indexable_text: IndexableText::default(),
content_box_response: None,
content_boxes_response: Vec::new(),
client_rect_response: Rect::zero(),
scroll_id_response: None,
scroll_area_response: Rect::zero(),
resolved_style_response: String::new(),
offset_parent_response: OffsetParentResponse::empty(),
style_response: StyleResponse(None),
scroll_offsets: HashMap::new(),
text_index_response: TextIndexResponse(None),
nodes_from_point_response: vec![],
element_inner_text_response: String::new(),
})),
webrender_image_cache: Arc::new(RwLock::new(FnvHashMap::default())),
timer: if pref!(layout.animations.test.enabled) {
Timer::test_mode()
} else {
Timer::new()
},
            paint_time_metrics,
layout_query_waiting_time: Histogram::new(),
last_iframe_sizes: Default::default(),
busy,
load_webfonts_synchronously,
initial_window_size,
device_pixels_per_px,
dump_display_list,
dump_display_list_json,
dump_style_tree,
dump_rule_tree,
relayout_event,
nonincremental_layout,
trace_layout,
dump_flow_tree,
}
}
/// Starts listening on the port.
fn start(mut self) {
let rw_data = self.rw_data.clone();
let mut possibly_locked_rw_data = Some(rw_data.lock().unwrap());
let mut rw_data = RwData {
rw_data: &rw_data,
possibly_locked_rw_data: &mut possibly_locked_rw_data,
};
while self.handle_request(&mut rw_data) {
// Loop indefinitely.
}
}
// Create a layout context for use in building display lists, hit testing, &c.
fn build_layout_context<'a>(
&'a self,
guards: StylesheetGuards<'a>,
script_initiated_layout: bool,
snapshot_map: &'a SnapshotMap,
) -> LayoutContext<'a> {
let thread_local_style_context_creation_data =
ThreadLocalStyleContextCreationInfo::new(self.new_animations_sender.clone());
LayoutContext {
id: self.id,
style_context: SharedStyleContext {
stylist: &self.stylist,
options: GLOBAL_STYLE_DATA.options.clone(),
guards,
visited_styles_enabled: false,
running_animations: self.running_animations.clone(),
expired_animations: self.expired_animations.clone(),
registered_speculative_painters: &self.registered_painters,
local_context_creation_data: Mutex::new(thread_local_style_context_creation_data),
timer: self.timer.clone(),
traversal_flags: TraversalFlags::empty(),
                snapshot_map,
},
image_cache: self.image_cache.clone(),
font_cache_thread: Mutex::new(self.font_cache_thread.clone()),
webrender_image_cache: self.webrender_image_cache.clone(),
pending_images: if script_initiated_layout {
Some(Mutex::new(vec![]))
} else {
None
},
newly_transitioning_nodes: if script_initiated_layout {
Some(Mutex::new(vec![]))
} else {
None
},
registered_painters: &self.registered_painters,
}
}
fn notify_activity_to_hang_monitor(&self, request: &Msg) {
let hang_annotation = match request {
Msg::AddStylesheet(..) => LayoutHangAnnotation::AddStylesheet,
Msg::RemoveStylesheet(..) => LayoutHangAnnotation::RemoveStylesheet,
Msg::SetQuirksMode(..) => LayoutHangAnnotation::SetQuirksMode,
Msg::Reflow(..) => LayoutHangAnnotation::Reflow,
Msg::GetRPC(..) => LayoutHangAnnotation::GetRPC,
Msg::TickAnimations => LayoutHangAnnotation::TickAnimations,
Msg::AdvanceClockMs(..) => LayoutHangAnnotation::AdvanceClockMs,
Msg::ReapStyleAndLayoutData(..) => LayoutHangAnnotation::ReapStyleAndLayoutData,
Msg::CollectReports(..) => LayoutHangAnnotation::CollectReports,
Msg::PrepareToExit(..) => LayoutHangAnnotation::PrepareToExit,
Msg::ExitNow => LayoutHangAnnotation::ExitNow,
Msg::GetCurrentEpoch(..) => LayoutHangAnnotation::GetCurrentEpoch,
Msg::GetWebFontLoadState(..) => LayoutHangAnnotation::GetWebFontLoadState,
Msg::CreateLayoutThread(..) => LayoutHangAnnotation::CreateLayoutThread,
Msg::SetFinalUrl(..) => LayoutHangAnnotation::SetFinalUrl,
Msg::SetScrollStates(..) => LayoutHangAnnotation::SetScrollStates,
Msg::UpdateScrollStateFromScript(..) => {
LayoutHangAnnotation::UpdateScrollStateFromScript
},
Msg::RegisterPaint(..) => LayoutHangAnnotation::RegisterPaint,
Msg::SetNavigationStart(..) => LayoutHangAnnotation::SetNavigationStart,
Msg::GetRunningAnimations(..) => LayoutHangAnnotation::GetRunningAnimations,
};
self.background_hang_monitor
.notify_activity(HangAnnotation::Layout(hang_annotation));
}
/// Receives and dispatches messages from the script and constellation threads
fn handle_request<'a, 'b>(&mut self, possibly_locked_rw_data: &mut RwData<'a, 'b>) -> bool {
enum Request {
FromPipeline(LayoutControlMsg),
FromScript(Msg),
FromFontCache,
}
// Notify the background-hang-monitor we are waiting for an event.
self.background_hang_monitor.notify_wait();
let request = select! {
recv(self.pipeline_port) -> msg => Request::FromPipeline(msg.unwrap()),
recv(self.port) -> msg => Request::FromScript(msg.unwrap()),
recv(self.font_cache_receiver) -> msg => { msg.unwrap(); Request::FromFontCache }
};
self.busy.store(true, Ordering::Relaxed);
let result = match request {
Request::FromPipeline(LayoutControlMsg::SetScrollStates(new_scroll_states)) => self
.handle_request_helper(
Msg::SetScrollStates(new_scroll_states),
possibly_locked_rw_data,
),
Request::FromPipeline(LayoutControlMsg::TickAnimations) => {
self.handle_request_helper(Msg::TickAnimations, possibly_locked_rw_data)
},
Request::FromPipeline(LayoutControlMsg::GetCurrentEpoch(sender)) => {
self.handle_request_helper(Msg::GetCurrentEpoch(sender), possibly_locked_rw_data)
},
Request::FromPipeline(LayoutControlMsg::GetWebFontLoadState(sender)) => self
.handle_request_helper(Msg::GetWebFontLoadState(sender), possibly_locked_rw_data),
Request::FromPipeline(LayoutControlMsg::ExitNow) => {
self.handle_request_helper(Msg::ExitNow, possibly_locked_rw_data)
},
Request::FromPipeline(LayoutControlMsg::PaintMetric(epoch, paint_time)) => {
self.paint_time_metrics.maybe_set_metric(epoch, paint_time);
true
},
Request::FromScript(msg) => self.handle_request_helper(msg, possibly_locked_rw_data),
Request::FromFontCache => {
let _rw_data = possibly_locked_rw_data.lock();
self.outstanding_web_fonts.fetch_sub(1, Ordering::SeqCst);
font_context::invalidate_font_caches();
self.script_chan
.send(ConstellationControlMsg::WebFontLoaded(self.id))
.unwrap();
true
},
};
self.busy.store(false, Ordering::Relaxed);
result
}
/// Receives and dispatches messages from other threads.
fn handle_request_helper<'a, 'b>(
&mut self,
request: Msg,
possibly_locked_rw_data: &mut RwData<'a, 'b>,
) -> bool {
self.notify_activity_to_hang_monitor(&request);
match request {
Msg::AddStylesheet(stylesheet, before_stylesheet) => {
let guard = stylesheet.shared_lock.read();
self.handle_add_stylesheet(&stylesheet, &guard);
match before_stylesheet {
Some(insertion_point) => self.stylist.insert_stylesheet_before(
DocumentStyleSheet(stylesheet.clone()),
DocumentStyleSheet(insertion_point),
&guard,
),
None => self
.stylist
.append_stylesheet(DocumentStyleSheet(stylesheet.clone()), &guard),
}
},
Msg::RemoveStylesheet(stylesheet) => {
let guard = stylesheet.shared_lock.read();
self.stylist
.remove_stylesheet(DocumentStyleSheet(stylesheet.clone()), &guard);
},
Msg::SetQuirksMode(mode) => self.handle_set_quirks_mode(mode),
Msg::GetRPC(response_chan) => {
response_chan
.send(Box::new(LayoutRPCImpl(self.rw_data.clone())) as Box<dyn LayoutRPC + Send>)
.unwrap();
},
Msg::Reflow(data) => {
let mut data = ScriptReflowResult::new(data);
profile(
profile_time::ProfilerCategory::LayoutPerform,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| self.handle_reflow(&mut data, possibly_locked_rw_data),
);
},
Msg::TickAnimations => self.tick_all_animations(possibly_locked_rw_data),
Msg::SetScrollStates(new_scroll_states) => {
self.set_scroll_states(new_scroll_states, possibly_locked_rw_data);
},
Msg::UpdateScrollStateFromScript(state) => {
let mut rw_data = possibly_locked_rw_data.lock();
rw_data
.scroll_offsets
.insert(state.scroll_id, state.scroll_offset);
let point = Point2D::new(-state.scroll_offset.x, -state.scroll_offset.y);
let mut txn = webrender_api::Transaction::new();
txn.scroll_node_with_id(
webrender_api::units::LayoutPoint::from_untyped(point),
state.scroll_id,
webrender_api::ScrollClamping::ToContentBounds,
);
self.webrender_api
.send_transaction(self.webrender_document, txn);
},
Msg::ReapStyleAndLayoutData(dead_data) => unsafe {
drop_style_and_layout_data(dead_data)
},
Msg::CollectReports(reports_chan) => {
self.collect_reports(reports_chan, possibly_locked_rw_data);
},
Msg::GetCurrentEpoch(sender) => {
let _rw_data = possibly_locked_rw_data.lock();
sender.send(self.epoch.get()).unwrap();
},
Msg::AdvanceClockMs(how_many, do_tick) => {
self.handle_advance_clock_ms(how_many, possibly_locked_rw_data, do_tick);
},
Msg::GetWebFontLoadState(sender) => {
let _rw_data = possibly_locked_rw_data.lock();
let outstanding_web_fonts = self.outstanding_web_fonts.load(Ordering::SeqCst);
sender.send(outstanding_web_fonts != 0).unwrap();
},
Msg::CreateLayoutThread(info) => self.create_layout_thread(info),
Msg::SetFinalUrl(final_url) => {
self.url = final_url;
},
Msg::RegisterPaint(name, mut properties, painter) => {
debug!("Registering the painter");
let properties = properties
.drain(..)
.filter_map(|name| {
let id = PropertyId::parse_enabled_for_all_content(&*name).ok()?;
Some((name.clone(), id))
})
.filter(|&(_, ref id)| !id.is_shorthand())
.collect();
let registered_painter = RegisteredPainterImpl {
name: name.clone(),
properties,
painter,
};
self.registered_painters.0.insert(name, registered_painter);
},
Msg::PrepareToExit(response_chan) => {
self.prepare_to_exit(response_chan);
return false;
},
// Receiving the Exit message at this stage only happens when layout is undergoing a "force exit".
Msg::ExitNow => {
debug!("layout: ExitNow received");
self.exit_now();
return false;
},
Msg::SetNavigationStart(time) => {
self.paint_time_metrics.set_navigation_start(time);
},
Msg::GetRunningAnimations(sender) => {
let _ = sender.send(self.running_animations.read().len());
},
}
true
}
fn collect_reports<'a, 'b>(
&self,
reports_chan: ReportsChan,
possibly_locked_rw_data: &mut RwData<'a, 'b>,
) {
let mut reports = vec![];
// Servo uses vanilla jemalloc, which doesn't have a
// malloc_enclosing_size_of function.
let mut ops = MallocSizeOfOps::new(servo_allocator::usable_size, None, None);
// FIXME(njn): Just measuring the display tree for now.
let rw_data = possibly_locked_rw_data.lock();
let display_list = rw_data.display_list.as_ref();
let formatted_url = &format!("url({})", self.url);
reports.push(Report {
path: path![formatted_url, "layout-thread", "display-list"],
kind: ReportKind::ExplicitJemallocHeapSize,
size: display_list.map_or(0, |sc| sc.size_of(&mut ops)),
});
reports.push(Report {
path: path![formatted_url, "layout-thread", "stylist"],
kind: ReportKind::ExplicitJemallocHeapSize,
size: self.stylist.size_of(&mut ops),
});
// The LayoutThread has data in Persistent TLS...
reports.push(Report {
path: path![formatted_url, "layout-thread", "local-context"],
kind: ReportKind::ExplicitJemallocHeapSize,
size: malloc_size_of_persistent_local_context(&mut ops),
});
reports_chan.send(reports);
}
fn create_layout_thread(&self, info: LayoutThreadInit) {
LayoutThread::create(
info.id,
self.top_level_browsing_context_id,
info.url.clone(),
info.is_parent,
info.layout_pair,
info.pipeline_port,
info.background_hang_monitor_register,
info.constellation_chan,
info.script_chan.clone(),
info.image_cache.clone(),
self.font_cache_thread.clone(),
self.time_profiler_chan.clone(),
self.mem_profiler_chan.clone(),
self.webrender_api.clone_sender(),
self.webrender_document,
info.paint_time_metrics,
info.layout_is_busy,
self.load_webfonts_synchronously,
self.initial_window_size,
self.device_pixels_per_px,
self.dump_display_list,
self.dump_display_list_json,
self.dump_style_tree,
self.dump_rule_tree,
self.relayout_event,
self.nonincremental_layout,
self.trace_layout,
self.dump_flow_tree,
);
}
/// Enters a quiescent state in which no new messages will be processed until an `ExitNow` is
/// received. A pong is immediately sent on the given response channel.
fn prepare_to_exit(&mut self, response_chan: Sender<()>) {
response_chan.send(()).unwrap();
loop {
match self.port.recv().unwrap() {
Msg::ReapStyleAndLayoutData(dead_data) => unsafe {
drop_style_and_layout_data(dead_data)
},
Msg::ExitNow => {
debug!("layout thread is exiting...");
self.exit_now();
break;
},
Msg::CollectReports(_) => {
// Just ignore these messages at this point.
},
_ => panic!("layout: unexpected message received after `PrepareToExitMsg`"),
}
}
}
/// Shuts down the layout thread now. If there are any DOM nodes left, layout will now (safely)
/// crash.
fn exit_now(&mut self) {
        // Drop the root flow explicitly to avoid holding style data, such as
        // rule nodes. The `Stylist` checks when it is dropped that all rule
        // nodes have been GCed, so we want to drop anything that holds them first.
let waiting_time_min = self.layout_query_waiting_time.minimum().unwrap_or(0);
let waiting_time_max = self.layout_query_waiting_time.maximum().unwrap_or(0);
let waiting_time_mean = self.layout_query_waiting_time.mean().unwrap_or(0);
let waiting_time_stddev = self.layout_query_waiting_time.stddev().unwrap_or(0);
debug!(
"layout: query waiting time: min: {}, max: {}, mean: {}, standard_deviation: {}",
waiting_time_min, waiting_time_max, waiting_time_mean, waiting_time_stddev
);
self.root_flow.borrow_mut().take();
self.background_hang_monitor.unregister();
}
fn handle_add_stylesheet(&self, stylesheet: &Stylesheet, guard: &SharedRwLockReadGuard) {
// Find all font-face rules and notify the font cache of them.
// GWTODO: Need to handle unloading web fonts.
if stylesheet.is_effective_for_device(self.stylist.device(), &guard) {
add_font_face_rules(
&*stylesheet,
&guard,
self.stylist.device(),
&self.font_cache_thread,
&self.font_cache_sender,
&self.outstanding_web_fonts,
self.load_webfonts_synchronously,
);
}
}
/// Advances the animation clock of the document.
fn handle_advance_clock_ms<'a, 'b>(
&mut self,
how_many_ms: i32,
possibly_locked_rw_data: &mut RwData<'a, 'b>,
tick_animations: bool,
) {
self.timer.increment(how_many_ms as f64 / 1000.0);
if tick_animations {
self.tick_all_animations(possibly_locked_rw_data);
}
}
/// Sets quirks mode for the document, causing the quirks mode stylesheet to be used.
    fn handle_set_quirks_mode(&mut self, quirks_mode: QuirksMode) {
self.stylist.set_quirks_mode(quirks_mode);
}
fn try_get_layout_root<N: LayoutNode>(&self, node: N) -> Option<FlowRef> {
let result = node.mutate_layout_data()?.flow_construction_result.get();
let mut flow = match result {
ConstructionResult::Flow(mut flow, abs_descendants) => {
                // Note: Assuming that the root has position 'static' (as per
                // CSS 2.1 Section 9.3.1). Otherwise, if it were absolutely
// positioned, it would return a reference to itself in
// `abs_descendants` and would lead to a circular reference.
// Set Root as CB for any remaining absolute descendants.
flow.set_absolute_descendants(abs_descendants);
flow
},
_ => return None,
};
FlowRef::deref_mut(&mut flow).mark_as_root();
Some(flow)
}
/// Performs layout constraint solving.
///
/// This corresponds to `Reflow()` in Gecko and `layout()` in WebKit/Blink and should be
/// benchmarked against those two. It is marked `#[inline(never)]` to aid profiling.
#[inline(never)]
fn solve_constraints(layout_root: &mut dyn Flow, layout_context: &LayoutContext) {
let _scope = layout_debug_scope!("solve_constraints");
sequential::reflow(layout_root, layout_context, RelayoutMode::Incremental);
}
/// Performs layout constraint solving in parallel.
///
/// This corresponds to `Reflow()` in Gecko and `layout()` in WebKit/Blink and should be
/// benchmarked against those two. It is marked `#[inline(never)]` to aid profiling.
#[inline(never)]
fn solve_constraints_parallel(
traversal: &rayon::ThreadPool,
layout_root: &mut dyn Flow,
profiler_metadata: Option<TimerMetadata>,
time_profiler_chan: profile_time::ProfilerChan,
layout_context: &LayoutContext,
) {
let _scope = layout_debug_scope!("solve_constraints_parallel");
// NOTE: this currently computes borders, so any pruning should separate that
// operation out.
parallel::reflow(
layout_root,
profiler_metadata,
time_profiler_chan,
layout_context,
traversal,
);
}
    /// Computes the stacking-relative positions of all flows and, if painting is dirty and the
    /// reflow goal needs it, builds the display list.
fn compute_abs_pos_and_build_display_list(
&self,
data: &Reflow,
reflow_goal: &ReflowGoal,
document: Option<&ServoLayoutDocument>,
layout_root: &mut dyn Flow,
layout_context: &mut LayoutContext,
rw_data: &mut LayoutThreadData,
) {
let writing_mode = layout_root.base().writing_mode;
let (metadata, sender) = (self.profiler_metadata(), self.time_profiler_chan.clone());
profile(
profile_time::ProfilerCategory::LayoutDispListBuild,
metadata.clone(),
sender.clone(),
|| {
layout_root.mut_base().stacking_relative_position =
LogicalPoint::zero(writing_mode)
.to_physical(writing_mode, self.viewport_size)
.to_vector();
layout_root.mut_base().clip = data.page_clip_rect;
let traversal = ComputeStackingRelativePositions {
layout_context: layout_context,
};
traversal.traverse(layout_root);
if layout_root
.base()
.restyle_damage
.contains(ServoRestyleDamage::REPAINT) ||
rw_data.display_list.is_none()
{
if reflow_goal.needs_display_list() {
let background_color = get_root_flow_background_color(layout_root);
let mut build_state = sequential::build_display_list_for_subtree(
layout_root,
layout_context,
background_color,
data.page_clip_rect.size,
);
debug!("Done building display list.");
let root_size = {
let root_flow = layout_root.base();
if self.stylist.viewport_constraints().is_some() {
root_flow.position.size.to_physical(root_flow.writing_mode)
} else {
root_flow.overflow.scroll.size
}
};
let origin = Rect::new(Point2D::new(Au(0), Au(0)), root_size).to_layout();
build_state.root_stacking_context.bounds = origin;
build_state.root_stacking_context.overflow = origin;
if !build_state.iframe_sizes.is_empty() {
                        // build_state.iframe_sizes is only used here, so it's okay to replace
                        // it with an empty vector.
let iframe_sizes =
std::mem::replace(&mut build_state.iframe_sizes, vec![]);
// Collect the last frame's iframe sizes to compute any differences.
// Every frame starts with a fresh collection so that any removed
// iframes do not linger.
let last_iframe_sizes = std::mem::replace(
&mut *self.last_iframe_sizes.borrow_mut(),
HashMap::default(),
);
let mut size_messages = vec![];
for new_size in iframe_sizes {
// Only notify the constellation about existing iframes
// that have a new size, or iframes that did not previously
// exist.
if let Some(old_size) = last_iframe_sizes.get(&new_size.id) {
if *old_size != new_size.size {
size_messages.push(IFrameSizeMsg {
data: new_size,
type_: WindowSizeType::Resize,
});
}
} else {
size_messages.push(IFrameSizeMsg {
data: new_size,
type_: WindowSizeType::Initial,
});
}
self.last_iframe_sizes
.borrow_mut()
.insert(new_size.id, new_size.size);
}
if !size_messages.is_empty() {
let msg = ConstellationMsg::IFrameSizes(size_messages);
if let Err(e) = self.constellation_chan.send(msg) {
warn!("Layout resize to constellation failed ({}).", e);
}
}
}
rw_data.indexable_text = std::mem::replace(
&mut build_state.indexable_text,
IndexableText::default(),
);
rw_data.display_list = Some(build_state.to_display_list());
}
}
if !reflow_goal.needs_display() {
// Defer the paint step until the next ForDisplay.
//
// We need to tell the document about this so it doesn't
// incorrectly suppress reflows. See #13131.
document
.expect("No document in a non-display reflow?")
.needs_paint_from_layout();
return;
}
if let Some(document) = document {
document.will_paint();
}
let display_list = rw_data.display_list.as_mut().unwrap();
if self.dump_display_list {
display_list.print();
}
if self.dump_display_list_json {
println!("{}", serde_json::to_string_pretty(&display_list).unwrap());
}
debug!("Layout done!");
// TODO: Avoid the temporary conversion and build webrender sc/dl directly!
let builder = display_list.convert_to_webrender(self.id);
let viewport_size = Size2D::new(
self.viewport_size.width.to_f32_px(),
self.viewport_size.height.to_f32_px(),
);
let mut epoch = self.epoch.get();
epoch.next();
self.epoch.set(epoch);
let viewport_size = webrender_api::units::LayoutSize::from_untyped(viewport_size);
// Observe notifications about rendered frames if needed right before
// sending the display list to WebRender in order to set time related
// Progressive Web Metrics.
self.paint_time_metrics
.maybe_observe_paint_time(self, epoch, &*display_list);
let mut txn = webrender_api::Transaction::new();
txn.set_display_list(
webrender_api::Epoch(epoch.0),
None,
viewport_size,
builder.finalize(),
true,
);
txn.generate_frame();
self.webrender_api
.send_transaction(self.webrender_document, txn);
},
);
}
    /// The high-level routine that performs layout.
fn handle_reflow<'a, 'b>(
&mut self,
data: &mut ScriptReflowResult,
possibly_locked_rw_data: &mut RwData<'a, 'b>,
) {
let document = unsafe { ServoLayoutNode::new(&data.document) };
let document = document.as_document().unwrap();
        // Parallelize if there are more than 750 objects, per rzambre's suggestion:
        // https://github.com/servo/servo/issues/10110
self.parallel_flag = data.dom_count > 750;
debug!("layout: received layout request for: {}", self.url);
debug!("Number of objects in DOM: {}", data.dom_count);
debug!("layout: parallel? {}", self.parallel_flag);
let mut rw_data = possibly_locked_rw_data.lock();
        // Record how long this layout query waited before being serviced.
let now = time::precise_time_ns();
if let ReflowGoal::LayoutQuery(_, timestamp) = data.reflow_goal {
self.layout_query_waiting_time
.increment(now - timestamp)
.expect("layout: wrong layout query timestamp");
};
let element = match document.root_element() {
None => {
// Since we cannot compute anything, give spec-required placeholders.
debug!("layout: No root node: bailing");
match data.reflow_goal {
ReflowGoal::LayoutQuery(ref query_msg, _) => match query_msg {
&QueryMsg::ContentBoxQuery(_) => {
rw_data.content_box_response = None;
},
&QueryMsg::ContentBoxesQuery(_) => {
rw_data.content_boxes_response = Vec::new();
},
&QueryMsg::NodesFromPointQuery(..) => {
rw_data.nodes_from_point_response = Vec::new();
},
&QueryMsg::NodeGeometryQuery(_) => {
rw_data.client_rect_response = Rect::zero();
},
&QueryMsg::NodeScrollGeometryQuery(_) => {
rw_data.scroll_area_response = Rect::zero();
},
&QueryMsg::NodeScrollIdQuery(_) => {
rw_data.scroll_id_response = None;
},
&QueryMsg::ResolvedStyleQuery(_, _, _) => {
rw_data.resolved_style_response = String::new();
},
&QueryMsg::OffsetParentQuery(_) => {
rw_data.offset_parent_response = OffsetParentResponse::empty();
},
&QueryMsg::StyleQuery(_) => {
rw_data.style_response = StyleResponse(None);
},
&QueryMsg::TextIndexQuery(..) => {
rw_data.text_index_response = TextIndexResponse(None);
},
&QueryMsg::ElementInnerTextQuery(_) => {
rw_data.element_inner_text_response = String::new();
},
},
ReflowGoal::Full | ReflowGoal::TickAnimations => {},
}
return;
},
Some(x) => x,
};
debug!(
"layout: processing reflow request for: {:?} ({}) (query={:?})",
element, self.url, data.reflow_goal
);
trace!("{:?}", ShowSubtree(element.as_node()));
let initial_viewport = data.window_size.initial_viewport;
let device_pixel_ratio = data.window_size.device_pixel_ratio;
let old_viewport_size = self.viewport_size;
let current_screen_size = Size2D::new(
Au::from_f32_px(initial_viewport.width),
Au::from_f32_px(initial_viewport.height),
);
// Calculate the actual viewport as per DEVICE-ADAPT § 6
// If the entire flow tree is invalid, then it will be reflowed anyhow.
let document_shared_lock = document.style_shared_lock();
self.document_shared_lock = Some(document_shared_lock.clone());
let author_guard = document_shared_lock.read();
let ua_stylesheets = &*UA_STYLESHEETS;
let ua_or_user_guard = ua_stylesheets.shared_lock.read();
let guards = StylesheetGuards {
author: &author_guard,
ua_or_user: &ua_or_user_guard,
};
let had_used_viewport_units = self.stylist.device().used_viewport_units();
let device = Device::new(MediaType::screen(), initial_viewport, device_pixel_ratio);
let sheet_origins_affected_by_device_change = self.stylist.set_device(device, &guards);
self.stylist
.force_stylesheet_origins_dirty(sheet_origins_affected_by_device_change);
self.viewport_size =
self.stylist
.viewport_constraints()
.map_or(current_screen_size, |constraints| {
debug!("Viewport constraints: {:?}", constraints);
// other rules are evaluated against the actual viewport
Size2D::new(
Au::from_f32_px(constraints.size.width),
Au::from_f32_px(constraints.size.height),
)
});
let viewport_size_changed = self.viewport_size != old_viewport_size;
if viewport_size_changed {
if let Some(constraints) = self.stylist.viewport_constraints() {
                // Let the constellation know about the viewport constraints.
rw_data
.constellation_chan
.send(ConstellationMsg::ViewportConstrained(
self.id,
constraints.clone(),
))
.unwrap();
}
if had_used_viewport_units {
if let Some(mut data) = element.mutate_data() {
data.hint.insert(RestyleHint::recascade_subtree());
}
}
}
{
if self.first_reflow.get() {
debug!("First reflow, rebuilding user and UA rules");
for stylesheet in &ua_stylesheets.user_or_user_agent_stylesheets {
self.stylist
.append_stylesheet(stylesheet.clone(), &ua_or_user_guard);
self.handle_add_stylesheet(&stylesheet.0, &ua_or_user_guard);
}
if self.stylist.quirks_mode() != QuirksMode::NoQuirks {
self.stylist.append_stylesheet(
ua_stylesheets.quirks_mode_stylesheet.clone(),
&ua_or_user_guard,
);
self.handle_add_stylesheet(
&ua_stylesheets.quirks_mode_stylesheet.0,
&ua_or_user_guard,
);
}
}
if data.stylesheets_changed {
debug!("Doc sheets changed, flushing author sheets too");
self.stylist
.force_stylesheet_origins_dirty(Origin::Author.into());
}
}
if viewport_size_changed {
if let Some(mut flow) = self.try_get_layout_root(element.as_node()) {
LayoutThread::reflow_all_nodes(FlowRef::deref_mut(&mut flow));
}
}
debug!(
"Shadow roots in document {:?}",
document.shadow_roots().len()
);
// Flush shadow roots stylesheets if dirty.
document.flush_shadow_roots_stylesheets(
&self.stylist.device(),
document.quirks_mode(),
guards.author.clone(),
);
let restyles = document.drain_pending_restyles();
debug!("Draining restyles: {}", restyles.len());
let mut map = SnapshotMap::new();
let elements_with_snapshot: Vec<_> = restyles
.iter()
.filter(|r| r.1.snapshot.is_some())
.map(|r| r.0)
.collect();
for (el, restyle) in restyles {
// Propagate the descendant bit up the ancestors. Do this before
// the restyle calculation so that we can also do it for new
// unstyled nodes, which the descendants bit helps us find.
if let Some(parent) = el.parent_element() {
unsafe { parent.note_dirty_descendant() };
}
// If we haven't styled this node yet, we don't need to track a
// restyle.
let style_data = match el.get_data() {
Some(d) => d,
None => {
unsafe { el.unset_snapshot_flags() };
continue;
},
};
if let Some(s) = restyle.snapshot {
unsafe { el.set_has_snapshot() };
map.insert(el.as_node().opaque(), s);
}
let mut style_data = style_data.borrow_mut();
// Stash the data on the element for processing by the style system.
style_data.hint.insert(restyle.hint.into());
style_data.damage = restyle.damage;
debug!("Noting restyle for {:?}: {:?}", el, style_data);
}
self.stylist.flush(&guards, Some(element), Some(&map));
// Create a layout context for use throughout the following passes.
let mut layout_context = self.build_layout_context(guards.clone(), true, &map);
let (thread_pool, num_threads) = if self.parallel_flag {
(
STYLE_THREAD_POOL.style_thread_pool.as_ref(),
STYLE_THREAD_POOL.num_threads,
)
} else {
(None, 1)
};
let traversal = RecalcStyleAndConstructFlows::new(layout_context);
let token = {
let shared =
<RecalcStyleAndConstructFlows as DomTraversal<ServoLayoutElement>>::shared_context(
&traversal,
);
RecalcStyleAndConstructFlows::pre_traverse(element, shared)
};
if token.should_traverse() {
// Recalculate CSS styles and rebuild flows and fragments.
profile(
profile_time::ProfilerCategory::LayoutStyleRecalc,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| {
// Perform CSS selector matching and flow construction.
driver::traverse_dom::<ServoLayoutElement, RecalcStyleAndConstructFlows>(
&traversal,
token,
thread_pool,
);
},
);
// TODO(pcwalton): Measure energy usage of text shaping, perhaps?
let text_shaping_time =
font::get_and_reset_text_shaping_performance_counter() / num_threads;
profile_time::send_profile_data(
profile_time::ProfilerCategory::LayoutTextShaping,
self.profiler_metadata(),
&self.time_profiler_chan,
0,
text_shaping_time as u64,
0,
0,
);
// Retrieve the (possibly rebuilt) root flow.
*self.root_flow.borrow_mut() = self.try_get_layout_root(element.as_node());
}
for element in elements_with_snapshot {
unsafe { element.unset_snapshot_flags() }
}
layout_context = traversal.destroy();
if self.dump_style_tree {
println!("{:?}", ShowSubtreeDataAndPrimaryValues(element.as_node()));
}
if self.dump_rule_tree {
layout_context
.style_context
.stylist
.rule_tree()
.dump_stdout(&guards);
}
// GC the rule tree if some heuristics are met.
unsafe {
layout_context.style_context.stylist.rule_tree().maybe_gc();
}
// Perform post-style recalculation layout passes.
if let Some(mut root_flow) = self.root_flow.borrow().clone() {
self.perform_post_style_recalc_layout_passes(
&mut root_flow,
&data.reflow_info,
&data.reflow_goal,
Some(&document),
&mut rw_data,
&mut layout_context,
FxHashSet::default(),
);
}
self.first_reflow.set(false);
self.respond_to_query_if_necessary(
&data.reflow_goal,
&mut *rw_data,
&mut layout_context,
data.result.borrow_mut().as_mut().unwrap(),
);
}
fn respond_to_query_if_necessary(
&self,
reflow_goal: &ReflowGoal,
rw_data: &mut LayoutThreadData,
context: &mut LayoutContext,
reflow_result: &mut ReflowComplete,
) {
let pending_images = match context.pending_images {
Some(ref pending) => std::mem::replace(&mut *pending.lock().unwrap(), vec![]),
None => vec![],
};
reflow_result.pending_images = pending_images;
let newly_transitioning_nodes = match context.newly_transitioning_nodes {
Some(ref nodes) => std::mem::replace(&mut *nodes.lock().unwrap(), vec![]),
None => vec![],
};
reflow_result.newly_transitioning_nodes = newly_transitioning_nodes;
let mut root_flow = match self.root_flow.borrow().clone() {
Some(root_flow) => root_flow,
None => return,
};
let root_flow = FlowRef::deref_mut(&mut root_flow);
match *reflow_goal {
ReflowGoal::LayoutQuery(ref querymsg, _) => match querymsg {
&QueryMsg::ContentBoxQuery(node) => {
rw_data.content_box_response = process_content_box_request(node, root_flow);
},
&QueryMsg::ContentBoxesQuery(node) => {
rw_data.content_boxes_response = process_content_boxes_request(node, root_flow);
},
&QueryMsg::TextIndexQuery(node, point_in_node) => {
let point_in_node = Point2D::new(
Au::from_f32_px(point_in_node.x),
Au::from_f32_px(point_in_node.y),
);
rw_data.text_index_response =
TextIndexResponse(rw_data.indexable_text.text_index(node, point_in_node));
},
&QueryMsg::NodeGeometryQuery(node) => {
rw_data.client_rect_response = process_node_geometry_request(node, root_flow);
},
&QueryMsg::NodeScrollGeometryQuery(node) => {
rw_data.scroll_area_response =
process_node_scroll_area_request(node, root_flow);
},
&QueryMsg::NodeScrollIdQuery(node) => {
let node = unsafe { ServoLayoutNode::new(&node) };
rw_data.scroll_id_response =
Some(process_node_scroll_id_request(self.id, node));
},
&QueryMsg::ResolvedStyleQuery(node, ref pseudo, ref property) => {
let node = unsafe { ServoLayoutNode::new(&node) };
rw_data.resolved_style_response =
process_resolved_style_request(context, node, pseudo, property, root_flow);
},
&QueryMsg::OffsetParentQuery(node) => {
rw_data.offset_parent_response = process_offset_parent_query(node, root_flow);
},
&QueryMsg::StyleQuery(node) => {
let node = unsafe { ServoLayoutNode::new(&node) };
rw_data.style_response = process_style_query(node);
},
&QueryMsg::NodesFromPointQuery(client_point, ref reflow_goal) => {
let mut flags = match reflow_goal {
&NodesFromPointQueryType::Topmost => webrender_api::HitTestFlags::empty(),
&NodesFromPointQueryType::All => webrender_api::HitTestFlags::FIND_ALL,
};
// The point we get is not relative to the entire WebRender scene, but to this
// particular pipeline, so we need to tell WebRender about that.
flags.insert(webrender_api::HitTestFlags::POINT_RELATIVE_TO_PIPELINE_VIEWPORT);
let client_point = webrender_api::units::WorldPoint::from_untyped(client_point);
let results = self.webrender_api.hit_test(
self.webrender_document,
Some(self.id.to_webrender()),
client_point,
flags,
);
rw_data.nodes_from_point_response = results
.items
.iter()
.map(|item| UntrustedNodeAddress(item.tag.0 as *const c_void))
.collect()
},
&QueryMsg::ElementInnerTextQuery(node) => {
let node = unsafe { ServoLayoutNode::new(&node) };
rw_data.element_inner_text_response =
process_element_inner_text_query(node, &rw_data.indexable_text);
},
},
ReflowGoal::Full | ReflowGoal::TickAnimations => {},
}
}
fn set_scroll_states<'a, 'b>(
&mut self,
new_scroll_states: Vec<ScrollState>,
possibly_locked_rw_data: &mut RwData<'a, 'b>,
) {
let mut rw_data = possibly_locked_rw_data.lock();
let mut script_scroll_states = vec![];
let mut layout_scroll_states = HashMap::new();
for new_state in &new_scroll_states {
let offset = new_state.scroll_offset;
layout_scroll_states.insert(new_state.scroll_id, offset);
if new_state.scroll_id.is_root() {
script_scroll_states.push((UntrustedNodeAddress::from_id(0), offset))
} else if let Some(node_id) = node_id_from_scroll_id(new_state.scroll_id.0 as usize) {
script_scroll_states.push((UntrustedNodeAddress::from_id(node_id), offset))
}
}
let _ = self
.script_chan
.send(ConstellationControlMsg::SetScrollState(
self.id,
script_scroll_states,
));
rw_data.scroll_offsets = layout_scroll_states
}
fn tick_all_animations<'a, 'b>(&mut self, possibly_locked_rw_data: &mut RwData<'a, 'b>) {
let mut rw_data = possibly_locked_rw_data.lock();
self.tick_animations(&mut rw_data);
}
fn tick_animations(&mut self, rw_data: &mut LayoutThreadData) {
if self.relayout_event {
println!(
"**** pipeline={}\tForDisplay\tSpecial\tAnimationTick",
self.id
);
}
if let Some(mut root_flow) = self.root_flow.borrow().clone() {
let reflow_info = Reflow {
page_clip_rect: Rect::max_rect(),
};
// Unwrap here should not panic since self.root_flow is only ever set to Some(_)
// in handle_reflow() where self.document_shared_lock is as well.
let author_shared_lock = self.document_shared_lock.clone().unwrap();
let author_guard = author_shared_lock.read();
let ua_or_user_guard = UA_STYLESHEETS.shared_lock.read();
let guards = StylesheetGuards {
author: &author_guard,
ua_or_user: &ua_or_user_guard,
};
let snapshots = SnapshotMap::new();
let mut layout_context = self.build_layout_context(guards, false, &snapshots);
let invalid_nodes = {
// Perform an abbreviated style recalc that operates without access to the DOM.
let animations = self.running_animations.read();
profile(
profile_time::ProfilerCategory::LayoutStyleRecalc,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| {
animation::recalc_style_for_animations::<ServoLayoutElement>(
&layout_context,
FlowRef::deref_mut(&mut root_flow),
&animations,
)
},
)
};
self.perform_post_style_recalc_layout_passes(
&mut root_flow,
&reflow_info,
&ReflowGoal::TickAnimations,
None,
&mut *rw_data,
&mut layout_context,
invalid_nodes,
);
assert!(layout_context.pending_images.is_none());
assert!(layout_context.newly_transitioning_nodes.is_none());
}
}
fn perform_post_style_recalc_layout_passes(
&self,
root_flow: &mut FlowRef,
data: &Reflow,
reflow_goal: &ReflowGoal,
document: Option<&ServoLayoutDocument>,
rw_data: &mut LayoutThreadData,
context: &mut LayoutContext,
invalid_nodes: FxHashSet<OpaqueNode>,
) {
{
let mut newly_transitioning_nodes = context
.newly_transitioning_nodes
.as_ref()
.map(|nodes| nodes.lock().unwrap());
let newly_transitioning_nodes =
newly_transitioning_nodes.as_mut().map(|nodes| &mut **nodes);
// Kick off animations if any were triggered, expire completed ones.
animation::update_animation_state::<ServoLayoutElement>(
&self.constellation_chan,
&self.script_chan,
&mut *self.running_animations.write(),
&mut *self.expired_animations.write(),
invalid_nodes,
newly_transitioning_nodes,
&self.new_animations_receiver,
self.id,
&self.timer,
);
}
profile(
profile_time::ProfilerCategory::LayoutRestyleDamagePropagation,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| {
// Call `compute_layout_damage` even in non-incremental mode, because it sets flags
// that are needed in both incremental and non-incremental traversals.
let damage = FlowRef::deref_mut(root_flow).compute_layout_damage();
if self.nonincremental_layout ||
damage.contains(SpecialRestyleDamage::REFLOW_ENTIRE_DOCUMENT)
{
FlowRef::deref_mut(root_flow).reflow_entire_document()
}
},
);
if self.trace_layout {
layout_debug::begin_trace(root_flow.clone());
}
// Resolve generated content.
profile(
profile_time::ProfilerCategory::LayoutGeneratedContent,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| sequential::resolve_generated_content(FlowRef::deref_mut(root_flow), &context),
);
// Guess float placement.
profile(
profile_time::ProfilerCategory::LayoutFloatPlacementSpeculation,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| sequential::guess_float_placement(FlowRef::deref_mut(root_flow)),
);
// Perform the primary layout passes over the flow tree to compute the locations of all
// the boxes.
if root_flow
.base()
.restyle_damage
.intersects(ServoRestyleDamage::REFLOW | ServoRestyleDamage::REFLOW_OUT_OF_FLOW)
{
profile(
profile_time::ProfilerCategory::LayoutMain,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| {
let profiler_metadata = self.profiler_metadata();
let thread_pool = if self.parallel_flag {
STYLE_THREAD_POOL.style_thread_pool.as_ref()
} else {
None
};
if let Some(pool) = thread_pool {
// Parallel mode.
LayoutThread::solve_constraints_parallel(
pool,
FlowRef::deref_mut(root_flow),
profiler_metadata,
self.time_profiler_chan.clone(),
&*context,
);
} else {
                        // Sequential mode.
LayoutThread::solve_constraints(FlowRef::deref_mut(root_flow), &context)
}
},
);
}
profile(
profile_time::ProfilerCategory::LayoutStoreOverflow,
self.profiler_metadata(),
self.time_profiler_chan.clone(),
|| {
sequential::store_overflow(context, FlowRef::deref_mut(root_flow) as &mut dyn Flow);
},
);
self.perform_post_main_layout_passes(
data,
root_flow,
reflow_goal,
document,
rw_data,
context,
);
}
fn perform_post_main_layout_passes(
&self,
data: &Reflow,
mut root_flow: &mut FlowRef,
reflow_goal: &ReflowGoal,
document: Option<&ServoLayoutDocument>,
rw_data: &mut LayoutThreadData,
layout_context: &mut LayoutContext,
) {
// Build the display list if necessary, and send it to the painter.
self.compute_abs_pos_and_build_display_list(
data,
reflow_goal,
document,
FlowRef::deref_mut(&mut root_flow),
&mut *layout_context,
rw_data,
);
if self.trace_layout {
layout_debug::end_trace(self.generation.get());
}
if self.dump_flow_tree {
root_flow.print("Post layout flow tree".to_owned());
}
self.generation.set(self.generation.get() + 1);
}
fn reflow_all_nodes(flow: &mut dyn Flow) {
debug!("reflowing all nodes!");
flow.mut_base().restyle_damage.insert(
ServoRestyleDamage::REPAINT |
ServoRestyleDamage::STORE_OVERFLOW |
ServoRestyleDamage::REFLOW |
ServoRestyleDamage::REPOSITION,
);
for child in flow.mut_base().child_iter_mut() {
LayoutThread::reflow_all_nodes(child);
}
}
/// Returns profiling information which is passed to the time profiler.
fn profiler_metadata(&self) -> Option<TimerMetadata> {
Some(TimerMetadata {
url: self.url.to_string(),
iframe: if self.is_iframe {
TimerMetadataFrameType::IFrame
} else {
TimerMetadataFrameType::RootWindow
},
incremental: if self.first_reflow.get() {
TimerMetadataReflowType::FirstReflow
} else {
TimerMetadataReflowType::Incremental
},
})
}
}
impl ProfilerMetadataFactory for LayoutThread {
fn new_metadata(&self) -> Option<TimerMetadata> {
self.profiler_metadata()
}
}
// The default computed value for background-color is transparent (see
// http://dev.w3.org/csswg/css-backgrounds/#background-color). However, we
// need to propagate the background color from the root HTML/Body
// element (http://dev.w3.org/csswg/css-backgrounds/#special-backgrounds) if
// it is non-transparent. The phrase in the spec "If the canvas background
// is not opaque, what shows through is UA-dependent." is handled by rust-layers
// clearing the frame buffer to white. This ensures that setting a background
// color on an iframe element, while the iframe content itself has a default
// transparent background color is handled correctly.
fn get_root_flow_background_color(flow: &mut dyn Flow) -> webrender_api::ColorF {
let transparent = webrender_api::ColorF {
r: 0.0,
g: 0.0,
b: 0.0,
a: 0.0,
};
if !flow.is_block_like() {
return transparent;
}
let block_flow = flow.as_mut_block();
let kid = match block_flow.base.children.iter_mut().next() {
None => return transparent,
Some(kid) => kid,
};
if !kid.is_block_like() {
return transparent;
}
let kid_block_flow = kid.as_block();
let color = kid_block_flow.fragment.style.resolve_color(
kid_block_flow
.fragment
.style
.get_background()
.background_color,
);
webrender_api::ColorF::new(
color.red_f32(),
color.green_f32(),
color.blue_f32(),
color.alpha_f32(),
)
}
fn get_ua_stylesheets() -> Result<UserAgentStylesheets, &'static str> {
fn parse_ua_stylesheet(
shared_lock: &SharedRwLock,
filename: &str,
content: &[u8],
) -> Result<DocumentStyleSheet, &'static str> {
Ok(DocumentStyleSheet(ServoArc::new(Stylesheet::from_bytes(
content,
ServoUrl::parse(&format!("chrome://resources/{:?}", filename)).unwrap(),
None,
None,
Origin::UserAgent,
MediaList::empty(),
shared_lock.clone(),
None,
None,
QuirksMode::NoQuirks,
))))
}
let shared_lock = &GLOBAL_STYLE_DATA.shared_lock;
// FIXME: presentational-hints.css should be at author origin with zero specificity.
// (Does it make a difference?)
let mut user_or_user_agent_stylesheets = vec![
parse_ua_stylesheet(
&shared_lock,
"user-agent.css",
&resources::read_bytes(Resource::UserAgentCSS),
)?,
parse_ua_stylesheet(
&shared_lock,
"servo.css",
&resources::read_bytes(Resource::ServoCSS),
)?,
parse_ua_stylesheet(
&shared_lock,
"presentational-hints.css",
&resources::read_bytes(Resource::PresentationalHintsCSS),
)?,
];
for &(ref contents, ref url) in &opts::get().user_stylesheets {
user_or_user_agent_stylesheets.push(DocumentStyleSheet(ServoArc::new(
Stylesheet::from_bytes(
&contents,
url.clone(),
None,
None,
Origin::User,
MediaList::empty(),
shared_lock.clone(),
None,
Some(&RustLogReporter),
QuirksMode::NoQuirks,
),
)));
}
let quirks_mode_stylesheet = parse_ua_stylesheet(
&shared_lock,
"quirks-mode.css",
&resources::read_bytes(Resource::QuirksModeCSS),
)?;
    Ok(UserAgentStylesheets {
        shared_lock: shared_lock.clone(),
        user_or_user_agent_stylesheets,
        quirks_mode_stylesheet,
    })
}
lazy_static! {
static ref UA_STYLESHEETS: UserAgentStylesheets = {
match get_ua_stylesheets() {
Ok(stylesheets) => stylesheets,
Err(filename) => {
error!("Failed to load UA stylesheet {}!", filename);
process::exit(1);
},
}
};
}
struct RegisteredPainterImpl {
painter: Box<dyn Painter>,
name: Atom,
// FIXME: Should be a PrecomputedHashMap.
properties: FxHashMap<Atom, PropertyId>,
}
impl SpeculativePainter for RegisteredPainterImpl {
fn speculatively_draw_a_paint_image(
&self,
properties: Vec<(Atom, String)>,
arguments: Vec<String>,
) {
self.painter
.speculatively_draw_a_paint_image(properties, arguments);
}
}
impl RegisteredSpeculativePainter for RegisteredPainterImpl {
fn properties(&self) -> &FxHashMap<Atom, PropertyId> {
&self.properties
}
fn name(&self) -> Atom {
self.name.clone()
}
}
impl Painter for RegisteredPainterImpl {
fn draw_a_paint_image(
&self,
size: Size2D<f32, CSSPixel>,
device_pixel_ratio: Scale<f32, CSSPixel, DevicePixel>,
properties: Vec<(Atom, String)>,
arguments: Vec<String>,
) -> Result<DrawAPaintImageResult, PaintWorkletError> {
self.painter
.draw_a_paint_image(size, device_pixel_ratio, properties, arguments)
}
}
impl RegisteredPainter for RegisteredPainterImpl {}
struct RegisteredPaintersImpl(FnvHashMap<Atom, RegisteredPainterImpl>);
impl RegisteredSpeculativePainters for RegisteredPaintersImpl {
fn get(&self, name: &Atom) -> Option<&dyn RegisteredSpeculativePainter> {
self.0
.get(&name)
.map(|painter| painter as &dyn RegisteredSpeculativePainter)
}
}
impl RegisteredPainters for RegisteredPaintersImpl {
fn get(&self, name: &Atom) -> Option<&dyn RegisteredPainter> {
self.0
.get(&name)
.map(|painter| painter as &dyn RegisteredPainter)
}
}
|
code
|
/*******************************************************************************
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*******************************************************************************/
package org.ofbiz.minilang.method.envops;
import java.text.MessageFormat;
import java.util.LinkedList;
import java.util.List;
import org.ofbiz.base.util.MessageString;
import org.ofbiz.base.util.UtilValidate;
import org.ofbiz.base.util.collections.FlexibleMapAccessor;
import org.ofbiz.base.util.string.FlexibleStringExpander;
import org.ofbiz.minilang.MiniLangException;
import org.ofbiz.minilang.MiniLangRuntimeException;
import org.ofbiz.minilang.MiniLangValidate;
import org.ofbiz.minilang.SimpleMethod;
import org.ofbiz.minilang.method.MethodContext;
import org.ofbiz.minilang.method.MethodOperation;
import org.w3c.dom.Element;
/**
* Implements the <string-to-list> element.
*
* @see <a href="https://cwiki.apache.org/OFBADMIN/mini-language-reference.html#Mini-languageReference-{{%3Cstringtolist%3E}}">Mini-language Reference</a>
*/
public final class StringToList extends MethodOperation {
private final FlexibleMapAccessor<List<? extends Object>> argListFma;
private final FlexibleMapAccessor<List<Object>> listFma;
private final String messageFieldName;
private final FlexibleStringExpander stringFse;
public StringToList(Element element, SimpleMethod simpleMethod) throws MiniLangException {
super(element, simpleMethod);
if (MiniLangValidate.validationOn()) {
MiniLangValidate.handleError("<string-to-list> element is deprecated (use <set>)", simpleMethod, element);
MiniLangValidate.attributeNames(simpleMethod, element, "list", "arg-list", "string", "message-field");
MiniLangValidate.requiredAttributes(simpleMethod, element, "list", "string");
MiniLangValidate.expressionAttributes(simpleMethod, element, "list", "arg-list");
MiniLangValidate.noChildElements(simpleMethod, element);
}
stringFse = FlexibleStringExpander.getInstance(element.getAttribute("string"));
listFma = FlexibleMapAccessor.getInstance(element.getAttribute("list"));
argListFma = FlexibleMapAccessor.getInstance(element.getAttribute("arg-list"));
messageFieldName = element.getAttribute("message-field");
}
@Override
public boolean exec(MethodContext methodContext) throws MiniLangException {
String valueStr = stringFse.expandString(methodContext.getEnvMap());
List<? extends Object> argList = argListFma.get(methodContext.getEnvMap());
if (argList != null) {
try {
valueStr = MessageFormat.format(valueStr, argList.toArray());
} catch (IllegalArgumentException e) {
throw new MiniLangRuntimeException("Exception thrown while formatting the string attribute: " + e.getMessage(), this);
}
}
Object value;
if (UtilValidate.isNotEmpty(this.messageFieldName)) {
value = new MessageString(valueStr, this.messageFieldName, true);
} else {
value = valueStr;
}
List<Object> toList = listFma.get(methodContext.getEnvMap());
if (toList == null) {
toList = new LinkedList<Object>();
listFma.put(methodContext.getEnvMap(), toList);
}
toList.add(value);
return true;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder("<string-to-list ");
sb.append("string=\"").append(this.stringFse).append("\" ");
sb.append("list=\"").append(this.listFma).append("\" ");
if (!this.argListFma.isEmpty()) {
sb.append("arg-list=\"").append(this.argListFma).append("\" ");
}
if (!this.messageFieldName.isEmpty()) {
sb.append("message-field=\"").append(this.messageFieldName).append("\" ");
}
sb.append("/>");
return sb.toString();
}
/**
* A factory for the <string-to-list> element.
*/
public static final class StringToListFactory implements Factory<StringToList> {
@Override
public StringToList createMethodOperation(Element element, SimpleMethod simpleMethod) throws MiniLangException {
return new StringToList(element, simpleMethod);
}
@Override
public String getName() {
return "string-to-list";
}
}
}
|
code
|
Total Paddle will host two-night, eight-hour midweek paddle camps.
Starting Friday, September 21, 2018, Total Paddle will be offering weekend two-night, eight-hour paddle clinics.
Check-in Friday at 5 PM; dinner will be served at 7:30 PM.
Total Tennis is home to Total Paddle, the All-Star Platform Tennis Camp featuring present and former National Champions who will be on court, directing and teaching many of our September and October clinics.
These National Champs are renowned as the nation's best paddle instructors. You and your team can prepare for the 2018 - 2019 season at one of our paddle camps at Total Tennis in Saugerties, NY. Total Paddle's All-Star Platform Tennis Camps are geared to players of all levels. If you want to get away for 2 nights or just join us for a 3 hour clinic, Total Paddle is just the environment you're looking for. You'll play on 4 new courts with the Catskill Mountains as the backdrop. Our courts are lit for evening play. If you are a member of a team, especially at a club without a paddle pro, this is a great opportunity to work on all aspects of your game with the nation's #1 paddle players as your pros.
Mike Gillespie will host a Performance Paddle Camp at Total Tennis Wednesday September 26 - Friday September 28, 2018.
|
english
|
Manu Sharma's name came into the headlines after the Jessica Lal murder case. He is the son of former Union Minister Vinod Sharma. Jessica was a model living in New Delhi. The incident dates to 29 April 1999. That day, Jessica was working as a barmaid at a party. Manu Sharma was present at the party with his friends.
Manu Sharma, despite being a convict, came out of jail several times on parole (file photo)
New Delhi, 03 June 2020, updated 19:08 IST
He was granted parole owing to good conduct in jail
Manu Sharma also benefited from furlough
Manu Sharma, convicted in the Jessica Lal murder case, was finally released from jail after 17 years. But even before this, thanks to his good conduct, he came out of jail and returned several times. He got this facility through parole and furlough. Manu came out of jail not once but several times with the help of these facilities available to convicted prisoners.
In fact, Manu Sharma's name came into the headlines after the Jessica Lal murder case. He is the son of former Union Minister Vinod Sharma. Jessica was a model living in New Delhi. The incident dates to 29 April 1999. That day, Jessica was working as a barmaid at a party. Manu Sharma was present at that party with his friends. A dispute then arose between Jessica and his group over the serving of liquor, and a shot was fired. The bullet hit Jessica Lal directly and she died. The matter reached the police, and Manu Sharma and his three friends were arrested.
Must read: Manu Sharma, convict in the Jessica Lal murder case, released; the Lieutenant Governor gave permission
The trial ran for seven years, and on 21 February 2006 Manu Sharma and his associates were acquitted in the Jessica Lal murder case. This enraged the public. Under tremendous pressure, the prosecution filed an appeal. The Delhi High Court took up the matter and held daily hearings on a fast track. The case ran for 25 days, and Manu Sharma was held guilty of Jessica Lal's murder. On 20 December 2006 he was sentenced to life imprisonment, and he had been in jail ever since.
Parole granted in 2009
After being convicted of murder, Manu Sharma was lodged in Tihar Jail for ten years. During this period he applied for parole several times. As a result, on 24 September 2009 he was granted parole and released for one month. This parole was given to Manu Sharma so he could attend his grandmother's last rites. Later, on his application, the Delhi government extended the parole period by another month. But during this very period, Manu Sharma was spotted at a discotheque in Delhi. The news flooded the media. As Manu had broken the parole rules, the Delhi government was greatly embarrassed, and he had to return to jail.
Furlough granted in 2012
In 2010, the furlough facility was introduced in Tihar Jail, but prisoners sentenced to death or to life imprisonment were excluded from it. Manu Sharma, serving a life sentence in the murder case, learned about furlough and applied for it through his lawyers. For the reason above, he did not get furlough at that time. On 7 September 2012, Manu Sharma filed a petition in the High Court, stating that his furlough application had not been considered for 19 months. The court asked the government to reply by 17 September 2012. The government then swung into action and a report was sought.
Manu Sharma's conduct while in jail remained good. That same year, his furlough application was approved by the Tihar Jail administration, and in September 2012 he was given furlough for one week. This was the first occasion on which Manu Sharma was granted furlough; before this, prisoners involved in major crimes were not given the facility. Manu Sharma left Tihar Jail on furlough on 29 September and returned to jail on 7 October.
Parole for studies in 2014
While serving his sentence, Manu Sharma studied human rights and obtained a master's degree. He was granted parole in December 2014 to sit the examination. In fact, Manu taught painting to prisoners in jail and bore the education expenses of prisoners' children. On this basis he was granted parole. While in jail, Manu had also worked toward the reform of prisoners. Because of his good behaviour, he kept getting parole.
Parole granted in 2015 for marriage
The Supreme Court has also granted prisoners the freedom to marry. Under this, Manu Sharma too got married while in jail. In April 2015 he was granted parole to marry, and the wedding took place in Chandigarh on 22 April 2015. It was a love marriage; the bride's family came to Chandigarh from Mumbai. Only a few people attended, among them Manu Sharma's brother-in-law Rajneesh Kumar, Vinod Sharma's brother Shyam Sundar Sharma, Kartik Sharma, his sister Dr Prachi Shetty and brother-in-law Raje Shetty. A few months earlier, Manu had also been granted a week's parole for his younger brother's wedding.
What is furlough?
Furlough is similar to parole. A convicted prisoner can be given furlough three times a year, for a total of seven weeks. The conditions are that the prisoner must have served at least three years of the sentence and that his conduct in jail be good. The Director General of Prisons has the authority to consider furlough applications, whereas for parole that authority rests with the government and the courts. According to jail officials, prisoners convicted of murder, robbery, dacoity, kidnapping, rape, or extortion, and those with criminal tendencies, earlier did not get this facility. Later, some changes were made to the furlough rules, and only then did Manu get its benefit.
What is parole?
Parole is the process of releasing a prisoner from jail for a short time in an emergency or under special circumstances. The prisoner's circumstances are assessed and his conduct is also examined. The authority to grant parole rests with the government or the court, whereas the authority to grant furlough rests with the Director General of Prisons.
Jessica murder case: Manu Sharma, serving life imprisonment, files parole application
Decision reserved on Manu Sharma's parole petition
|
hindi
|
WARNING: Xylitol IS Poisonous To Your Dog!!
Xylitol is Poison For Your Dog!!
Xylitol is a sugar substitute used in items such as chewing gum, mints, nicotine gum, chewable vitamins, and oral-care products. It is also purchased in granulated form and used as a sweetener for cereals, beverages, and baked goods. Caution - Xylitol is very toxic to dogs.
Xylitol has grown in popularity during the past few years, primarily because it is considered a good sugar substitute for those on low-carbohydrate diets as well as those concerned with the glycemic index of foods. Xylitol is also popular among diabetics because it does not cause dramatic peaks of insulin production after use.
Unfortunately, as the popularity of xylitol products has increased, so has the number of reported toxic exposures in dogs. In 2003, the ASPCA’s Animal Poison Control Center reported three cases of xylitol poisoning. In 2005, 193 cases were reported. And during just the first half of 2006, they received 114 reported cases of xylitol poisoning in dogs.
Almost all of these poisonings occurred due to lack of awareness: pet owners did not know that xylitol is poisonous to dogs.
Older research showed that the primary xylitol side effect in dogs was hypoglycemia (low blood sugar). Recent research has found that it can also produce acute and possibly life-threatening liver disease.
Dogs seem to absorb almost 100% of xylitol into their systems. Humans absorb only 50%. Only a small amount of xylitol is needed to produce toxic effects in dogs.
Watch for these symptoms. After ingesting xylitol, dogs may begin to vomit and develop hypoglycemia within an hour. Some dogs will develop liver failure within 12 to 24 hours after xylitol ingestion. One reported case involved a 3-year-old dog that ate five or six cookies containing the sweetener. It became ill 24 hours later and died the next day. If your dog ingests xylitol, call your veterinarian immediately. Pet owners who use xylitol-sweetened products in their home need to be aware of its toxic effect on dogs. Please tell your friends and neighbors who own dogs; they need to ensure that their dogs do not get hold of any of these products.
It could be as innocent as an owner sharing a cookie with their best friend. The results could be tragic.
Note: Xylitol’s effect on cats is currently unknown. Other sugar sweeteners such as aspartame, saccharin, sorbitol, mannitol, and sucralose are generally regarded as safe for dogs.
package controllers

import (
	"fmt"
	"net/http"

	"github.com/Sirupsen/logrus"
	"github.com/cwen0/tinMongo/models"
	"github.com/gin-contrib/sessions"
	"github.com/gin-gonic/gin"
)

// LoginGet renders the login page.
func LoginGet(c *gin.Context) {
	c.HTML(http.StatusOK, "login/login", nil)
}

// LoginPost authenticates against MongoDB and, on success, stores the
// connection details in the session.
func LoginPost(c *gin.Context) {
	session := sessions.Default(c)
	auth := &models.Auth{}
	response := Wrapper{}
	if err := c.BindJSON(auth); err != nil {
		logrus.Errorf("BindJSON failed: %v", err)
		response.Errors = &Errors{Error{
			Status: http.StatusBadRequest,
			Title:  "Please, fill out form correctly!!",
		}}
		c.JSON(http.StatusBadRequest, response)
		return
	}
	mongo, url, err := auth.Connect()
	if err != nil {
		response.Errors = &Errors{Error{
			Status: http.StatusBadRequest,
			Title:  "Authentication failed!!",
		}}
		// Do not log the password: credentials must not end up in log files.
		logrus.Errorf("Login failed: %v, HostName: %s, Port: %d, UserName: %s, Database: %s", err, auth.HostName, auth.Port, auth.UserName, auth.Database)
		c.JSON(http.StatusBadRequest, response)
		return
	}
	if err = models.InitMongo(mongo); err != nil {
		logrus.Errorf("Init mgo session failed: %v", err)
		response.Errors = &Errors{Error{
			Status: http.StatusInternalServerError,
			Title:  fmt.Sprintf("Init mgo session failed: %v", err),
		}}
		c.JSON(http.StatusInternalServerError, response)
		return // was missing: without it the handler falls through to the success path
	}
	session.Set("host", auth.HostName)
	session.Set("port", auth.Port)
	session.Set("url", url)
	session.Save()
	logrus.Info("Login success")
	c.JSON(http.StatusOK, response)
}
\begin{document}
\title{Answer to an open question concerning the $1/e$-strategy for best choice under no information.}
\author{F. Thomas Bruss\footnote{F.\,Thomas Bruss, Universit\'e Libre de Bruxelles,
D\'epartement de Math\'ematique, CP 210, B-1050 Brussels, Belgium (tbruss@ulb.ac.be)} and L.C.G. Rogers\footnote{L.C.G. Rogers, Statistical Laboratory, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB,
United Kingdom (lcgr1@cam.ac.uk)}\\Universit\'e Libre de Bruxelles and University of Cambridge}
\maketitle
\centerline{\bf In Memory of} \centerline {\bf Professor Larry Shepp}
\noindent
\begin{abstract}
This paper answers a long-standing open question concerning the $1/e$-strategy for the problem of best choice. $N$ candidates for a job arrive at times independently uniformly distributed in $[0,1]$. The interviewer knows how each candidate ranks relative to all others seen so far, and must immediately appoint or reject each candidate as they arrive. The aim is to choose the best overall. The $1/e$ strategy is to follow the rule: `Do nothing until time $1/e$, then appoint the first candidate thereafter who is best so far (if any).'
The question, first discussed with Larry Shepp in 1983, was to know whether the $1/e$-strategy is optimal if one has `no information about the total number of options'. Quite what this might mean is open to various interpretations, but we shall take the proportional-increment process formulation of \cite{BY}. Such processes are shown to have a very rigid structure, being time-changed {\em pure birth processes}, and this allows some precise distributional calculations, from which we deduce that the $1/e$-strategy is in fact not optimal.
\end{abstract}
\noindent
{\bf Keywords} Optimal stopping, secretary problem, quasi-stationarity, Pascal process, proportional increments, pure birth process, well-posed problem,
R\'enyi's theorem of relative ranks, Hamilton-Jacobi-Bellman equation.
\noindent{\bf MSC 2010 Subject Code}: 60G40
\section{Dedication and background}
On the evening of Professor Larry Shepp's talk ``Reflecting Brownian Motion'' at Cornell University on July 11, 1983 (13th Conference on Stochastic Processes and Applications), Professor Shepp and Thomas Bruss ran into each other in front of the Ezra Cornell statue. Thomas was honoured to meet Prof.\ Shepp in person, and Larry simply asked ``What are you working on?'' And so Larry was the very first person with whom Thomas could discuss the {\it $1/e$-law of best choice} resulting from the {\it Unified Approach} \cite{B84}, which had been accepted for publication shortly before.
Thomas was pleased to see the true interest Prof. Shepp showed for the $1/e$-law. As many of us have seen before, when Larry was interested in a problem, elementary or not, then he was really deeply interested.
This article deals with an open question concerning the so-called $1/e$-strategy for the problem of best choice, which is to wait and do nothing until time $1/e$ and then to accept the first candidate (if any) who is best so far. The question which attracted our particular interest was whether this strategy is optimal if one has no initial information about the number $N$ of candidates. Bruss drew also attention to this open question in his own talk ``The $e^{-1}$-law in best choice problems" at Cornell on July 14, 1983, and re-discussed it with Larry at several later occasions.
A written record of somewhat related questions appeared on page 885 of \cite{B84} where he stated the conjecture that the $1/e$-strategy is optimal in certain two-person games for a decision maker who faces an adversary trying to minimize the win probability. However, the two-person game situation is quite different from the open question discussed with Larry and will not be considered in this paper.
As far as we are aware, the last time the question discussed with Larry was addressed was in \cite{BY}, and this may actually be the only written reference to the real open question. \cite{BY} studied another no-information stopping problem, the so-called last-arrival problem (l.a.p.). To prepare the paper's main result, they examined the hypothesis of no-information in a detailed way, and their conclusions will be used in the present paper. \cite{BY} also used these to give, as a side-result, an alternative proof of the $1/e$-law. However, as they pointed out, their approach did not contribute new insights for the open question.
The present article proves that the $1/e$-strategy is {\it not} optimal under the interpretation of `no information' used by \cite{BY}. It thus closes a 37-year gap.
\section{The Unified Approach}
The so-called {\it 1/e-law of best choice~} is a result obtained in the {\it Unified Approach}-model of \cite{B84}. The model is as follows:
\begin{quote}{\bf Unified Approach}: Suppose $N>0$ points are IID $U[0,1].$
Points are marked with qualities which are supposed to be uniquely rankable from $1$ (best) to $N$ (worst), and all rank arrival orders are supposed to be equally likely. The goal is to maximize the probability of stopping online, and without recall on a preceding observation, on rank 1. \end{quote}
\noindent This model was suggested for the best choice problem (secretary problem) for an unknown number $N$ of candidates. (More general payoff-functions for the same model were studied in \cite{BrussSamuels}.)
Now, if we contemplate the probability of picking the best candidate, we immediately face the question `What is $N$?' If $N$ is fixed and known, this is just the classical secretary problem. But if we take a Bayesian point of view and suppose a prior distribution for $N$, with arrivals coming at times $t \in \mathbb N$,
\cite{abdel} showed that the problem may not only lead to so-called {\it stopping islands} (\cite{presman}), but, much worse, that for any $\epsilon>0$ there exists a sufficiently unfavorable distribution $\{P(N=n)\}_{n=1,2, \cdots}$ to reduce the value of the optimal strategy to less than $\epsilon.$ In other words, if $N$ is allowed an arbitrary prior, optimality may mean almost nothing.
These discouraging facts prompted efforts to find more tractable models, such as the model of \cite{stewart}, and the one of \cite{cowan} and its generalisation studied in \cite{bruss88}.
The philosophy behind the unified approach of \cite{B84} was different. The approach was to suppose that arrival times are in $[0,1]$ and to study so-called $x$-strategies, where you do nothing until time $x$, and thereafter pick the first record. One of the main results of that paper was that the $1/e$-strategy gives a success probability of at least $1/e$ whatever the prior distribution of $N$, and that no other $x$-strategy does this well. This robustness suggests that the $1/e$-strategy is somehow special, and the open question became natural.
It is relevant to mention here that a similar phenomenon of robustness shows up in different forms. One is what \cite{bruss1990conditions} called `quasi-stationarity', meaning essentially that the optimal strategy
may (even for rather general payoffs) hardly depend on the number of candidates observed so far.
More remarkably, for so-called Pascal processes, optimal strategies do not depend at all on the number of preceding observations (For their characterization see \cite{bruss1991pascal}).
\subsection{The open question}\label{conj}
First we need to be clear about what exactly we mean by optimality of a strategy under no information on $N.$ We see a counting process $(N_t)_{0\leq t \leq 1}$, $N_0=0$, and we define $\F_t = \sigma( N_u,u \leq t)$. The law of $(N_t)_{0\leq t \leq 1}$ is $P_\theta$ for some $\theta \in \Theta$, where $\{P_\theta: \theta \in \Theta\}$ is the collection of possible laws considered.
The notion that `we have no prior information at all on $N$' means that we are only going to consider strategies which are $(\F_t)$-stopping times. That is, the strategies allowed can only know the arrival times (and ranks) of the individuals, not the value of $\theta \in \Theta$. This is the viewpoint of classical statistics.
\noindent
To understand the sense of optimality, define the process $\rho$ by
\begin{eqnarray*}
\rho_t &=& \hbox{\rm overall rank of object arriving at $t$ if $\Delta N_t =1$}
\\
&=& 0 \quad \hbox{\rm otherwise.}
\end{eqnarray*}
Let $\T$ denote the set of all $(\F_t)$-stopping times. Then the value of using $\tau \in \T$ is
\begin{equation}
R(\theta,\tau) = P_\theta[ \rho_\tau = 1 ].
\label{Rdef}
\end{equation}
We denote by $\tau^*$ the stopping time corresponding to the $1/e$ strategy, which is simply $\tau^* = \inf \{ t \geq 1/e: \rho_t = 1 \},$ where it is understood that $\tau^*=1$ if no such $t$ exists, and that, in this case, we lose by definition. In these terms, the open question is stated precisely as follows:
True or false:
\begin{equation}
\forall \theta \in \Theta, \forall \tau \in \T, \qquad
R(\theta, \tau^*) \geq R(\theta, \tau)?
\label{1/e_conj}
\end{equation}
Of course, the set $\Theta$ of possible laws of $(N_t)_{0\leq t \leq 1}$ plays an important r\^ole in the conjecture. For example, if $\Theta$ contained just one law, under which $(N_t)_{0\leq t \leq 1}$ was the counting process of ten $U[0,1]$ arrival times, then clearly the $1/e$ strategy would not be optimal in the sense of \eqref{1/e_conj}. We shall shortly explain exactly what set of laws is considered here.
\subsection{A related problem.}\label{ss22}
We return to the related {\em last-arrival problem}
under no information (l.a.p.) studied in \cite{BY}.
In this model an unknown number $N$ of points are IID $U[0,1]$ random variables, and an observer, inspecting the interval $[0,1]$ sequentially from left to right, wants to maximise the probability of stopping online on the very last point. No information about $N$ whatsoever is given.
Only one stop is allowed, and this again without recall on preceding observations.
Central to the approach of \cite{BY} is the choice of the family $\Theta$ of laws of the counting process $(N_t)_{0\leq t \leq 1}$. These authors present arguments (based on the properties of IID $U[0,1]$ arrival times) to justify their focus on the family of what they call {\em proportional-increments (p.i.)} counting processes. We shall not repeat all the reasoning which leads to this choice of counting processes, but we show its basic motivation and explain why we take its implications as our starting point.
\cite{BY}
defined a p.i.-process as follows:
A stochastic process $(N_t)$ defined on a filtered probability space $(\Omega, {\cal F}, ({\cal F}_t), P)$ with natural filtration ${\cal F}_t=\sigma\{N_u: u\le t\}$ is a p.i.-counting process on $]0,\infty[$, if $$\forall t ~{\rm with}~ N_t>0, ~\forall s \ge 0,$$
$$\mathrm E(N_{t+s}-N_t \,\Big |\, {\cal F}_t) = \frac{s}{t} N_t~a.s.$$ The meaning of {\it proportional} is clear from this definition. Moreover, three out of the four Conclusions 1--4 in \cite{BY}, which lead to this definition, are proved to be compelling consequences of combining the IID $U[0,1]$-hypothesis for arrival times with the hypothesis that no prior information on $N$ can be used. Only Conclusion 3 (on page 3244) makes a concession: there the authors use an (unprovable) tractability argument to justify setting equal to zero an unknown random variable whose expectation must be zero.
Why a concession? It is important to note that, if one has no information on $N,$ then the time of the first arrival $T_1$ is a particularly delicate point. It is the smallest order statistic of all $N$ arrival times. However, it is exactly this one which escapes any distributional prescription, because the no-information setting does not allow us to assume a prior distribution $\{P(N=n)\}_{n=1,2, \cdots}.$
Hence, if one wants to confine one's interest to a well-posed problem, as \cite{BY} did, one has to make a concession somewhere if one wants to properly define a relevant decision process in the no-information case.
The mentioned concession seemed the least restrictive and almost compelling, but, more importantly, \cite{BY} found a solid a-posteriori justification for their tractability argument. The solution of the l.a.p. they obtained for p.i.-processes
satisfied the criteria of \cite{hadamard}
for the solution of a well-posed problem. \cite{BY} found these criteria convincing.
Now note that the only difference between the l.a.p. and our open problem (how to find rank 1) is that we want to stop on the last record of the arrival process, and not on the last point. By the IID-hypothesis for arrival times of absolute ranks, R\'enyi's theorem on relative ranks (\cite{renyi}) implies that the $k$th point is a record with probability $1/k$, independently of preceding arrivals. Thus the basic arrival process
$(N_t)$ is not affected and can be chosen exactly the same!
This is why, confining our interest to well-defined problems only, we suppose that $(N_t)$ is a p.i.-process in the sense of \cite{BY}, from which we take the following definition.
\begin{defin}
{\em A p.i.- counting process is a counting process whose compensator is $\lambda_t \equiv N_t/t$, so that ($t\in (0,1]$)
\begin{equation}
M_t \equiv N_{t \vee T_1} - N_{T_1} - \int_{T_1}^{t \vee T_1} \frac{N_s}{s}\; ds
\qquad \hbox{\rm is a martingale in its own filtration,}
\label{PIdef}
\end{equation}
where $T_1 \equiv \inf\{ t: N_t=1\}$ is the first jump time of the counting process.}
\end{defin}
The class $\Theta$ of counting processes will be the class of all p.i.-processes, and the meaning of all the notation appearing in the statement \eqref{1/e_conj} has now been defined.
\section{Analysis of the open question.}\label{S3a}
Our analysis starts with the following little result, whose proof is immediate from the statement.
\begin{Prop}\label{prop1}
Suppose that $(N)$ is a p.i.-counting process.
If we define $\tilde{N}(u) = N(e^u)$ for $u \in (-\infty,0]$, and $t_1
= \log T_1$, then
\begin{eqnarray*}
M(e^u) &=& N(e^u\vee T_1)-N(T_1) - \int_{T_1}^{e^u \vee T_1} \frac{N_s}{s}\; ds
\\
&=& \tilde{N}(u \vee t_1) - \tilde{N}(t_1) - \int_{t_1}^{u \vee t_1} \tilde{N}(s)
\; ds
\end{eqnarray*}
is a martingale in its own filtration, so $(\tilde{N})$ is a pure birth process, started with one individual at time $t_1$.
\end{Prop}
So the requirement that $(N_t)$ be a p.i.-counting process is not in fact very general: apart from the choice of the time at which $(\tilde{N})$ starts, the behaviour is uniquely determined!
\noindent
{\sc Remarks.} If we model a Poisson process with intensity $\lambda$ in a Bayesian fashion, with prior density $f(\lambda) = \varepsilon \exp(- \varepsilon \lambda)$ for $\lambda$, then the posterior mean for $\lambda$ given $\F_t$ is $N_t/t(1+\varepsilon)$, so a p.i.-counting process is in some sense a limit of a Poisson process where we put an uninformative prior over $\lambda$.
\bigbreak
If we run a pure birth process from $\tilde{N}_u=1$ ($u <0$) to time $0$, the PGF
of $\tilde{N}_0$ is easily shown to be
\begin{equation}
E[z^{\tilde{N}_0} \vert \tilde{N}_u = 1] = \frac{ze^{u}}{1-z(1-e^{u})}
\qquad (z \in [0,1]),
\label{pbp1}
\end{equation}
so that $\tilde{N}_0$ is 1+geometric($e^{u}$).
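As a quick numerical sanity check (ours, not part of the original argument), the following Python sketch simulates a rate-1 pure birth (Yule) process started with one individual at time $u<0$ and verifies empirically that the population at time $0$ is $1+\hbox{geometric}(e^u)$, so in particular $E[\tilde{N}_0]=e^{-u}$ and $P[\tilde{N}_0=1]=e^{u}$:

```python
import math
import random

def yule_population_at_zero(u, rng):
    """Population at time 0 of a rate-1 pure birth process started
    with a single individual at time u < 0."""
    t, n = u, 1
    while True:
        # with n individuals alive, the time to the next birth is Exp(n)
        t += rng.expovariate(n)
        if t >= 0.0:
            return n
        n += 1

rng = random.Random(42)
u = -1.0
samples = [yule_population_at_zero(u, rng) for _ in range(200_000)]

mean = sum(samples) / len(samples)
p1 = samples.count(1) / len(samples)

assert abs(mean - math.exp(-u)) < 0.05   # E[N~_0] = e^{-u} = e
assert abs(p1 - math.exp(u)) < 0.01      # P[N~_0 = 1] = e^{u}
```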
Obviously, from \eqref{pbp1} we deduce
\begin{equation}
E[z^{\tilde{N}_0} \vert \tilde{N}_u = k] = \biggl\lbrace
\frac{ze^{u}}{1-z(1-e^{u})} \biggr\rbrace^k
\qquad (z \in [0,1]).
\label{pbp2}
\end{equation}
Thus if we see a record in the process $(\tilde{N})$ at time $u <0$, at the arrival of the $n^{ \hbox{\rm th} }$
observation, the probability that this is the best overall will be
\begin{equation}
\tilde{\Pi}_n(u) \equiv E \biggl[ \;\frac{n}{\tilde{N}_0}\;
\biggl\vert\; \tilde{N}_u = n \biggr]
= n \int_0^1 \frac{dz}{z} \biggl\lbrace
\frac{ze^{u}}{1-z(1-e^{u})} \biggr\rbrace^n .
\label{pit}
\end{equation}
In terms of the original process $(N)$, if we see a record at the arrival of the
$n^{ \hbox{\rm th} }$ observation at time $t \in (0,1)$, then the probability that
this is the best overall is
\begin{equation}
\Pi_n(t) \equiv E \biggl[ \;\frac{n}{N_1}\;
\biggl\vert\; N_t = n \biggr]
= n \int_0^1 \frac{dz}{z} \biggl\lbrace
\frac{z t}{1-z(1-t)} \biggr\rbrace^n .
\label{pin}
\end{equation}
Clearly this has to be increasing in $t$, but from numerics it appears also to be decreasing in $n.$
We can prove that this has to be the case, as follows. If we fix $t \in (0,1)$ then conditional on $N_t=n$ we have that
\begin{equation}
\frac{N_1}{n} = \xi_n \equiv\frac{n + W_1+ \ldots + W_n}{n},
\end{equation}
where the $W_j$ are IID geometrics. Now $(\xi_n)$ is a reversed martingale in the exchangeable filtration, so $(1/\xi_n)$ is a reversed submartingale in the exchangeable filtration, so its expectation decreases with $n$.
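Both monotonicity claims can also be checked numerically. The following illustrative Python snippet (our addition) evaluates the integral in \eqref{pin} by the midpoint rule and checks that $\Pi_n(t)$ increases in $t$, decreases in $n$, and, for $n=1$, agrees with the closed form $\Pi_1(t)=-t\log t/(1-t)$ that one obtains by evaluating the integral directly:

```python
import math

def Pi(n, t, steps=100_000):
    """Midpoint-rule evaluation of Pi_n(t) = n * int_0^1 (zt/(1-z(1-t)))^n dz/z."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * h
        total += z**(n - 1) * (t / (1.0 - z * (1.0 - t)))**n
    return n * total * h

# increasing in t ...
assert Pi(1, 0.3) < Pi(1, 0.6) < Pi(1, 0.9)
# ... and decreasing in n (the reversed-submartingale argument above)
assert Pi(1, 0.5) > Pi(2, 0.5) > Pi(5, 0.5)
# closed form for n = 1: Pi_1(t) = -t*log(t)/(1-t)
assert abs(Pi(1, 0.5) + 0.5 * math.log(0.5) / 0.5) < 1e-4
```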
\section{The value of a fixed threshold rule.}\label{one_over_e}
Suppose we use a fixed threshold rule, that is, we do nothing until $u\geq b$ and then we take the first record thereafter. The $1/e$ rule corresponds to the special case $b=-1$. What is the value of this?
If $\tilde{N}_{b} = n$, then the distribution of the number $Y$ of further observations is known, and is a negative binomial distribution:
\begin{equation}
P[ Y = y ] = q^y p^n \binom{n+y-1}{y} \qquad(y \geq 0),
\label{NBdist}
\end{equation}
where $p = \exp(b)$. Given that $Y=y$, the probability that the best comes after the first $n$ observations is $y/(n+y)$, and the probability that the first record after $u=b$ is actually the best is
\begin{equation}
P[ \hbox{\rm first record after $n$ is best}| \hbox{\rm best comes after first $n$,
$Y=y$}]
= \frac{1}{y} \sum_{j=1}^y \frac{n}{n+j-1} \;.
\end{equation}
Thus we have an expression for the probability that we pick the best using this rule:
\begin{equation}
P[ \hbox{\rm win}] = \sum_{y \geq 1} P[Y=y]\; \frac{n}{n+y}\; \sum_{j=1}^y\;
\frac{1}{n+j-1}.
\label{winprob}
\end{equation}
\subsection{The special case $n=1$.}\label{ss1}
Let us firstly observe that for $t \in (0,1)$
\begin{equation}
f_j(t) \equiv \sum_{k \geq j} \frac{t^k}{k} = \int_0^t \; \frac{s^{j-1}}{1-s} \; ds,
\label{fjdef}
\end{equation}
from which we see that $f_1(t) = -\log(1-t)$.
In the special case $n=1$, we have
\begin{eqnarray}
P[ \hbox{\rm win}] &=& \sum_{k \geq 1} q^k p\; \frac{1}{1+k}\; \sum_{j=1}^k\;
\frac{1}{j}\nonumber
\\
&=& pq^{-1} \sum_{j \geq 1} \; \frac{1}{j} \; f_{j+1}(q)\nonumber
\\
&=& pq^{-1} \int_0^q \sum_{j\geq 1} \frac{s^j}{j} \; \frac{ds}{1-s} \nonumber
\\
&=& pq^{-1} \int_0^q \biggl( \; \int_0^s \frac{dv}{1-v}
\; \biggr) \; \frac{ds}{1-s} \nonumber
\\
&=& \half\, pq^{-1} \biggl( \; \int_0^q \frac{dv}{1-v}
\; \biggr)^2 \nonumber
\\
&=& \half\, pq^{-1} \bigl( \; \log(1-q)
\; \bigr)^2 .\label{V1}
\end{eqnarray}
Similarly, from \eqref{pit} we have ($p \equiv 1-q \equiv e^u$)
\begin{eqnarray}
\tilde{\Pi}_1(u) &=& \sum_{k \geq 0}\frac{q^k p}{1+k} \nonumber
\\
&=& \frac{p}{q} \; f_1(q) \nonumber
\\
&=& - \frac{p}{q}\; \log(1-q).
\label{pi1}
\end{eqnarray}
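The closed forms \eqref{V1} and \eqref{pi1} are easy to confirm numerically. The short Python check below (our addition, not part of the paper) sums the defining series directly at $u=-1$, compares with the closed forms, and verifies that their difference is exactly $p/2q$, anticipating \eqref{gap}:

```python
import math

p = math.exp(-1.0)      # p = e^u at u = -1
q = 1.0 - p

# series for the win probability of the b = -1 rule given one arrival, eq. (V1)
V1_series, H = 0.0, 0.0
for k in range(1, 2000):
    H += 1.0 / k                      # harmonic number H_k
    V1_series += q**k * p / (1 + k) * H

# series for the probability that a record at u = -1 is best overall, eq. (pi1)
Pi1_series = sum(q**k * p / (1 + k) for k in range(0, 2000))

V1_closed = 0.5 * (p / q) * math.log(1.0 - q)**2
Pi1_closed = -(p / q) * math.log(1.0 - q)

assert abs(V1_series - V1_closed) < 1e-10
assert abs(Pi1_series - Pi1_closed) < 1e-10
# at u = -1 the gap is exactly p/(2q) > 0
assert abs((Pi1_closed - V1_closed) - p / (2 * q)) < 1e-12
```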
\section{The Hamilton-Jacobi-Bellman equations.}\label{S3}
If $V_n(u)$ denotes the value of being at time $u\leq 0$ with $n$ events already observed, none of them at time $u$, then the HJB equations of optimal control for the $V_n$ are
\begin{eqnarray}
0 &=& \dot{V_n}(u) + n \biggl\lbrace \frac{n}{n+1} \, V_{n+1}(u) +
\frac{1}{n+1} \max \{ V_{n+1}(u), \tilde{\Pi}_{n+1}(u) \} -V_n(u) \; \biggr\rbrace
\nonumber
\\
&=&\dot{V_n} + n( V_{n+1} - V_n ) + \frac{n}{n+1} (\tilde{\Pi}_{n+1} -
V_{n+1})^+,
\label{HJB}
\end{eqnarray}
together with the boundary conditions $V_n(0)=0$.
The solution is then the value function $V_n(u).$
Now, if the answer to the open question is affirmative, then for all $n$, $V_n=V_n^*$, where $V^*$ is the value function for using the $1/e$ strategy, which we actually know reasonably explicitly; it is given by the right-hand side of \eqref{winprob}, where the dependence on the time $u<0$ comes via the parameter $p = e^u$ of the negative binomial distribution \eqref{NBdist}. Now by inspection of the HJB equations, we see that the optimal rule will be that we stop when we see a new record at time $u<0$ when there are $N_u = n$ observations in total if and only if
\begin{equation}
\tilde{\Pi}_n(u) > V_n(u).
\label{optstop}
\end{equation}
{\em If the $1/e$-strategy is optimal, this would say that}
\begin{equation}
\tilde{\Pi}_n(u) > V^*_n(u) \qquad \hbox{\rm if and only if}\quad u > -1.
\end{equation}
We can investigate this numerically by calculating $\tilde{\Pi}_n(-1)$ and $V^*_n(-1)$ and comparing them; the results are plotted in Figure \ref{ceg}.
\noindent
\begin{figure}
\caption{$\tilde{\Pi}_n(-1)$ and $V^*_n(-1)$ as functions of $n$.}
\label{ceg}
\end{figure}
We see that $\tilde{\Pi}_n(-1) > V^*_n(-1)$ for all $n$, and that the gap narrows as $n$ increases. Hence the answer to the open question is No. The $1/e$-strategy is not optimal.
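The comparison behind Figure \ref{ceg} can be reproduced with a few lines of Python (our sketch: midpoint-rule integration for \eqref{pit} and a truncated negative binomial sum for \eqref{winprob}, both at the $1/e$ threshold $b=-1$):

```python
import math

def Pi_tilde(n, p, steps=100_000):
    """Eq. (pit) by the midpoint rule: n * int_0^1 (zp/(1-z(1-p)))^n dz/z."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * h
        total += z**(n - 1) * (p / (1.0 - z * (1.0 - p)))**n
    return n * total * h

def V_star(n, p, ymax=3000):
    """Eq. (winprob): win probability of the threshold rule given n arrivals
    at the threshold, with Y negative-binomial as in eq. (NBdist)."""
    q = 1.0 - p
    total, inner = 0.0, 0.0
    for y in range(1, ymax):
        inner += 1.0 / (n + y - 1)       # running sum_{j=1}^{y} 1/(n+j-1)
        total += math.comb(n + y - 1, y) * q**y * p**n * n / (n + y) * inner
    return total

p = math.exp(-1.0)                       # the 1/e threshold, b = -1
for n in range(1, 9):
    assert Pi_tilde(n, p) > V_star(n, p)  # the gap seen in the figure
```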
\subsection{Analytic proof.}\label{ss2}
It is nice to see without resorting to numerics that the answer must be No, by considering the special case $n=1$. From \eqref{V1} and \eqref{pi1} we see that for the $1/e$ rule, where $p = e^{-1}$ and $u=-1$,
\begin{equation}
\tilde{\Pi}_1(u) - V_1(u) = \frac{p}{2q}\bigl[\; -2\log(1-q) -
\bigl( \; \log(1-q)
\; \bigr)^2
\;\bigr] = \frac{p}{2q}
\label{gap}
\end{equation}
which is clearly positive.
\subsection{Should this have been obvious?}
A closer look at \eqref{gap} may attract our attention; we may be surprised how big the difference between
$\tilde{\Pi}_1(u)$ and the performance of the $1/e$-strategy can actually become: the first is twice the second for $u=-1$ (i.e.\ for $t=1/e$ in $[0,1]$-time).
Is it then not surprising that the non-optimality of the $1/e$-strategy did not already follow from simpler comparisons?
No, not easily. First note that we compare here a conditional win probability, given a (granted) record at time $T_1\le 1/e$, with the unconditional win probability of the $1/e$-strategy. Fortunately, this was all that was needed to show that the $1/e$-strategy is not optimal, namely exhibiting situations where it is definitely strictly sub-optimal
to pass over the first arrival, even one arriving at some time $1/e-\epsilon$ with $\epsilon >0$.
This, however, does not a priori say much about the absolute win probability of the $1/e$-strategy! To see this, suppose that for some small $\epsilon>0$ we have $T_1\in[1/e-\epsilon,1/e+\epsilon]$, and that $N$ is not large. The latter is quite probable if $T_1$ is close to $1/e.$ Then $T_1$ will be almost equiprobable in the left or right half of $[1/e-\epsilon,1/e+\epsilon]$ for $\epsilon$ sufficiently small. If it is in the right half, however, the $1/e$-strategy
will accept it all the same, but now with a strictly greater win probability, since $\tilde{\Pi}_1(t)$ is strictly increasing in $t$.
A second reason why
non-optimality is not that evident lies in the interplay of time and the number of arrivals (see (6) and (8)).
If we have (by simpler estimates) no sufficient incentive to accept a record at time $t<1/e$ when it is, say, the $n$th arrival, we have even less incentive if it is the $(n+1)$th arrival. Each additional arrival before time $1/e$ increases the expected number of arrivals thereafter, and in particular the expected number of those arriving after time $1/e.$ This increases the incentive to wait, but then we also have to wait for a record! This may quickly take us beyond
time $1/e.$ If we find a record there, then, as said in the first scenario, so much the better.
These two scenarios exemplify how important it is to have precise answers, and not only estimates, even if one has reasonably good ones.
It is the preceding approach which offered these precise answers, and it made things definite:
In the no-information case, the $1/e$-strategy is {\it not} optimal.
{\bf Acknowledgement}
The authors would like to thank very warmly Professor Philip Ernst for his precious indirect and direct contributions to this article. When Philip organised in 2018 the most memorable conference at Rice University in honour of Larry Shepp, he motivated many of us to look at harder problems. And so this old open problem turned up again and attracted attention. Philip's interest in this problem and his numerous interesting comments on, and discussions of, a former version of this paper, were very encouraging.
\end{document}
We two brothers are one; there is no discord of any kind between us - Abhay Chautala - Yuva Haryana
A meeting of the working committees of the Indian National Lok Dal (INLD) and the BSP was held in the Basai area of Gurugram. The meeting was chaired by Abhay Chautala, and state president Ashok Arora was also present with him on the occasion. During the meeting, answering a question, Abhay said that Ajay Chautala is his brother and that there is no discord of any kind between the two brothers. The two brothers are one and will remain one.
There is no distance between the two; they met before and still meet daily. Some people are spreading rumours for no reason. As for the allegation Dushyant Chautala made in Chandigarh that targeted action had been taken against him, Abhay replied that the committee's decision is binding on all, because the committee to which Om Prakash Chautala entrusted the inquiry took its decision only after a full investigation. He also said there is no discord of any kind within the party.
Addressing the workers at the Gurugram meeting, Abhay Chautala attacked the BJP, saying the BJP has done nothing but fool people. Of the promises the BJP made, not one has been fulfilled. The promise the BJP made to farmers to implement the Swaminathan Committee recommendations has not been kept.
As for the BJP's repeated talk of giving employment to the youth, no transparency has been brought to it; rather, the youth have been deprived of employment. Every section of society is troubled by the BJP's way of working. The BJP and Congress are working together to break the INLD, but the INLD worker is so strong that no harm can come to the party. Abhay also said that the elephant and the spectacles (the election symbols of the BSP and INLD) will together form the government in Haryana this time. Congress and the BJP have tried many times to weaken the INLD, but they will not succeed.
To win the municipal corporation elections, the BJP has grossly misused government machinery: INLD leader Gopichand Gahlot
And this embryo is called a morula (Morula); within four to five days, a hollow forms inside this morula ball.
"تٍر تہِ آسہِ ہے اپاری یوان۔" نوجوانن ووٚن لۅتی کتھ ژٹتھ۔
\begin{document}
\title{Comments on toric varieties}
\author{Howard~M Thompson \thanks{hmthomps@umich.edu}}
\maketitle
\begin{abstract}
Here are few notes on not necessarily normal toric varieties and resolution by toric blow-up. These notes are
independent of, but in the same spirit as the earlier preprint \cite{hT03a}. That is, they focus on the fact that toric
varieties are locally given by monoid algebras.
\end{abstract}
\section{Not necessarily normal toric varieties}
Fix a base field $k$. We start by recalling the definition of a fan. Let $N\cong\mathbb{Z}^d$ be a lattice, that is, a finitely generated free Abelian group. A (strongly convex rational polyhedral) cone, $\sigma$, in $N_\mathbb{R}=N\bigotimes_\mathbb{Z}\mathbb{R}$ is a set consisting of all nonnegative linear combinations of some fixed finite set of vectors in the lattice,
\[
\sigma=\mathbb{R}_{\geq0}v_1+\cdots+\mathbb{R}_{\geq0}v_r,\qquad v_1,\ldots,v_r\in N
\]
that contains no line. Here we identify $N$ with its image, $\{n\otimes1\mid n\in N\}$, in $N_\mathbb{R}$.
Let $M=\Hom(N,\mathbb{Z})$ be the dual lattice to $N$, identify $M$ with its image in $M_\mathbb{R}$, identify $M_\mathbb{R}$ with the dual space to $N_\mathbb{R}$, and let $\langle,\rangle$ be the dual pairing.
We say a $(d-1)$-dimensional subspace, $H$, of $N_\mathbb{R}$ is a supporting hyperplane of $\sigma$ if there exists a vector $u\in M_\mathbb{R}$ such that $H=\{v\mid\langle u,v\rangle=0\}$ and $\sigma\subset\{v\mid\langle u,v\rangle\geq0\}$. A face of a cone $\sigma$ is a subset of the form $H\cap\sigma$ where $H$ is a supporting hyperplane of $\sigma$.
A fan, $\Delta$, is a finite collection of cones that is closed under taking faces such that the intersection of any two cones in $\Delta$ is a face of each.
To each cone $\sigma$, we associate: (1) a finitely generated submonoid $S_{\sigma}=\sigma\spcheck\cap M$ of $M$, where $\sigma\spcheck=\{u\in M_\mathbb{R}\mid\langle u,v\rangle\geq0,\,\forall v\in\sigma\}$; (2) the finitely generated $k$-algebra $k[S_{\sigma}]$; and, (3) the affine $k$-variety $U_{\sigma}=\Spec k[S_{\sigma}]$. The (affine) toric variety associated to $\sigma$ is $U_{\sigma}$. If $\tau$ is a face of $\sigma$, $k[S_{\tau}]$ is a localization of $k[S_{\sigma}]$ and $U_{\tau}$ is an open affine subset of $U_{\sigma}$. Using these identifications, we associate an algebraic variety to a fan $\Delta$. We call this variety, $X_{\Delta}$, the toric variety associated to $\Delta$.
We say a submonoid $S\subseteq M$ is saturated if $S=\mathbb{R}_{\geq0}S\cap\mathbb{Z} S$. That is, a saturated monoid is the intersection of the lattice it generates with the cone it generates in the real vector space it generates. The monoids $S_{\sigma}$ are saturated. For a finitely generated submonoid $S$ of $M$, we call the monoid $S^{sat}=\mathbb{R}_{\geq0}S\cap\mathbb{Z} S$ the saturation of $S$. In fact, $S^{sat}=\mathbb{Q}_{\geq0}S\cap\mathbb{Z} S=\{u\in M\mid nu\in S\text{ for some positive integer }n\}$. If $S$ is a finitely generated submonoid of $M$, then $S^{sat}$ is a finitely generated saturated submonoid of $M$. Hochster~\cite{mH72} proved the monoid algebra of a finitely generated saturated submonoid of $M$ is integrally closed. Evidently, $k[S^{sat}]$ is integral over $k[S]$. So, $k[S^{sat}]$ is the integral closure of $k[S]$.
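As a toy illustration (our addition, not part of the original note), take $M=\mathbb{Z}$ and the non-saturated submonoid $S=\langle 2,3\rangle=\{0,2,3,4,\ldots\}$; then $\mathbb{Z} S=\mathbb{Z}$ and $\mathbb{R}_{\geq0}S\cap\mathbb{Z} S=\mathbb{N}$, and the characterization via $nu\in S$ recovers $S^{sat}=\mathbb{N}$. A small Python sketch, truncating everything at a finite bound:

```python
def monoid(gens, bound):
    """Elements (up to bound) of the submonoid of N generated by gens."""
    S = {0}
    frontier = [0]
    while frontier:
        s = frontier.pop()
        for g in gens:
            if s + g <= bound and s + g not in S:
                S.add(s + g)
                frontier.append(s + g)
    return S

BOUND = 60
S = monoid([2, 3], BOUND)              # S = <2,3> = {0, 2, 3, 4, ...}

def in_saturation(u, S, bound):
    """u lies in S^sat iff n*u is in S for some positive integer n."""
    return any(n * u in S for n in range(1, bound // max(u, 1) + 1))

assert 1 not in S                      # S itself has a gap at 1
assert in_saturation(1, S, BOUND)      # but 2*1 = 2 is in S, so 1 is in S^sat
assert all(in_saturation(u, S, BOUND) for u in range(0, 20))  # S^sat = N
```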
In order to give a not necessarily normal version of toric varieties, we will abandon this description in terms of fans. More specifically, the duality in the step $\sigma\rightsquigarrow S_{\sigma}$ forces the normality of the scheme $X_{\Delta}$. Our approach will be to characterize the set of monoids $\{S_{\sigma}\mathfrak{m}id\sigma\in\Delta\}$ and then consider collections of monoids that satisfy all the conditions of our characterization except that of saturation.
First, note that such a set $\{S_{\sigma}\mid\sigma\in\Delta\}$ consists of finitely generated saturated submonoids $S\subseteq M$ such that $\mathbb{Z} S=M$. We will also rely on the following two facts. If $\sigma$ and $\tau$ are any two cones in $N$, then $(\sigma\cap\tau)\spcheck=\sigma\spcheck+\tau\spcheck$ (see Ewald~\cite[V.2.2~Lemma]{gE96}). And, if $S_{\tau}$ is a localization of $S_{\sigma}$, then $\tau$ is a face of $\sigma$. To see this, take $u\in M$ such that $S_{\tau}=S_{\sigma}+\mathbb{N} u$. I claim $\tau=\{v\in\sigma\mid\langle u,v\rangle=0\}$.
In light of these facts, here is the promised characterization: Let $N$ be a lattice with dual lattice $M$ and let $\mathcal{S}$ be a finite collection of finitely generated saturated submonoids of $M$ such that each $S\in\mathcal{S}$ generates $M=\mathbb{Z} S$. Then, there exists a fan $\Delta$ in $N$ such that $\mathcal{S}=\{S_{\sigma}\mid\sigma\in\Delta\}$ if and only if $\mathcal{S}$ is closed under localization and the sum of any two elements of $\mathcal{S}$ is a localization of each.
Let $\mathcal{S}$ be a finite collection of finitely generated submonoids of a lattice $M$ that is closed under localization, such that the sum of any two elements of $\mathcal{S}$ is a localization of each and such that each element of $\mathcal{S}$ generates $M$. The (not necessarily normal) toric variety associated to $\mathcal{S}$ is obtained in the same manner as in the normal case. Such a collection yields a (generalized) fan $\mathcal{S}^{top}$ in the sense of Thompson~\cite{hT03a} by gluing the spectra of the monoids using the same prescription. If all the monoids in $\mathcal{S}$ are saturated, then this topological space is just the orbit space of the toric variety equipped with a sheaf of monoids. Henceforth, we will treat such collections as if they were fans, and we will write $X_{\Delta}$ for the (not necessarily normal) toric variety associated to the collection of monoids $\Delta$. We should note that the schemes formed this way really are varieties. The normalization of $X_{\Delta}$ is the normal toric variety associated to the fan in $\Hom(M,\mathbb{Z})$ obtained by taking the collection $\{(\mathbb{R}_{\geq0}S)\spcheck\mid S\in\Delta\}$. In particular, since the normalization of $X_{\Delta}$ is separated, so is $X_{\Delta}$. And, $k[S]$ is a domain for each $S\in\Delta$ since it is a subring of the domain $k[M]\cong k[t_1,t_1^{-1},t_2,t_2^{-1},\ldots,t_d,t_d^{-1}]$.
In addition, the normalization map is a blow-up. Let $S$ be a finitely generated submonoid of $\mathbb{Z} S=M$. Suppose $s',s''\in S$ are such that $s=s'-s''\in S^{sat}$. Write $t^{s'}$ (resp. $t^{s''}$) for the image of $s'$ (resp. $s''$) in $k[S]$ and consider the affine patches of the blow-up $\Bl_{(t^{s'},t^{s''})}(\Spec k[S])$ associated to $t^{s'}$ and $t^{s''}$. Since $ns\in S$ for some positive integer $n$, the patch obtained by making $t^{s''}$ the principal generator is $\Spec k[S+\mathbb{N} s]$. And, $S\subseteq S+\mathbb{N} s\subseteq S^{sat}$. On the other hand, the patch given by making $t^{s'}$ the principal generator of the ideal sheaf is $\Spec k[S+\mathbb{N} s]_{t^{ns}}$, an open subset of the other patch. In particular, $s=ns+(n-1)(-s)\in S+\mathbb{N}(-s)$ so $S+\mathbb{N} s\subseteq S+\mathbb{N}(-s)$; $ns\in S$ becomes invertible when one adjoins $-s$ to $S$; and $-s=(n-1)s+(-ns)$ so $S+\mathbb{N}(-s)\subseteq S+\mathbb{N} s+\mathbb{N}(-ns)$. Since $S^{sat}$ is finitely generated and a composition of blow-ups is a blow-up, the normalization is a blow-up. In fact, fix a finite generating set $s_1,s_2,\ldots,s_m$ for $S^{sat}$ and a choice of pairs of elements $s'_i,s''_i\in S$ such that $s_i=s'_i-s''_i$ for each $i$. Now, let $I=\prod_{i=1}^m(t^{s'_i},t^{s''_i})$. Then, $\Spec k[S^{sat}]\cong\Bl_I(\Spec k[S])$. So, the normalization is also a toric map in the sense of Thompson~\cite{hT03a}.
\section{Toric blow-ups and the toric variety associated to a lattice polyhedron}
For the rest of this paper, all toric varieties are normal unless stated otherwise.
Let $M$ be a lattice and let $\mathcal{P}$ be a full dimensional polyhedron in $M_\mathbb{R}$. That is, $\mathcal{P}$ is an intersection of finitely many half-spaces with nonempty interior. We will say $\mathcal{P}$ is a lattice polyhedron if for all $d$ every $d$-face of $\mathcal{P}$ contains $d+1$ affinely independent lattice points. Let $\mathcal{P}$ be a lattice polyhedron. By replacing $\mathcal{P}$ with some positive integer multiple of $\mathcal{P}$ if necessary, we may assume every face of $\mathcal{P}$ has a lattice point in its relative interior. We will now describe a collection of submonoids of $M$ associated to $\mathcal{P}$ in such a way that when $\mathcal{P}$ is a polytope (that is, when $\mathcal{P}$ is bounded) the toric variety obtained this way is (abstractly) isomorphic to the projective toric variety that is traditionally associated to this polytope. To this end, for each face $F$ of $\mathcal{P}$, fix a lattice point $u_F$ in its relative interior. We associate the monoid $S_F=\mathbb{R}_{\geq0}(\mathcal{P}-u_F)\cap M$, where $\mathcal{P}-u_F=\{u-u_F\mid u\in\mathcal{P}\}$, to the face $F$ and the set $\{S_F\mid F\text{ is a face of }\mathcal{P}\}$ to $\mathcal{P}$. The toric variety obtained this way is quasi-projective. To see this when $\mathcal{P}$ is unbounded, further intersect $\mathcal{P}$ with a half-space in such a way as to obtain a polytope that has facets parallel to each facet of $\mathcal{P}$. I claim the toric variety associated to $\mathcal{P}$ is the open subvariety obtained from the toric variety of the polytope by removing the divisor corresponding to the facet contained in the hyperplane bounding the new half-space.
We will now give a local description of toric blow-up for toric varieties. Let $S$ be a finitely generated saturated submonoid of the lattice $M=\mathbb{Z} S$ and let $\mathfrak{a}$ be an integrally closed ideal of $S$. That is, $\mathfrak{a}\subseteq S$, $\mathfrak{a}+S=\mathfrak{a}$, and $\mathfrak{a}$ is the intersection of the convex hull of $\mathfrak{a}$ and $M$ in $M_\mathbb{R}$. Let $I\subseteq k[S]$ be the ideal generated by $\{t^s\mid s\in\mathfrak{a}\}$. Since the convex hull of $\mathfrak{a}$ is a lattice polyhedron, we have two ways to associate a toric variety to $\mathfrak{a}$. We could take the toric variety associated to the convex hull or we could take the blow-up $\Bl_I(\Spec k[S])$. These two toric varieties are isomorphic.
Here is a sketch of the proof: We may replace $I$ with a power $I^n$ without changing the blow-up, and we may replace $\mathfrak{a}$ with $n\mathfrak{a}$ without changing the toric variety associated to the convex hull. So, by making such simultaneous replacements with $n$ large enough, we may assume the relative interior of each face of the convex hull contains a lattice point. Fix a generating set for $\mathfrak{a}$ and notice that if $s$ is one of the generators, then the affine patch of the blow-up where $t^s$ is principal is isomorphic to $\Spec k[S_F]$ where $F$ is the unique face of the convex hull such that $s$ is in its relative interior. In other words, this patch is isomorphic to $\Spec k[S_F]$ where $F$ is the smallest face of the convex hull containing $s$.
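For example (again a standard case, stated in the notation above), take $S=\mathbb{N}^2\subseteq M=\mathbb{Z}^2$, so $\Spec k[S]=\Spec k[x,y]$ with $x=t^{(1,0)}$ and $y=t^{(0,1)}$, and let $\mathfrak{a}=\{(a,b)\in\mathbb{N}^2\mid a+b\geq1\}$, so that $I=(x,y)$. The convex hull of $\mathfrak{a}$ is the lattice polyhedron with vertices $(1,0)$ and $(0,1)$. The vertex $(1,0)$ yields the monoid $S_{(1,0)}=\mathbb{N}(1,0)+\mathbb{N}(-1,1)$ and the patch $\Spec k[x,y/x]$, while $(0,1)$ yields $\Spec k[y,x/y]$; gluing these recovers $\Bl_I(\Spec k[x,y])$, the blow-up of the affine plane at the origin.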
In particular, the faces of the convex hull of any integrally closed ideal $\mathfrak{a}\subseteq S$ are in inclusion preserving bijection with the torus invariant pieces of $\Bl_I(\Spec k[S])$ where $I=(t^s)_{s\in\mathfrak{a}}$.
\section{Simplicialization of non-simplicial normal toric varieties}
Let $X=X_{\Delta}$ be a toric variety. We will consider the cokernel of the standard map from the Picard group of $X$ to the (Weil) divisor class group of $X$. Or equivalently, we will consider the cokernel of the standard map from the torus invariant Cartier divisors to the torus invariant Weil divisors:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & \Div_TX \ar[rr] && \bigoplus_{i=1}^r\mathbb{Z}\cdot D_i \ar[rr] && G \ar[r] & 0 }
\]
$G$ is a finitely generated Abelian group. This group is finite if and only if $X$ is simplicial. And, it is trivial if and only if $X$ is smooth. In other words, a toric variety is simplicial if and only if every Weil divisor is $\mathbb{Q}$-Cartier. Furthermore, a toric variety is smooth if and only if every Weil divisor is Cartier.
To see $X$ is simplicial when $G$ is finite, we work locally: Let $S\neq M$ be a finitely generated saturated submonoid of a lattice $M=\mathbb{Z} S$ and let $\mathfrak{p}$ be an $S$-graded height one prime of $k[S]$. Here $\mathfrak{a}=\{s\in S\mid t^s\in\mathfrak{p}\}$ is a prime ideal of $S$. That is, $s\in\mathfrak{a}$ or $s'\in\mathfrak{a}$ whenever $s+s'\in\mathfrak{a}$, and $\mathfrak{a}+S=\mathfrak{a}$. The complement of $\mathfrak{a}$ in $S$ generates a supporting hyperplane $H_{\mathfrak{p}}$ of $\mathbb{R}_{\geq0}S$ such that $F_{\mathfrak{p}}=H_{\mathfrak{p}}\cap\mathbb{R}_{\geq0}S$ is a facet (maximal proper face) of $\mathbb{R}_{\geq0}S$. For any $u\in M$, $\ord_{\mathfrak{p}}(t^u)$ is, up to sign, the lattice distance from $u$ to $H_{\mathfrak{p}}$. A positive integer multiple $mD$ of the divisor $D$ corresponding to $\mathfrak{p}$ corresponds to the $m$th symbolic power of $\mathfrak{a}$, $\mathfrak{a}^{(m)}=\{s\in S\mid\ord_{\mathfrak{p}}(t^s)\geq m\}$. Think of this as the set of lattice points in the cone over $S$ that lie on or above the hyperplane parallel to $H_{\mathfrak{p}}$ at lattice height $m$. If the image of $D$ has finite order $m$ in $G$, then $\mathfrak{a}^{(m)}$ is a principal ideal. In this case, the principal generator of $\mathfrak{a}^{(m)}$ must lie in the facet $F_{\mathfrak{q}}$ for every $S$-graded height one prime $\mathfrak{q}\neq\mathfrak{p}$ of $k[S]$. This is due to the fact that every such facet contains lattice points arbitrarily far away from $H_{\mathfrak{p}}$, and both $s$ and $s'$ must lie in a face $F$ of the cone over $S$ whenever $s+s'\in F$, since the faces of this cone are exactly the complements of the prime ideals of $S$. In particular, if $G$ is finite, the intersection of all but one of the facets of the cone over $S$ contains a nonzero lattice point for every $S\in\Delta$. This forces each of these cones to be simplicial. The converse is a standard fact.
In the special case when $X=\Spec k[S]$, we write $\Cl(S)$ for the cokernel because it is the divisor class group of $S$. It is straightforward to see that if $D$ is a torus invariant Weil divisor whose image in $G$ has finite order, then this order is the least common multiple of the orders of the primes $\mathfrak{p}$ corresponding to $D$ in each $\Cl(S)$ such that the point corresponding to $S$ in the generalized fan (the orbit space) lies on the image of $D$. In particular, if $X_{\sigma}$ is the toric variety associated to a simplicial cone $\sigma$ in $N$, then the order of our group is the multiplicity of $\sigma$. The claim, ``A toric variety is smooth if and only if every Weil divisor is Cartier,'' is an easy consequence of this fact.
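For instance (a standard example), let $S=\{(a,b)\in\mathbb{Z}_{\geq0}^2\mid a+b\text{ is even}\}$, a saturated monoid with $M=\mathbb{Z} S$. Then $k[S]\cong k[x,y,z]/(xz-y^2)$ via $x=t^{(2,0)}$, $y=t^{(1,1)}$, $z=t^{(0,2)}$, the quadric cone. The ruling $D=\{x=y=0\}$ is a torus invariant Weil divisor that is not Cartier, but $2D=\operatorname{div}(x)$ is principal, so $\Cl(S)\cong\mathbb{Z}/2$; accordingly, the corresponding simplicial cone has multiplicity $2$.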
Let $X=\Spec k[S]$ be a non-simplicial toric variety, let $\mathfrak{p}$ be a height one prime whose image in $\Cl(S)$ has infinite order, let $m$ be the least common multiple of the heights of the first lattice points on each one-dimensional face of the cone over $S$ not contained in the facet $F_{\mathfrak{p}}$, and let $\mathfrak{a}$ be as before. In this case, the convex hull of $\mathfrak{a}^{(m)}$ is the intersection of the cone over $S$ with the half-space above the hyperplane parallel to $H_{\mathfrak{p}}$ at lattice height $m$ above $H_{\mathfrak{p}}$. So, there is a one-to-one correspondence between the torus invariant divisors of $X$ and those of the blow-up of the $m$th symbolic power of $\mathfrak{p}$. Therefore, this blow-up $\widetilde{X}\to X$ makes $X$ more simplicial without introducing new torus invariant Weil divisors, because the rank of $G$ goes down. More generally, if $D$ is a torus invariant Weil divisor on a toric variety whose image in $G$ has infinite order, then for each $S$ on $D$ we have a number $m$ as in the affine case. Let $m'$ be the least common multiple of these numbers. When we blow up $m'D$ we get a toric variety that is more simplicial without introducing any new torus invariant Weil divisors. Repeatedly doing this simplicializes $X$ without introducing new invariant Weil divisors. A study of how the geometry of a non-simplicial toric variety is reflected in the finitely many simplicial toric varieties obtained this way might prove interesting.
It is difficult to find the following fact in the literature: If $X$ is a toric variety, then there exists a toric resolution of singularities $\pi:\widetilde{X}\to X$ such that $\pi$ is a projective morphism. I discovered this fact through the considerations above. In hindsight though, this is what happens if one uses only stellar subdivisions in the resolution in the standard description given by fans. Our simplicializations correspond exactly to taking rays in the fan in $N$ that are contained in non-simplicial cones and, one at a time, taking stellar subdivisions along them. The standard way to resolve simplicial toric varieties is by stellar subdivision. We have nothing new to add other than specifying which ideals are being blown up.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
  \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document}
|
math
|
The Oxford English Dictionary (OED) and the online Wiktionary trace it to the Latin source assumere ("accept, to take to oneself, adopt, usurp"), a combination of ad ("to, towards, at") and sumere ("to take").
|
kashmiri
|
Eight-year-old Hans blind murder case: Crime Branch DLF arrests cousin Pawan alias Bhola. - ZeeHaryana
Crime Branch DLF scores another major success in the murder of a cousin.
On the complaint of the deceased's grandfather Ramesh Chand, case No. 180 under Sections 302, 365, and 201 IPC was registered at Tigaon police station on 24.08.2018.
Given the seriousness of the case, Police Commissioner Amitabh Singh Dhillon assigned the task of apprehending the accused to Crime Branch DLF, and a team was constituted as follows:
Police team: Inspector Naveen Kumar; Inspector Rajbir, Tigaon; Sub-Inspector Satyanarayan, Tigaon police station; Sub-Inspector Jameel Singh, Crime Branch DLF; Assistant Sub-Inspector Ashwani; Assistant Sub-Inspector Ashrudin; Constable Aditya; Constable Sandeep (driver).
It may be noted that on the evening of 23.08.2018, CCTV footage from Hans Krishna Medical showed Hans at 4:28 PM, playing in front of his house near his father Rajpal's shop and heading toward Tigaon Mandi.
DLF Crime Branch in-charge Inspector Naveen and his team made every possible effort to reach Hans's killer; keeping all angles of the investigation in view, they questioned a great many people. On the basis of special sources, CCTV footage, and their own investigative judgment, Bhola (the deceased Hans's cousin) came under suspicion. But Bhola was physically disabled, and the family was not cooperating with his questioning.
On Sunday, DLF Crime Branch in-charge Inspector Naveen Kumar called Hans's parents and maternal uncle to the Crime Branch and explained to them that the police needed to question their nephew Bhola in connection with Hans's murder, and asked for their cooperation.
When the DLF Crime Branch team questioned Pawan alias Bhola, he revealed the secret of the murder, and today, 10.09.18, he was arrested in the murder case of his cousin Hans.
During questioning, the accused Pawan alias Bhola stated: "My father died of an illness in 2007. We are three brothers and one sister, and we all live with my uncle Rajpal."
"My uncle Rajpal would drink and abuse us, and did not give us money for expenses, while giving his own children every comfort. Because of this behavior, I harbored hatred toward him."
"When Hans left his father's shop, I followed him. He met me at the mandi gate. I took him along toward the shops in the mandi, where one shop had only its roof slab laid. On the pretext of playing in the empty shop, I took Hans inside and first strangled him, so hard that his eyes bulged out, and then struck the back of his head with a brick. He died from this."
"After that I dumped Hans into the basement below the slab and covered him with sacks lying there. Then I wandered around the mandi. A little later Shivam arrived on a bike, and I rode home on the back of his bike."
Notably, from the very first day the accused went along with his family to search for Hans. He had told no one about what he had done.
Hans's dead body was found two days later, on 25/8/18, in the basement of an empty shop in Tigaon Mandi.
To crack this case, the teams of DLF Crime Branch, Crime Branch Sector 65, and Tigaon police station kept visiting the mandi; in the end, the three teams' hard work paid off.
Today, 10/09/18, the accused Pawan alias Bhola was arrested from his home in Tigaon, Faridabad. The accused will be produced in court tomorrow and police remand will be sought. Details of the arrested accused: 1. Pawan alias Bhola (20 years), son of Satpal, resident of village Tigaon.
|
hindi
|
Since 1977, GlassWorks has been one of the leading glass shower door providers in Glenview, Illinois. We offer a wide range of doors to choose from. Have one of our professional installers fit a new set of glass doors in your bathroom. Below are the various glass shower doors we provide.
Frameless glass shower doors are GlassWorks' most popular kind of glass shower door. Frameless shower enclosures are in demand among our clientele because of their clean, sleek appearance, distinctive feel, and ease of upkeep and cleaning. Our design experts will work with and for you to create a style and layout that makes the most of the look and function of your bathroom. Because of our experience and skill, we can offer design ideas and remarkable finishing touches others may never consider.
GlassWorks' neo-angle glass shower doors add personality while saving space in compact bathroom layouts. Because of the complexity involved, GlassWorks is an industry leader in designing and installing neo-angle glass shower doors for our clients. As with our frameless glass shower doors, we start by reviewing the room in your house. Our design professional will work with you on a style and layout that makes the most of the appearance and function of your entire bathroom. Thanks to our skill, we can offer design ideas and dramatic touches others simply never consider.
GlassWorks' steam enclosures create a day-spa experience in your own home by taking your bathing experience to a whole new level. At GlassWorks, we have developed cutting-edge designs to get the most out of the steam shower experience. As part of our design process, we will use our experience to provide you with the most innovative and professional glass steam shower enclosures.
Splash panels and shower shields offer a unique alternative to typical shower doors and enclosures. These doorless units provide an open, minimalist design for your tub or shower and can be used to free up space while still delivering a dramatic bathing experience. GlassWorks' splash panels and shower shields come in fixed and bi-fold panel styles. Our fixed panel provides one immobile splash guard, while the bi-fold design pairs one fixed panel with a second that folds out of the way to allow easy access to the tap.
GlassWorks' framed glass shower doors add a striking design and a quality of finish that you will appreciate in your home for years to come. We collaborate with architects, artisans, and designers to come up with designs that set our work apart in framed shower enclosure applications.
GlassWorks has transformed conventional sliding glass shower doors by offering innovative all-glass frameless looks and the latest hardware: hydro-slide hardware that virtually eliminates any visible fittings and permits the use of 3/8″-thick glass; Tranquility and Pipeline designs featuring impressive exposed wheels and tracks; and curved sliding glass walls that wrap around the shower enclosure. Let us show you how we can make our sliding doors the most talked-about elements in your home.
Every one of our showers is custom made, and some shower doors are distinctive enough to warrant placement in the GlassWorks Truly Custom category. If you are looking for something genuinely unique, our design experts will make your dream a reality with custom glass shower doors.
|
english
|
Stevens is the longest-serving Republican senator in history and is preparing to run for re-election this November.
|
kashmiri
|
# Rule 1.8.4
## Summary
This test consists of detecting informative object images and thus determining the applicability of the test.
A human check will then be needed to determine whether the detected elements containing text can be replaced by styled text.
## Business description
### Criterion
[1.8](http://references.modernisation.gouv.fr/referentiel-technique-0#crit-1-8)
### Test
[1.8.4](http://references.modernisation.gouv.fr/referentiel-technique-0#test-1-8-4)
### Description
Each informative text image rendered as an object (`object` tag with a `type="image/..."` attribute) must, in the absence of a <a href="http://references.modernisation.gouv.fr/referentiel-technique-0#mMecaRempl">replacement mechanism</a>, be replaced by <a href="http://references.modernisation.gouv.fr/referentiel-technique-0#mTexteStyle">styled text</a> where possible. Is this rule respected (<a href="http://references.modernisation.gouv.fr/referentiel-technique-0#cpCrit1-8" title="Cas particuliers pour le critère 1.8">apart from particular cases</a>)?
### Level
**AA**
## Technical description
### Scope
**Page**
### Decision level
**Semi-decidable**
## Algorithm
### Selection
#### Set1
All the `<object>` tags with a `"type"` attribute that starts with "image/..." not identified as captcha (see Notes about captcha detection) (object[type^=image])
#### Set2
All the elements of **Set1** identified as informative image by marker usage (see Notes for details about detection through marker)
#### Set3
All the elements of **Set1** identified neither as informative image, nor as decorative image by marker usage (see Notes for details about detection through marker)
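As a rough illustration of the **Set1** selection (a sketch only; the actual rule engine relies on the CSS selector `object[type^=image]` and the marker logic described in the Notes), the following Python snippet scans an HTML fragment with the standard library parser, keeping `<object>` tags whose `type` attribute starts with `image/` and applying a crude captcha exclusion:

```python
# Sketch of the Set1 selection: <object> tags with type starting with
# "image/", excluding elements carrying a "captcha" hint.
from html.parser import HTMLParser

class ObjectImageCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.set1 = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag != 'object':
            return
        if not attrs.get('type', '').startswith('image/'):
            return
        # crude captcha exclusion: "captcha" found in any attribute value
        if any('captcha' in (v or '').lower() for v in attrs.values()):
            return
        self.set1.append(attrs.get('data'))

html = '''
<object type="image/png" data="logo.png"></object>
<object type="application/pdf" data="doc.pdf"></object>
<object type="image/svg+xml" data="captcha.svg" id="captcha-widget"></object>
'''
collector = ObjectImageCollector()
collector.feed(html)
print(collector.set1)  # ['logo.png']
```

The second element is filtered out by its MIME type and the third by the captcha exclusion; only the first lands in **Set1**.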
### Process
#### Test1
For each element of **Set2**, raise a MessageA
#### Test2
For each element of **Set3**, raise a MessageB
##### MessageA : Check text styled presence of informative image
- code : **CheckStyledTextPresenceOfInformativeImage**
- status: Pre-Qualified
- parameter : `"data"` attribute, tag name, snippet
- present in source : yes
##### MessageB : Check nature of image and text styled presence
- code : **CheckNatureOfImageAndStyledTextPresence**
- status: Pre-Qualified
- parameter : `"data"` attribute, tag name, snippet
- present in source : yes
### Analysis
#### Not Applicable
The page has no object images (**Set1** is empty)
#### Pre-Qualified
In all other cases
## Notes
### Markers
**Informative images** markers are set through the **INFORMATIVE_IMAGE_MARKER** parameter.
**Decorative images** markers are set through the **DECORATIVE_IMAGE_MARKER** parameter.
The value(s) passed as marker(s) will be checked against the following attributes:
- `class`
- `id`
- `role`
### Captcha detection
An element is identified as a CAPTCHA when the "captcha" occurrence is found:
- on one attribute of the element
- or within the text of the element
- or on one attribute of one parent of the element
- or within the text of one parent of the element
- or on one attribute of a sibling of the element
- or within the text of a sibling of the element
|
code
|
#!/usr/bin/env python3
import os
import sys
os.chdir(os.path.dirname(os.path.abspath(__file__)))
from direct.showbase.ShowBase import ShowBase
from direct.showbase.DirectObject import DirectObject
import panda3d.core as p3d
import blenderpanda
from bamboo.ecs import ECSManager, Entity
from bamboo.inputmapper import InputMapper
from lithium import components
# Load config files
p3d.load_prc_file('config/game.prc')
if os.path.exists('config/user.prc'):
    print("Loading user.prc")
    p3d.load_prc_file('config/user.prc')
else:
    print("Did not find a user config")


class GameState(DirectObject):
    def __init__(self):
        # Setup a space to work with
        base.ecsmanager.space = Entity(None)
        base.ecsmanager.space.add_component(components.NodePathComponent())
        spacenp = base.ecsmanager.space.get_component('NODEPATH').nodepath
        spacenp.reparent_to(base.render)

        # Load assets
        self.level_entity = base.ecsmanager.create_entity()
        self.level = loader.load_model('cathedral.bam')
        level_start = self.level.find('**/PlayerStart')
        self.level.reparent_to(spacenp)
        for phynode in self.level.find_all_matches('**/+BulletBodyNode'):
            if not phynode.is_hidden():
                self.level_entity.add_component(components.PhysicsStaticMeshComponent(phynode.node()))
            else:
                print("Skipping hidden node", phynode)

        self.player = base.template_factory.make_character('character.bam', self.level, level_start.get_pos())

        # Attach camera to player
        playernp = self.player.get_component('NODEPATH').nodepath
        self.camera = base.ecsmanager.create_entity()
        self.camera.add_component(components.Camera3PComponent(base.camera, playernp))

        # Player movement
        self.player_movement = p3d.LVector3(0, 0, 0)

        def update_movement(direction, activate):
            move_delta = p3d.LVector3(0, 0, 0)

            if direction == 'forward':
                move_delta.set_y(1)
            elif direction == 'backward':
                move_delta.set_y(-1)
            elif direction == 'left':
                move_delta.set_x(-1)
            elif direction == 'right':
                move_delta.set_x(1)

            if not activate:
                move_delta *= -1

            self.player_movement += move_delta
        self.accept('move-forward', update_movement, ['forward', True])
        self.accept('move-forward-up', update_movement, ['forward', False])
        self.accept('move-backward', update_movement, ['backward', True])
        self.accept('move-backward-up', update_movement, ['backward', False])
        self.accept('move-left', update_movement, ['left', True])
        self.accept('move-left-up', update_movement, ['left', False])
        self.accept('move-right', update_movement, ['right', True])
        self.accept('move-right-up', update_movement, ['right', False])

        def jump():
            char = self.player.get_component('CHARACTER')
            char.jump = True
        self.accept('jump', jump)

        # Mouse look
        props = p3d.WindowProperties()
        props.set_cursor_hidden(True)
        props.set_mouse_mode(p3d.WindowProperties.M_confined)
        base.win.request_properties(props)

    def update(self, dt):
        # Mouse look
        char = self.player.get_component('CHARACTER')
        cam = self.camera.get_component('CAMERA3P')
        if base.mouseWatcherNode.has_mouse():
            delta_yaw = base.mouseWatcherNode.get_mouse_x() * dt * 2000
            delta_pitch = base.mouseWatcherNode.get_mouse_y() * dt * 2000

            # Prevent camera from jumping if the mouse pointer has moved too far
            max_thresh = 10
            if delta_yaw > max_thresh or delta_yaw < -max_thresh:
                delta_yaw = 0
            if delta_pitch > max_thresh or delta_pitch < -max_thresh:
                delta_pitch = 0

            cam.yaw -= delta_yaw
            cam.pitch -= delta_pitch

            # reset mouse to center
            props = base.win.get_properties()
            base.win.move_pointer(0, int(props.get_x_size() / 2), int(props.get_y_size() / 2))

        # Set the player's movement relative to the camera
        camera = self.camera.get_component('CAMERA3P').camera
        char.movement = base.render.get_relative_vector(camera, self.player_movement)
        char.movement.set_z(0)


class GameApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        blenderpanda.init(self)
        self.accept('escape', sys.exit)
        self.disableMouse()
        self.inputmapper = InputMapper('config/input.conf')

        # Setup ECS
        self.ecsmanager = ECSManager()
        systems = [
            components.CharacterSystem(),
            components.Camera3PSystem(),
            components.PhysicsSystem(),
        ]
        for system in systems:
            self.ecsmanager.add_system(system)
        #systems[-1].set_debug(self.render, True)
        self.template_factory = components.TemplateFactory(self.ecsmanager)

        def run_ecs(task):
            self.ecsmanager.update(globalClock.get_dt())
            return task.cont
        self.taskMgr.add(run_ecs, 'ECS')

        # Setup initial game state
        self.game_state = GameState()

        def run_gamestate(task):
            self.game_state.update(globalClock.get_dt())
            return task.cont
        self.taskMgr.add(run_gamestate, 'GameState')


app = GameApp()
app.run()
|
code
|
Hello, hello, hello!!! Another new week has begun and I am sure we are all looking forward to bigger and better things. As for me, I am looking forward to the hubs or her royal highness, lady mother in law, winning the lottery as I don't play at all. The one time I played and lost my five dollars, I was sad for five whole days, gosh, that cost me a dollar a day of crying, what a waste that was. Anyway, by the time they eventually win, my five dollars that I lost would have gained so much interest in my imaginary savings account that their entire winnings will be mine haha. Shush, this is our little secret my darling readers as they are not privy to this arrangement.
Now that you and I know my plans, let's talk about today's outfit which I wore last week. If you follow me on insta you would have seen it there already but I had to show you the rest of my cool poses in this maxi dress over jeans. This dress was my very first purchase from Sheinside and I must say it is worth the whole $23.67 American or $33.98 Australian. It is kind of a wrap dress, but be sure to wear something underneath as I have done here with the white jeans, otherwise you will be at the mercy of the wind blowing it up and showing your knickers, a Marilyn Monroe moment I am sure we do not need when out and about. I think a short skirt or leggings would work perfectly as well.
Thank you so much for stopping by today. I am grateful that you took time out of your busy schedule to read my blog. Let me know you stopped by, talk to me in the comments box so I can come over and say hi too. Enjoy the rest of your week and see you at your blog space. Stay blessed!!!
I think your tummy is cute! It's just little. The print on this dress is very sophisticated. I'm generally not a fan of dresses over pants but you made it work here. It looks amazing in the wind.
As for unexpected Marilyn Monroe moments, that was one of the first things I realized I needed to be careful about when I started using a mobility scooter for my disability. I'd get going and so would my skirt, right up over my chest!
Dear Elsie, you look amazing in this dress/jeans combination. I love it. Sooo stylish!
I like this combination, something I have not tried yet but now I definitely will. Beautiful photographs!
looove the dress! you look great!
I just love it when women our age show the right amount of skin. I hope that makes sense. Love your dream about someone else winning the lottery. How selfless of you!! I love what you did with the pants and the dress!
I love this look! You look amazing in it, too!
Just followed a link to find your blog. Love, love, love your style. Everything about this outfit is fab. Rock on, baby!
Lovely outfit Elsie, and that printed maxi is gorgeous too. I usually only wear pants under dresses if it's too cold or I don't want to be bothered with tights, lol. You look great! x Happy weekend.
Elsie you look AMAZING!!! I love that dress and how you styled it with white jeans is ingenious. Your hair - I'm in love. Lol @ the 5$. I dont play either except for when we play as a staff at work. Last time we won 17$ lol lol lol.
The design and development of a website entails a great deal of hard work and brainstorming. The content, the design, the coding, and the optimization strategies all go a long way toward making a website receive plenty of hits. These tasks are an integral part of the web page design process. A good website can give an organization a sharp edge and propel it above its competition. Today, a website is arguably the most important strategic arm of an organization and, from a pure business perspective, a profit-generating mechanism.
A competent website engineer has to follow many checklists and operating standards. These cover background information, page layout and design, browser compatibility, navigation, color and graphics, multimedia, content presentation, functionality, accessibility, and much more, as well as usage principles such as equitable use, flexibility in use, ease of use, perceptible information, tolerance for error, low physical effort, and adequate size and space for approach and use.
• Remember that the term accessibility does not refer to the quality of the content, but to how it is delivered. The idea is not to change or “dumb down” what's there, but to make it available to more users.
• Graphic design is, by its nature, a visual entity. When evaluating the look-and-feel of Web applications, it's easy to forget that appearance is not more important than accessibility, nor is it less important. An accessible Web site shouldn't be ugly by default.
• Use a Web-tracking software package to collect traffic data. Determine how many users are leaving a page after one impression – possibly an instance where usability can be improved. It's much easier to champion the cause for accessibility if the statistics indicate a worthy debate.
• When remediating an existing Web site, ensure that enough time in the project life cycle has been devoted to accessibility compliance.
• Meet with the technical team to gauge their familiarity with accessibility laws; designers who mention the term “web standards” are always a safe choice.
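The tracking suggestion above can be prototyped without any particular analytics product. The sketch below is a hypothetical example (the function name and the log format are invented for illustration): it counts, per page, the visitors who viewed that page exactly once, a rough proxy for users leaving after one impression:

```python
from collections import Counter

def single_impression_exits(hits):
    """Given (visitor_id, page) hit records, return a dict mapping each
    page to the number of visitors who viewed it exactly once in total.
    A high count may flag a usability problem worth investigating."""
    per_visitor = Counter(hits)  # (visitor, page) -> number of impressions
    exits = Counter()
    for (visitor, page), views in per_visitor.items():
        if views == 1:
            exits[page] += 1
    return dict(exits)

hits = [
    ("v1", "/pricing"), ("v1", "/pricing"),  # v1 returned: not a one-view exit
    ("v2", "/pricing"),                      # v2 left after a single view
    ("v3", "/home"),
]
print(single_impression_exits(hits))  # {'/pricing': 1, '/home': 1}
```

Real tracking packages aggregate far more signal (sessions, referrers, dwell time), but even a simple count like this is enough to start making the statistics-backed case for usability fixes.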
Also, good SEO is equally essential to optimize page traffic and determine the ranking a site receives. Good SEO services can help your website gain a foothold in a competitive market and can greatly improve your customer conversion rate.
Design a website that publishes wine reviews, like an online wine magazine. This website will be developed in Joomla! and the designer is expected to work together with the developer of the site to slice up the design in accordance with the Joomla! template. Site development in Joomla! is NOT required.
Search Engine Optimisation (SEO) is the process of increasing quality traffic to a web site through search engines by improving its ranking in search engine results. To make your website visible in search engines, provide high-quality content on your pages, especially on your homepage. This will attract visitors and other webmasters to link to your website.
Usability is the common goal of SEO and serious web designers.
Sometimes it's just a matter of ensuring accessibility to search engines and a pleasant page flow and stream of information for the human user... and you don't need to be very creative with that, unless your creative UI design accomplishes optimal usability.
Web design is the process of designing web pages, web sites and web applications for the web, but before creating and uploading a website it is necessary to register a domain name and obtain hosting space on the world wide web.
\begin{document}
\title[Syntax topologies]{The topology of syntax relations of a formal language}
\author{Vladimir Lapshin}
\makeatletter
\newdimen\whitespaceamount
\settowidth\whitespaceamount{~}
\def\hskip-\whitespaceamount,~{\hskip-\whitespaceamount,~}
\makeatother
\begin{abstract}
This article describes a method of constructing a Grothendieck topology, based on a neighbourhood grammar, on the category of syntax diagrams. Syntax diagrams of a formal language are multigraphs whose nodes are labeled by symbols of the language's alphabet. A neighbourhood grammar makes it possible to select the correct syntax diagrams from the set of all syntax diagrams over the given alphabet by assigning to each correct diagram a cover consisting of the grammar's neighbourhoods. Such covers give rise to a Grothendieck topology on the category $\ensuremath{Ext(\catDG)}$ of correct syntax diagrams extended by neighbourhood diagrams. Each object of the category $\ensuremath{Ext(\catDG)}$ may be mapped to the set of meanings (abstract senses) of the corresponding syntax construction. This defines a contravariant functor from the category $\ensuremath{Ext(\catDG)}$ to the category of sets $\ensuremath{\Cat{Sets}}$. The resulting category ${\catS}^{{Ext(\catDG)}^{op}}$ appears to be a convenient setting for reasoning about the relations between the syntax and the semantics of a formal language. The sheaves of sets on the category $\ensuremath{Ext(\catDG)}$ are exactly those objects of the category ${\catS}^{{Ext(\catDG)}^{op}}$ that satisfy the compositionality principle known from semantic analysis.
\end{abstract}
\maketitle
\section{Introduction}
The syntax of a formal language is traditionally described using the notion of a grammar. A grammar defines the laws by which correct syntax constructions are built from atomic entities (symbols). The method described in \cite{Lapshin} allows the syntax of a formal language to be described uniformly regardless of how its texts are represented (linearly or not). The method describes syntax constructions using the notion of a syntax diagram. A syntax diagram is a connected multigraph whose nodes are labeled by symbols of the formal language's alphabet and whose edges may belong to different sorts representing the syntax relations. The multigraph of a syntax diagram may be directed or not; the main restriction on it is connectivity. It is possible to select the correct syntax diagrams from the set of all syntax diagrams over the given alphabet. The formalism of neighbourhood grammars is used to do this. For each syntax diagram $D$ one may define the set of its subdiagrams as the set of pairs $(D',s)$, where $D'$ is a syntax diagram and $s$ is an inclusion mapping of the syntax diagram $D'$ into the syntax diagram $D$. A neighbourhood of an alphabet symbol is a syntax diagram containing a node labeled by this symbol; this node is called the center of the neighbourhood. A neighbourhood grammar is a finite family of neighbourhoods defined for each symbol of the alphabet. A syntax diagram is called correct if, for each of its nodes labeled by some symbol of the alphabet, it includes some neighbourhood of this symbol. Such a neighbourhood must contain all edges adjacent to its center; the set of these edges is called the neighbourhood's star. Thus, in the given neighbourhood grammar there is at least one cover consisting of neighbourhoods for each correct syntax diagram. Such a cover is called a syntax cover. Below, the category $\ensuremath{\Cat{D}}$ of syntax diagrams over a given alphabet is described.
It is also shown how, for a given category of syntax diagrams, a neighbourhood grammar defines the category of correct syntax diagrams and a Grothendieck topology on it.
\section{Category $\ensuremath{\Cat{D}}$ of syntax diagrams}
Define the category $\ensuremath{\Cat{D}}$ of syntax diagrams over a fixed alphabet $A$ and a set of edge sorts $S$ as the category whose objects are syntax diagrams with nodes labeled by symbols of the alphabet $A$ and edges having sorts from the set $S$. The morphisms of the category $\ensuremath{\Cat{D}}$ are inclusion mappings of diagrams into each other. Since composition of inclusion mappings is associative and every diagram has the identity inclusion mapping into itself, $\ensuremath{\Cat{D}}$ is indeed a category. It makes sense to speak of the one-node diagram $a$, which contains no edges and consists of a single node labeled by the given symbol $a$. Such diagrams are the categorical interpretation of the alphabet. There is also the empty diagram, which contains no nodes or edges; the empty diagram is included into every syntax diagram. A terminal object exists only if the alphabet $A$ consists of a single symbol $a$; in that case it is the one-node diagram whose node is labeled by the symbol $a$. Otherwise the role of a terminal object is spread over the one-node diagrams of the alphabet's symbols.
Obviously, it makes little sense to speak of the category of all syntax diagrams over a given alphabet and a given set of edge sorts: such a universe of discourse is too general. It is more convenient to consider the category of syntax diagrams over an alphabet and edge sorts that satisfy some additional conditions on the structure of nodes and edges. For example, if the language of syntax diagrams has a linear representation, it is convenient to consider not all diagrams over the alphabet $A=\{a,b\}$, but only those that represent chains of symbols. If $ababa$ is a chain over the alphabet $A$, it can be represented by the diagram $a \leftarrow b \leftarrow a \leftarrow b \leftarrow a$. The conditions are: each such diagram contains exactly one node having only one edge, an outgoing one; exactly one node having only one edge, an incoming one; and every other node has exactly two edges, one incoming and one outgoing. These conditions define the subcategory of the category $\ensuremath{\Cat{D}}$ containing all objects of $\ensuremath{\Cat{D}}$ that satisfy them. Often such conditions are the only way to define the needed set of syntax diagrams. For example, to define the category of derivation trees of a context-free Chomsky grammar, defining the family of neighbourhood diagrams is not enough; additional conditions must be imposed. Below, $\ensuremath{\Cat{D}}$ will denote the subcategory of the category of syntax diagrams over the given alphabet and edge sorts that satisfies the given conditions on the form of the syntax relations. In this sense, the category $\ensuremath{\Cat{D}}$ is described by two complementary methods: globally, by stating conditions on the form of the diagrams, and locally, by defining the neighbourhoods of the symbols.
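As an illustration, consider the following toy grammar for the chain diagrams just described (it is not taken from \cite{Lapshin}; it is only a plausible instance constructed here). Let the neighbourhood families over $A=\{a,b\}$ be
$$G_a = \{\, a \leftarrow b,\quad b \leftarrow a,\quad b \leftarrow a \leftarrow b \,\}, \qquad G_b = \{\, a \leftarrow b \leftarrow a \,\},$$
where the center of each neighbourhood in $G_a$ is its $a$-node, the center of the neighbourhood in $G_b$ is its $b$-node, and each neighbourhood contains the whole star of its center. The diagram $a \leftarrow b \leftarrow a \leftarrow b \leftarrow a$ is then correct: the leftmost node is covered by $a \leftarrow b$, the rightmost node by $b \leftarrow a$, the middle $a$-node by $b \leftarrow a \leftarrow b$, and each $b$-node by $a \leftarrow b \leftarrow a$. The diagram $a \leftarrow a$ is not correct, since no neighbourhood of $a$ contains an $a$-labeled node adjacent to its center.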
\section{Category $\ensuremath{\Cat{D}}G$ of correct syntax diagrams}
As already mentioned above, the described formalism defines the syntax of a language in two ways:
\begin{enumerate}
\item Globally~-- by stating the conditions on the form of the multigraphs of the syntax diagrams of the given language.
\item Locally~-- by listing the family of neighbourhoods for each symbol of the given language's alphabet.
\end{enumerate}
Such a description may be done in several steps. At the first step, the conditions on the form of the syntax diagrams are stated, which describes the category $\ensuremath{\Cat{D}}$ of all syntax diagrams satisfying these conditions. At the second step, the correct syntax diagrams, i.e. the diagrams satisfying the local syntax characteristics of the language, are selected from the diagrams obtained at the first step. A correct syntax diagram is an object of the category $\ensuremath{\Cat{D}}$ for which there exists a syntax cover consisting of neighbourhoods of the given grammar. A syntax cover is a collection of neighbourhoods, one for each node of the diagram; if a node is labeled by the symbol $a$, then the element of the syntax cover assigned to this node must belong to the family $G_a$ of neighbourhoods of the symbol $a$ in the grammar $G$. Thus, a syntax cover may be written as a list of pairs $(v,D_a)$, where $v$ is a node of the diagram and $D_a$ is a neighbourhood of the symbol $a$ labeling the node $v$. The category $\ensuremath{\Cat{D}}G$ of correct syntax diagrams of the category $\ensuremath{\Cat{D}}$ is the category of pairs $(D,P)$, where $D \in Ob(\ensuremath{\Cat{D}})$ is a syntax diagram and $P$ is its syntax cover. The morphisms of the category $\ensuremath{\Cat{D}}G$ are the inclusion mappings of diagrams satisfying the condition that, for each node of the subdiagram, its neighbourhood as an element of the syntax cover must also be an element of the syntax cover of the enveloping diagram; that is, the neighbourhoods must be mapped identically as elements of the syntax covers. Obviously, this is the general situation, but there may be exceptions: a correct syntax diagram may have two different syntax covers (ambiguity). To reason correctly about diagrams of this kind, one must treat a pair (diagram, syntax cover) as a single object, which is what has been done above. Let us turn to the formal definitions.
\begin{definition}\label{syn_cat}
Let $G=\{G_a : a \in A\}$ be a neighbourhood grammar defined on the category $\ensuremath{\Cat{D}}$. Define the category $\ensuremath{\Cat{D}}G$ of correct syntax diagrams, given by the grammar $G$ on the category $\ensuremath{\Cat{D}}$, as follows:
\begin{itemize}
\item Objects of the category $\ensuremath{\Cat{D}}G$ are pairs $(D,P)$, where $D \in Ob(\ensuremath{\Cat{D}})$ is a syntax diagram and $P$ is its syntax cover. $P$ is a list of pairs $(v,D_a)$, where $v$ is a node of the syntax diagram and $D_a$ is a neighbourhood of the symbol $a$ labeling the node $v$; each node $v$ occurs in the list $P$ exactly once.
\item For two correct syntax diagrams $(A,P^A),(B,P^B) \in Ob(\ensuremath{\Cat{D}}G)$ the set $Hom_{\ensuremath{\Cat{D}}G}((A,P^A),(B,P^B))$ consists of all inclusion mappings $s : A \rightarrow B$ such that, for each node $v$ of the diagram $A$, the neighbourhood of the node $s(v)$ in the cover $P^B$ is the neighbourhood of the node $v$ in the cover $P^A$.
\end{itemize}
\end{definition}
It is clear that Definition \ref{syn_cat} really defines a category; the identity morphism in this category is the identity inclusion mapping of a correct syntax diagram into itself.
The category $\ensuremath{\Cat{D}}G$ describes correct syntax diagrams, but below we will need its extension by neighbourhood diagrams. Call this new category $Ext(\ensuremath{\Cat{D}}G)$; we will often also call it the category of correct syntax diagrams.
\begin{definition}\label{ext_syn_cat}
Let $G=\{G_a : a \in A\}$ be a neighbourhood grammar defined on the category $\ensuremath{\Cat{D}}$ and $\ensuremath{\Cat{D}}G$ the category of correct syntax diagrams defined by the grammar $G$ on the category $\ensuremath{\Cat{D}}$. Define the extension $Ext(\ensuremath{\Cat{D}}G)$ of the category $\ensuremath{\Cat{D}}G$ as follows:
\begin{itemize}
\item Objects of the category $Ext(\ensuremath{\Cat{D}}G)$ are the objects of the category $\ensuremath{\Cat{D}}G$ together with all neighbourhood diagrams of the grammar $G$.
\item Let $A$ and $B$ be objects of the category $Ext(\ensuremath{\Cat{D}}G)$. The set of morphisms $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B)$ is defined as follows:
\begin{enumerate}
\item If $A$ and $B$ are correct syntax diagrams, i.e. $A,B \in Ob(\ensuremath{\Cat{D}}G)$, then $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B) = Hom_{\ensuremath{\Cat{D}}G}(A,B)$.
\item If $A$ is some neighbourhood $D_a \in G_a$ and $B \in Ob(\ensuremath{\Cat{D}}G)$, then $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B)$ consists of the inclusion mappings given by the elements $(v,D_a) \in P$, where $P$ is the syntax cover of the diagram $B$.
\item If $A \in Ob(\ensuremath{\Cat{D}}G)$ and $B$ is some neighbourhood $D_a \in G_a$, then $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B) = \emptyset$.
\item If $A$ and $B$ are neighbourhood diagrams and $A$ and $B$ are the same neighbourhood $D_a \in G_a$, then $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B) = \{1_{D_a}\}$, where $1_{D_a}$ is the identity inclusion mapping of the neighbourhood diagram $D_a$ into itself; otherwise $Hom_{Ext(\ensuremath{\Cat{D}}G)}(A,B) = \emptyset$.
\end{enumerate}
\end{itemize}
\end{definition}
\begin{proposition}
$Ext(\ensuremath{\Cat{D}}G)$ is a category.
\end{proposition}
\begin{proof}
Indeed, it is enough to show that the morphisms listed in Definition \ref{ext_syn_cat} satisfy the axioms of a category. By definition, each object of the category $Ext(\ensuremath{\Cat{D}}G)$ has an identity morphism, namely the identity inclusion mapping of the diagram into itself. Let $D_a$ be some neighbourhood diagram; it is an object of the category $Ext(\ensuremath{\Cat{D}}G)$, and the only morphism with codomain $D_a$ is the identity inclusion. If $s$ is some morphism from $D_a$ to a correct diagram $(D,P)$ and $s'$ is a morphism from the diagram $(D,P)$ to a correct diagram $(D',P')$, then clearly the composition $s' \circ s$ exists: it selects the neighbourhood $D_a$ of some node $v$ labeled by the symbol $a$ in the diagram $D'$ in such a way that $D_a$ is an element of the syntax cover $P'$. Associativity clearly holds as well. It is interesting that if some neighbourhood diagram $D_a$ is itself a correct diagram $(D_a,P)$, then the neighbourhood diagram $D_a$ and the correct diagram $(D_a,P)$ are different objects of the category $Ext(\ensuremath{\Cat{D}}G)$.
\end{proof}
\section{Syntax topologies}
A topological space $(X,T)$, defined on a set $X$ by a topology $T$, may be viewed as a way to select the open subsets from the set of all subsets of $X$; each open subset of $X$ has a cover consisting of open subsets of $X$. The situation in the category of syntax diagrams over a given alphabet, satisfying given conditions on the form of diagrams, is similar, except that a neighbourhood grammar, rather than a topology, is used to select the correct diagrams. This suggests the idea of defining a neighbourhood grammar as a topology of a special kind on the category of correct syntax diagrams. We will use the extended category $Ext(\ensuremath{\Cat{D}}G)$ as the base for the definition.
Recall that a sieve on an object $A$ of a category $\ensuremath{\Cat{C}}$ is a family of morphisms $S=\{f : Cod(f)=A\}$ satisfying the following condition: if $f \in S$ and $h : B \rightarrow Dom(f)$, then $fh \in S$. In the category $Ext(\ensuremath{\Cat{D}}G)$, each morphism into some object $D$ determines either a correct subdiagram of the object $D$ or a neighbourhood diagram as an element of its syntax cover. Therefore, a sieve in the category $Ext(\ensuremath{\Cat{D}}G)$ is just a set of correct and neighbourhood subdiagrams of the given diagram, closed under taking subdiagrams of each of its objects. For example, a sieve may consist of some subdiagram of the given diagram together with all possible correct and neighbourhood subdiagrams of this subdiagram; these are also subdiagrams of the given diagram, so the set is closed and, clearly, is a sieve.
A Grothendieck topology may be defined on any small category. Recall the definition (\cite{Johnston}, def. 0.32).
\begin{definition}
A Grothendieck topology on a small category $\ensuremath{\Cat{C}}$ is a function $J$ which maps each object $A$ of the category $\ensuremath{\Cat{C}}$ to a family $J(A)$ of sieves on this object and satisfies the following conditions:
\begin{enumerate}
\item Maximal sieve $h_A=\{f : Cod(f)=A\}$ belongs to $J(A)$.
\item (Stability axiom) If $S \in J(A)$ and $h : B \rightarrow A$, then sieve $h^*(S)=\{f: Cod(f)=B, hf \in S\}$ belongs to $J(B)$.
\item (Transitivity axiom) If $S \in J(A)$ and $R$~-- any sieve on $A$, such that $h^*(R) \in J(B)$ for all $B \stackrel{h}{\rightarrow} A \in S$, then $R \in J(A)$.
\end{enumerate}
A small category equipped with a Grothendieck topology $J$ is called a site.
\end{definition}
The sieves from the families $J(A)$ are called $J$-covers. If a category has fibered products, a Grothendieck topology is usually defined by means of a so-called basis: a family of covering families that gives rise to the Grothendieck topology. In our case it is also possible to define fibered products of objects of the category $Ext(\ensuremath{\Cat{D}}G)$, but this does not make much sense in the linguistic interpretation. Therefore, the Grothendieck topology on the category $Ext(\ensuremath{\Cat{D}}G)$ will be defined using another notion of basis, suitable for categories that do not have fibered products. Such a basis is defined in \cite{MacLane92}, p. 156, ex. 3.
\begin{definition}\label{top_base}
Let $\textbf{C}$ be a small category. A basis of a Grothendieck topology on the category $\textbf{C}$ is a function $K$ which maps each object $A$ of the category $\textbf{C}$ to a set of families of morphisms with codomain $A$ (covering $K$-families) and satisfies the following conditions:
\begin{enumerate}
\item If $f : B \rightarrow A$~-- isomorphism, then family $\{f : B \rightarrow A\}$ belongs to $K(A)$.
\item (Stability axiom) If $\{f_i : A_i \rightarrow A \: |\: i \in I\} \in K(A)$, then for each morphism $g : B \rightarrow A$ there exists a covering $K$-family $\{h_j : B_j \rightarrow B \: |\: j \in I'\} \in K(B)$ such that each composite $g \circ h_j$ factors through some $f_i$.
\item (Transitivity axiom) If $\{f_i : A_i \rightarrow A \: |\: i \in I\} \in K(A)$ and if for each $i \in I$ there exists family $\{g_{ij} : B_{ij} \rightarrow A_i \: |\: j \in I_i\} \in K(A_i)$, then $\{f_i \circ g_{ij} : B_{ij} \rightarrow A \: |\: i \in I, j \in I_i\} \in K(A)$.
\end{enumerate}
\end{definition}
Now define the base of Grothendieck's topology on category $Ext(\ensuremath{\Cat{D}}G)$.
\begin{definition}\label{syn_base}
Let $\ensuremath{\Cat{D}}=\{A,S,C\}$ be a category of syntax diagrams, $G=\{G_a : a \in A\}$ a neighbourhood grammar and $Ext(\ensuremath{\Cat{D}}G)$ the category of correct syntax diagrams. Define the basis of a Grothendieck topology as the function $K_G$ which maps each object $D$ of the category $Ext(\ensuremath{\Cat{D}}G)$ to a set of families of morphisms satisfying the following conditions:
\begin{enumerate}
\item Every family $\{f\}$, where $f$ is an isomorphism with codomain $D$, belongs to $K_G(D)$.
\item If $D$ is a correct diagram $(D,P)$, then the family of morphisms $\{f_v : D_v \rightarrow D\}$, where $D_v \in P$ for each node $v$ of the diagram $D$, belongs to $K_G(D)$.
\end{enumerate}
\end{definition}
\begin{proposition}
Function $K_G$, defined in \ref{syn_base}, is the base of some Grothendieck's topology on category $Ext(\ensuremath{\Cat{D}}G)$.
\end{proposition}
\begin{proof}
The first axiom of the definition of a basis clearly holds for every object of the category $Ext(\ensuremath{\Cat{D}}G)$. Let us show that the two other axioms hold as well.
{\it Stability axiom.} Let $D$ be an object of the category $Ext(\ensuremath{\Cat{D}}G)$ which has a nontrivial covering family in $K_G(D)$ (i.e. a syntax cover). Clearly, $D$ is then a correct syntax diagram $(D,P)$, and the family of inclusion mappings of the syntax cover $P$ into the diagram $D$ is an element of $K_G(D)$. If $s : D' \rightarrow D$ is an inclusion mapping of some diagram $D' \in Ob(Ext(\ensuremath{\Cat{D}}G))$ into the diagram $D$, then there exists a syntax cover $P'$ of the diagram $D'$ which defines the object (the pair) $D'$ of the category $Ext(\ensuremath{\Cat{D}}G)$. The inclusion mappings of the neighbourhood diagrams of the syntax cover $P'$ form a family in $K_G(D')$, and the compositions of these mappings with the mapping $s$ give exactly the needed elements of the covering family in $K_G(D)$.
{\it Transitivity axiom.} If $K_G(D)$ contains a covering family of an object $D$ of the category $Ext(\ensuremath{\Cat{D}}G)$ given by a syntax cover, then on each element of this cover (a neighbourhood diagram) the only possible covering family is the trivial one. The family of compositions of the elements of these trivial covers (the identity mappings of the neighbourhood diagrams) with the inclusion mappings of the neighbourhood diagrams into the object $D$ is the same covering family, i.e. an element of $K_G(D)$.
\end{proof}
The function $K_G$ may be transformed into a Grothendieck topology $J$ on the category $Ext(\ensuremath{\Cat{D}}G)$ in the standard way: it is enough to close the covering families of $K_G(D)$, for each object $D \in Ext(\ensuremath{\Cat{D}}G)$, under composition with all possible inclusion mappings. Each trivial family becomes the maximal sieve on the object $D$, and each syntax cover stays the same.
\begin{definition}\label{syn_top}
Let $\ensuremath{\Cat{D}}=\{A,S,C\}$ be a category of syntax diagrams, $G=\{G_a : a \in A\}$ a neighbourhood grammar and $Ext(\ensuremath{\Cat{D}}G)$ the category of correct syntax diagrams over the category $\ensuremath{\Cat{D}}$. The syntax topology $J_G$ based on the neighbourhood grammar $G$ is the Grothendieck topology defined on the category $Ext(\ensuremath{\Cat{D}}G)$ in the following way:
\begin{itemize}
\item For each object $D$ of the category $Ext(\ensuremath{\Cat{D}}G)$, $J_G(D)$ contains the maximal sieve on the object $D$.
\item If $(D,P) \in Ext(\ensuremath{\Cat{D}}G)$ is a correct diagram, then the sieve generated by the morphisms of the elements of the given syntax cover $P$ belongs to $J_G((D,P))$.
\end{itemize}
\end{definition}
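Concretely, for a correct diagram $(D,P)$ with nodes $v_1,\ldots,v_n$ the syntax topology thus provides exactly two covering sieves: the maximal sieve on $(D,P)$ and the sieve generated by the inclusion mappings $f_{v_i} : D_{v_i} \rightarrow (D,P)$ of the elements of the syntax cover $P$. Note that, by Definition \ref{ext_syn_cat}, the only morphism with codomain a neighbourhood diagram is its identity, so the sieve generated by the syntax cover consists precisely of the mappings $f_{v_i}$ themselves.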
\section{Category ${\catS}^{{Ext(\catDG)}^{op}}$ and sheaves defined by syntax topologies}
Each object of the category $Ext(\ensuremath{\Cat{D}}G)$ may be mapped to a set of its senses (abstract meanings). The abstraction is that one is not interested in what the concrete elements of such a set are, only in the fact that these meanings exist. Even if a syntax diagram has no practical sense, a set can be assigned to it, namely the empty set. A set of meanings may contain a potentially infinite number of elements. It therefore makes sense to use objects of the category $\ensuremath{\Cat{Sets}}$ as the images of this mapping.
The mapping of each object of the category $Ext(\ensuremath{\Cat{D}}G)$ to some set of its meanings gives rise to a contravariant functor (call it $F$) from the category $Ext(\ensuremath{\Cat{D}}G)$ to the category of sets $\ensuremath{\Cat{Sets}}$. Indeed, the mapping $F$ is defined on each object of the category $Ext(\ensuremath{\Cat{D}}G)$. For each morphism $s : D' \rightarrow D$ of objects $D',D \in Ob(Ext(\ensuremath{\Cat{D}}G))$ there is a map $F(s) : F(D) \rightarrow F(D')$ of the corresponding sets of senses, which maps each meaning $m \in F(D)$ of the diagram $D$ to the meaning $m' \in F(D')$ of the subdiagram $D'$, namely the sense that the subdiagram $D'$ derives from the diagram $D$ and the meaning $m$. If $D \in Ob(Ext(\ensuremath{\Cat{D}}G))$ and $1_D$ is the identity morphism, then $F(1_D)$ is clearly the identity map $1_{F(D)}$ on the set $F(D)$. It is also not hard to see that $F$ reverses composition. Thus $F$ is a functor $F : {Ext(\ensuremath{\Cat{D}}G)}^{op} \rightarrow \ensuremath{\Cat{Sets}}$. A contravariant functor from any category to the category of sets is called a presheaf of sets, so $F$ is a presheaf of sets on the category $Ext(\ensuremath{\Cat{D}}G)$.
Each presheaf of sets on the category $Ext(\ensuremath{\Cat{D}}G)$ may be viewed as a language. So, define the category of languages defined by the given grammar $G$ as the category ${\catS}^{{Ext(\catDG)}^{op}}$ of contravariant functors from $Ext(\ensuremath{\Cat{D}}G)$ to the category of sets $\ensuremath{\Cat{Sets}}$. The objects of this category are the presheaves $F : {Ext(\ensuremath{\Cat{D}}G)}^{op} \rightarrow \ensuremath{\Cat{Sets}}$ (i.e. the languages defined by the grammar) and the morphisms are natural transformations of the languages. Let us look at the properties of this category.
As is well known (\cite{MacLane92}, chapter 1), the category ${\catS}^{{\catC}^{op}}$ of presheaves of sets on any locally small category $\ensuremath{\Cat{C}}$ (in particular, the category ${\catS}^{{Ext(\catDG)}^{op}}$) is a topos, and so:
\begin{itemize}
\item It is finitely complete and finitely cocomplete.
\item It has an exponential of any two presheaves.
\item It has the subobject classifier $1 \stackrel{true}{\rightarrow} \Omega$.
\end{itemize}
Consider what the subobject classifier $1 \stackrel{true}{\rightarrow} \Omega$ is in the category ${\catS}^{{Ext(\catDG)}^{op}}$. According to \cite{MacLane92}, p. 38, the subobject classifier in a category of presheaves of sets ${\catS}^{{\catC}^{op}}$ is constructed in the following way: each object $A$ of the category $\ensuremath{\Cat{C}}$ is mapped to the set $\Omega(A)=\{S : S \text{ is a sieve on } A\}$ of all sieves on the object $A$. Each morphism $f : A \rightarrow B$ is mapped to the morphism $\Omega(f) : \Omega(B) \rightarrow \Omega(A)$, which maps each sieve $S_B \in \Omega(B)$ on the object $B$ to the sieve $S_A \in \Omega(A)$ on the object $A$ obtained as the inverse image along the morphism $f$, namely the set $\Omega(f)(S_B)=\{h : fh \in S_B\}$. Thus $\Omega(f)(S_B)$ selects the set of subdiagrams of the diagram $A$ which, as subdiagrams of the diagram $B$, belong to $S_B$. The functor $\Omega$ classifies a subfunctor $S$ of a functor $F$ in the following way. Let $m \in F(B)$; there are two possible cases:
\begin{enumerate}
\item $m \in S(B)$.
\item $m \notin S(B)$.
\end{enumerate}
The first case means that the sense $m$, given to the syntax construction $B$ by the functor $F$, is also a sense given by its subfunctor $S$. In this case the natural transformation ${\chi}^{F}_{S} : F(B) \rightarrow \Omega(B)$ maps the element $m$ to the maximal sieve $h_B$ on $B$, which means that the sense given to the syntax construction by the functor $F$ coincides with a sense given by the subfunctor $S$ on all subdiagrams of $B$. In the second case, there is some meaning of the syntax construction $B$ given by the functor $F$ which is not given by the functor $S$. Then ${\chi}^{F}_{S}(B)(m)$ defines some sieve on $B$, which in fact is generated by the maximal subdiagrams $A$ of the diagram $B$ on which the sense derived from the sense $m$ equals some meaning in $S(A)$. So, the role of the subobject classifier $\Omega(B)$ in the category ${\catS}^{{Ext(\catDG)}^{op}}$ is to select the syntax subconstructions of the given diagram $B$ on which the senses given by the functor and the subfunctor coincide.
It is interesting to see what the initial and terminal objects are in the category ${\catS}^{{Ext(\catDG)}^{op}}$. The initial object maps each syntax construction to the empty set of senses, so it can be seen as a purely formal language in which no syntax construction has any meaning. The terminal object maps each syntax construction to a set consisting of exactly one sense; such a functor can be interpreted as an unambiguous language. An unambiguous language may be used to select meanings in other languages.
It remains to determine in which cases the presheaves on the category $Ext(\ensuremath{\Cat{D}}G)$ are sheaves. The sheaves may be interpreted as the languages that satisfy the \textit{compositionality principle}, which in our terms can be formulated in the following way:
\begin{definition}
Each sense of a correct syntax diagram is uniquely defined by the senses of all its syntax subconstructions.
\end{definition}
Recall the definition of a sheaf on a Grothendieck topology (\cite{MacLane92}, p. 122):
\begin{definition}\label{sheaf}
Let $(\ensuremath{\Cat{C}},J)$ be a site. A presheaf $F : {\ensuremath{\Cat{C}}}^{op} \rightarrow \ensuremath{\Cat{Sets}}$ is a sheaf if, for each object $A \in Ob(\ensuremath{\Cat{C}})$ and for each sieve $S \in J(A)$, in the diagram
$$\xymatrix{F(A) \ar[r]^(.35){e} & \prod\limits_{f \in S} F(Dom(f)) \ar@< 3pt>[r]^p \ar@<-3pt>[r]_a & \prod\limits_{f,g} F(Dom(g))}$$
where $Dom(f)=Cod(g)$, the map $e$ is an equalizer of $p$ and $a$. The maps $e$, $p$ and $a$ are defined as follows:
\begin{itemize}
\item $e(x)={\{x \cdot f\}}_f={\{F(f)(x)\}}_f$. That is, to each $x \in F(A)$ one assigns the element of the product $\prod\limits_{f \in S} F(Dom(f))$ consisting of the images $F(f)(x)$.
\item If ${\bf x}={\{x_f\}}_{f \in S}$ is an element of the product $\prod\limits_{f \in S} F(Dom(f))$, then ${p({\bf x})}_{f,g}=x_{fg}$ and ${a({\bf x})}_{f,g}=x_f \cdot g$. That is, the map $p$ is defined via the values of the family at the compositions $fg$, and the map $a$ via the action of the functor $F$, given by the morphism $g$, on the elements $x_f$.
\end{itemize}
\end{definition}
In our interpretation, the product $\prod\limits_{f \in S} F(Dom(f))$ is just a collection of senses, one selected on each subdiagram of $A$, and $e$ maps each sense on the diagram $A$ to the collection of its restrictions to the subdiagrams; this mapping is defined by the presheaf $F$. There is also the condition that the selected collection of senses defined on the subdiagrams must be matching, in the sense that if the meaning $m$ chosen on a subdiagram $D$ restricts to a meaning $k$ on a subdiagram $D'$ of the diagram $D$, then the element of the collection chosen on $D'$ must be exactly $k$. The presheaf $F$ is a sheaf if, for each sieve $S \in J(A)$, each family of senses matching on $S$ arises from a uniquely defined sense on the diagram $A$. As $e$ is an equalizer, each sense on the diagram $A$ is uniquely glued from some matching family on the subdiagrams belonging to each sieve $S \in J(A)$.
In a Grothendieck topology defined by a neighbourhood grammar there are at most two covering sieves on each object: the maximal sieve and, for a correct diagram, its syntax cover. Therefore, for a presheaf to be a sheaf it is enough that every family of senses compatible on the syntax cover of a diagram glues to a uniquely determined sense on that diagram. So, to verify that a given presheaf is a sheaf, it suffices to make sure that every family of meanings on the neighbourhoods glues uniquely to a sense on the diagram. Indeed, if $D$ is a correct syntax diagram and $D'$ is a correct subdiagram of $D$, then every sense on $D'$ is uniquely glued from senses on its neighbourhoods; so, for $F$ to be a sheaf, this sense must be mapped to the meaning on the diagram $D$ that is glued from the meanings on its neighbourhoods. Hence the following holds:
\begin{proposition}\label{syn_sheaf}
Let $\ensuremath{\Cat{D}}=\{A,S,C\}$ be a category of syntax diagrams, $G=\{G_a : a \in A\}$ a neighbourhood grammar, $Ext(\ensuremath{\Cat{D}}G)$ the category of correct syntax diagrams over the category $\ensuremath{\Cat{D}}$, and $J_G$ the syntax topology defined by the grammar $G$. A presheaf of senses $F$ on the category $Ext(\ensuremath{\Cat{D}}G)$ is a sheaf if and only if every sense on a correct syntax diagram $D$ is uniquely determined by each compatible family of senses on the elements of its syntax cover.
\end{proposition}
It is clear that the definition of a sheaf on the syntax topology is exactly a reformalization of the compositionality principle. Moreover, Proposition \ref{syn_sheaf} makes the compositionality principle ``local'', reducing its conditions to syntax covers only.
The subobject classifier $\Omega$ in the category of sheaves of senses on the category $Ext(\ensuremath{\Cat{D}}G)$, as usual, maps each object to the set of sieves on that object which are closed in the given syntax topology. A sieve $S$ on an object $A$ is called closed in a Grothendieck topology $J$ if for each morphism $f$ with $Cod(f)=A$, whenever $f^*(S) \in J(Dom(f))$ we have $f \in S$. That is, if the set of morphisms $g$ whose compositions $fg$ with the morphism $f$ belong to the sieve $S$ forms a covering family in $J$, then the morphism $f$ itself must belong to $S$. In the category $Ext(\ensuremath{\Cat{D}}G)$ a sieve is closed if, together with each subdiagram, it also contains all possible syntax covers of this subdiagram. This is the analogue of a principal sieve in a topology on sets. The classifying map of a subsheaf $P$ of a sheaf $F$ is defined in the obvious way: each element $x \in F(C)$ is mapped to the set of morphisms $\{f \mid Cod(f)=C \text{ and } x \cdot f \in P(Dom(f))\}$. This set is a sieve, and it is a closed one if $P$ is a sheaf.
The category ${\catS}^{{Ext(\catDG)}^{op}}$ is an elementary topos (\cite{MacLane92}, Prop.~4, p.~143). In particular, ${\catS}^{{Ext(\catDG)}^{op}}$ has all finite limits and colimits. For example, the category ${\catS}^{{Ext(\catDG)}^{op}}$ contains both the formal and the ambiguity languages, which are, respectively, the initial and the terminal objects of this category.
\section{Conclusion}
The category ${\catS}^{{Ext(\catDG)}^{op}}$ appears to be a convenient mathematical tool for studying both the syntactic and the semantic properties of languages. The connection established in this article between neighbourhood grammars and Grothendieck topologies makes it possible to apply the methods of topology and category theory to the study of languages. The problems to be studied in the future include the following:
\begin{itemize}
\item Study the cases in which it is possible to define a neighbourhood grammar starting from an arbitrary Grothendieck topology on a site $({\bf C},J)$.
\item Analyse the syntactic complexity of languages based on the geometrical complexity of their syntax diagrams.
\item Study in more detail the relations between the languages defined by a given neighbourhood grammar, as well as the relations between the categories of sheaves of senses defined by different neighbourhood grammars.
\end{itemize}
\end{document}
Brushed stainless steel case with a black NATO strap. Uni-directional rotating bezel. Meteor grey dial with silver-tone hands and dot hour markers. Minute markers around the outer rim. Dial type: analog. Luminescent hands and markers. Date display between the 4 and 5 o'clock positions. Chronograph with three sub-dials displaying 60 seconds, 30 minutes, and 12 hours. Blancpain Calibre F385 automatic movement with about 50 hours of power reserve. Scratch-resistant sapphire crystal. Case size: 43 mm. Case thickness: 14.85 mm. Round case shape. Band width: 23 mm. Water resistant to 30 meters / 100 feet. Functions: chronograph, column wheel, flyback, date, hour, minute, small second. Luxury watch style. Watch label: Swiss Made. Blancpain Fifty Fathoms Chronograph Automatic Mens Watch 5200-1110-NABA.
During the past 12 years, cancer attacked me three times: breast cancer and uterine cancer first, and then liver cancer two years ago. Physical suffering and great fear were part and parcel of my daily life and made me feel that my life would indeed be short.
To what degree could the most advanced examination techniques, operations and chemical treatments of modern medicine control my disease? The side effects were sometimes so unbearable that I did not even want to live anymore. One of my father’s friends, a doctor of traditional Chinese medicine, instructed me on how to combine western medicine with traditional Chinese medicine to get the best results against cancer. With the help of my father and that doctor, I began to take a traditional Chinese anti-cancer medicine called “Tian Xian Liquid”, which was developed by means of modern technology.
Tian Xian Liquid not only relieved me of the side effects caused by western medicine, such as vomiting, difficulty in swallowing, and hair loss; it also helped me gradually recover my strength, which inspired me to hope for a longer life. The doctor in charge and my friends were quite amazed at my robust appearance. Tian Xian Liquid really helped me defeat cancer.
Anyway, I’d like to thank the doctors and nurses who tried their best to save me. It is with their help and support that I am able to see light in the darkness once again and pick up my writing.
Recalling my struggle against cancer, I feel as if I were a blade of grass braving the strong wind, courageously hanging on, never retreating in the face of hardship, and growing robustly in the warm sunshine. Life to me is not only important but also all the more beautiful.
Now, I do not live merely to survive. I would like to encourage all patients to get through their difficulties with good cheer.
Once, Goddess Lakshmi was serving food to Lord Vishnu. Before putting the first morsel into his mouth, Lord Vishnu stopped his hand, got up and left. He returned a little later and ate his meal. Lakshmi then asked the Lord why he had got up and left in the middle of the meal. Lord Vishnu said with great affection: four of my devotees were hungry; I went and fed them.
Lakshmi found this a little strange. To test Vishnu, the next day she shut five ants inside a small box. A little later she served food to the Lord. The Lord ate the meal with great relish. At the end Lakshmi said: today five of your devotees are hungry, and yet you have eaten your meal?
The Lord replied: that is simply not possible; all my devotees have already received their food. Hearing this, Lakshmi smiled and, full of confidence, opened the box of ants and showed it to the Lord. Seeing the box, Lord Vishnu broke into a smile, and Goddess Lakshmi was astonished: the ants shut inside the box had grains of rice in their mouths. Lakshmi asked: how did the rice get into the closed box? Lord, when did you put it there?
Vishnu gave a beautiful answer: Devi, when you bowed your head to ask the ants for forgiveness while shutting them in the box, a grain of rice fell from your tilak into the box, and so the ants received their food.
BCCI rules are being broken in selecting the Indian team's coach and support staff
The committee constituted to select the Indian team's coaching staff and support staff is breaking the BCCI's rules. The CoA showed haste in choosing the Indian team's head coach and the rest of the support staff without discussing several facts. On Friday the CoA met in New Delhi, where the committee headed by Vinod Rai constituted an ad-hoc panel consisting of Kapil Dev, Anshuman Gaekwad and Shantha Rangaswamy, which will select the Indian team's head coach and other support staff.
Applications for the posts of the Indian team's coach and support staff have been issued, with 30 July as the last date. One such committee is the CAC, which last December was given the job of choosing the head coach of the women's cricket team. In a controversial manner, W. V. Raman was chosen as the women's team's head coach. BCCI official Justice D. K. Jain will review this matter.
According to the new BCCI constitution, the CoA has been discharged from the working of the council and does not have the authority to constitute the CAC, since that rests with the BCCI's general body. Only the CAC has the authority to appoint the team's coach and support staff. A BCCI official told Mid-Day in this regard that it has been called an ad-hoc panel, which is just like the CAC; but this amounts to selling old goods in new packaging.
The official also said that, under the new constitution, the CoA did not even allow a BCCI office-bearer to attend the team selection meeting, which is against BCCI rules. The CoA does not have the authority to appoint such a panel, yet it has done so, which shows that it has flouted the BCCI's rules.
I have a companion who is with me.
Age gap in romance - success or failure? | Love Matters
By Sarah, August 9, 01:03
These relationships are not that unusual
If we look closely, we will find examples of such relationships in our own families. Have you ever paid attention to the age difference between your parents, or between your grandparents? It is quite possible that the gap between their ages is much bigger than you would think.
People in such relationships are often viewed with suspicion by their friends, family and society at large. Some people think that such relationships are headed for certain ruin, especially when the woman is older than the man. Priyanka Chopra is 37 years old while Nick Jonas is 26, and rumours of their breakup started flying from their very first date.
Against the common view
A question that often arises in people's minds is whether such relationships can last, and whether the people in them can be happy with each other. Does it matter whether it is the woman or the man who is older? A research team in America took it upon itself to find answers to these and similar questions.
The researchers spoke to about 200 heterosexual women, half of whom were in relationships with an age gap. The number of women whose partner was older than them was roughly equal to the number of women whose partner was younger.
The researchers wanted to know how committed the women were to their partners and how satisfied they were in their relationships. The women were also asked whether their family and friends supported their decision when they chose a life partner much older or much younger than themselves.
The study found that when the woman in such a relationship is older, the relationship shows greater satisfaction and commitment. This finding certainly goes against the prevailing belief, and it is good news for those whose relationships have a large age gap.
Like Romeo and Juliet
Now you may be wondering what the reasons for this could be. Well, the researchers have made that easy for you: in their view, the reason is that there is less jealousy and envy in a relationship when the woman is older.
But when the woman in a relationship is older, one reason for its success may also be the very suspicion and disapproval it attracts. The researchers call this the "Romeo and Juliet effect": despite the disapproval of others, the attachment to one's partner keeps growing stronger.
It is certain that such couples have to face rejection and disapproval. The study also found that women in age-gap relationships receive less support from their family and friends, and even less when the woman is the older one.
Standing by each other at every moment
Although relationships with an age gap face disapproval, in some respects they do better than relationships in which the age gap is small.
What is the message of this study? Simply that if there is a large age gap between you and your partner, it does not mean that your relationship will not last.
Family and friends may put up obstacles, especially when the woman is much older. In such testing times, the two of you should stand by each other.
Reference: Commitment in age gap heterosexual romantic relationships: a test of evolutionary and socio-cultural predictions. Psychology of Women Quarterly, published in 2008.
What is the age gap between you and your partner? Do you have another question about this? Comment below or ask the LM experts on our discussion board. Don't forget to check out our Facebook page.
\begin{document}
\pagestyle{plain} \pagenumbering{arabic} \title{Covering a cubic graph by 5 perfect matchings}
\begin{abstract}
The Berge Conjecture states that every bridgeless cubic graph
has 5 perfect matchings such that each edge is contained in at least
one of them. In this paper, we
show that the Berge Conjecture holds for two classes of cubic graphs: cubic graphs with a circuit missing only one vertex, and bridgeless cubic graphs with a 2-factor consisting of two circuits. The first part of this result implies that the Berge Conjecture holds for hypohamiltonian cubic graphs.
\end{abstract}
\textbf{MSC 2010:} 05C70
\textbf{Keywords:} Berge Conjecture; Fulkerson Conjecture; cubic graph; hypohamiltonian graph
\section{Introduction}
Graphs in this article may contain multiple edges but no loops. A {\em $k$-factor} of a graph $G$ is a spanning $k$-regular subgraph of $G$. The set of edges in a 1-factor of a graph $G$ is called a {\em perfect matching} of $G$. A {\em matching} of a graph $G$ is a set of edges in a 1-regular subgraph of $G$. A {\em perfect matching cover} of a graph
$G$ is a set of perfect matchings of $G$ such that each edge of $G$
is contained in at least one member of it. The {\em order} of a
perfect matching cover is the number of perfect matchings in it.
One of the first theorems in graph theory, Petersen's Theorem from 1891
\cite{Peterson}, states that every bridgeless cubic graph has a
perfect matching. By Tutte's Theorem from 1947 \cite{Tutte}, which states that a graph $G$ has a perfect matching if and only if the number of odd components of $G-X$ is not greater than the size of $X$ for all $X\subseteq V(G)$, we can obtain that every edge in a bridgeless cubic graph $G$ is contained in a perfect matching of $G$. This implies that every bridgeless cubic graph has a
perfect matching cover. What is the minimum number $k$ such that
every bridgeless cubic graph has a perfect matching cover of order
$k$? Berge conjectured this number is 5 (unpublished, see e.g.
\cite{Fouquet,Mazzuoccolo}).
\begin{conj}[Berge Conjecture]\label{Berge} Every bridgeless cubic
graph has a perfect matching cover of order at most $5$.
\end{conj}
The following conjecture is attributed to Berge in \cite{Seymour},
and was first published in a paper by Fulkerson \cite{Fulkerson}.
\begin{conj}[Fulkerson Conjecture]\label{Fulkerson} Every bridgeless cubic graph has six perfect matchings such that each edge belongs to
exactly two of them.
\end{conj}
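Both conjectures can be checked by brute force on the Petersen graph, the smallest bridgeless cubic graph that is not 3-edge-colorable. The following Python sketch (an illustration of ours, assuming the networkx library is available) enumerates its perfect matchings, verifies the Fulkerson double-cover property, and confirms that the Petersen graph needs a perfect matching cover of order exactly 5:

```python
from collections import Counter
from itertools import combinations

import networkx as nx

G = nx.petersen_graph()
edges = list(G.edges())        # 15 edges on 10 vertices

# Enumerate all perfect matchings by brute force: among the C(15,5) = 3003
# five-edge subsets, keep those whose edges cover all 10 vertices.
def perfect_matchings(G):
    n = G.number_of_nodes()
    return [frozenset(sub) for sub in combinations(G.edges(), n // 2)
            if len({v for e in sub for v in e}) == n]

pms = perfect_matchings(G)
assert len(pms) == 6           # the Petersen graph has exactly 6 perfect matchings

# Fulkerson property: together the 6 matchings cover each edge exactly twice.
count = Counter(e for pm in pms for e in pm)
assert all(count[e] == 2 for e in edges)

# Berge property, sharply: any 5 of the 6 matchings cover every edge,
# while no 4 of them do, so the minimum cover order is exactly 5.
assert all(set().union(*five) == set(edges) for five in combinations(pms, 5))
assert all(set().union(*four) != set(edges) for four in combinations(pms, 4))
```

Since these 6 matchings are all the perfect matchings of the graph, the failure of every 4-subset rules out any cover of order 4.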
Mazzuoccolo \cite{Mazzuoccolo} proved that Conjectures \ref{Berge} and \ref{Fulkerson}
are equivalent. The equivalence of these two conjectures does not imply that Conjecture \ref{Fulkerson} holds for a given bridgeless cubic graph satisfying Conjecture \ref{Berge}; it is still an open question whether this holds.
A cubic graph $G$ is called {\em $3$-edge-colorable} if $G$ has three edge-disjoint perfect matchings. It is trivial that Conjectures \ref{Berge} and \ref{Fulkerson} hold for 3-edge-colorable cubic graphs. Non-3-edge-colorable, cyclically 4-edge-connected cubic graphs with girth at least 5 are called {\em snarks}. Conjecture \ref{Fulkerson} has been verified for some families of snarks, such as flower snarks, Goldberg snarks, generalised Blanu\v{s}a snarks, and Loupekine snarks \cite{Fouquet1,Hao,Karam}.
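On small instances, 3-edge-colorability can be tested directly by exhaustive search. The sketch below is a naive backtracking illustration of ours (not an algorithm from the literature cited above, and assuming networkx); it confirms that $K_{3,3}$ admits a proper 3-edge-coloring while the Petersen graph does not:

```python
import networkx as nx

def three_edge_colorable(G):
    """Naive backtracking search for a proper 3-edge-coloring of G."""
    edges = list(G.edges())
    color = {}

    def ok(e, c):
        u, v = e
        # No already-colored edge sharing an endpoint with e may use color c.
        return all(color[f] != c for f in color if u in f or v in f)

    def solve(i):
        if i == len(edges):
            return True
        for c in range(3):
            if ok(edges[i], c):
                color[edges[i]] = c
                if solve(i + 1):
                    return True
                del color[edges[i]]
        return False

    return solve(0)

assert three_edge_colorable(nx.complete_bipartite_graph(3, 3))  # K_{3,3}: yes
assert not three_edge_colorable(nx.petersen_graph())            # smallest snark: no
```

The search space is tiny here (15 edges, 3 colors), so the exhaustive refusal for the Petersen graph terminates almost immediately.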
Besides the above snarks, some families of cubic graphs have been
confirmed to satisfy Conjecture \ref{Berge}. Steffen \cite{Steffen} showed that Conjecture \ref{Berge}
holds for bridgeless cubic graphs which have no nontrivial
3-edge-cuts and have 3 perfect matchings which miss at most 4 edges.
Hou et al. \cite{Hou} proved that every almost Kotzig graph
has a perfect matching cover of order 5. Esperet and Mazzuoccolo \cite{Esperet} showed
that there are infinitely many cubic graphs in which every perfect matching
cover has order at least 5, and that deciding whether a
bridgeless cubic graph has a perfect matching cover of order at most
4 is NP-complete.
In this paper, we show that the Berge Conjecture holds for a cubic graph which has a vertex whose removal results in a hamiltonian graph. This implies that the Berge Conjecture holds for hypohamiltonian cubic graphs, a class of cubic graphs which H\"{a}ggkvist \cite{Hoggkvist} conjectured to satisfy the Fulkerson Conjecture. A graph $G$ is called {\em hypohamiltonian} if $G$ itself is not hamiltonian but the removal of any vertex of $G$ results in a hamiltonian graph. Chen and Fan \cite{Chen} verified the Fulkerson Conjecture for several known classes of hypohamiltonian graphs in the literature. H\"{a}ggkvist's conjecture is still open.
In this paper, we also show that the Berge Conjecture holds for bridgeless cubic graphs with a 2-factor consisting of two circuits. This class of cubic graphs includes the permutation graphs, which in turn include the generalized Petersen graphs. Fouquet and Vanherpe \cite{Fouquet} showed that every permutation graph has a perfect matching cover of order 4. It was proved by Castagna and Prins \cite{Castagna} and by Watkins \cite{Watkins} that all generalized Petersen graphs but the original Petersen graph are 3-edge-colorable.
\section{A technical lemma}
Some notations will be used in this paper. Let $G$ be a graph with vertex-set $V(G)$ and edge-set $E(G)$. For $X\subseteq V(G)$, we denote by $G[X]$ the subgraph of $G$ induced by $X$ and denote by $G-X$ the subgraph of $G$ induced by $V(G)\backslash X$. For $F\subseteq E(G)$, we denote by $G[F]$ the subgraph induced by $F$ and denote by $G-F$ the subgraph of $G$ with vertex-set $V(G)$ and edge-set $E(G)\backslash F$. For $F_{1},F_{2}\subseteq E(G)$, we denote by $F_{1}\bigtriangleup F_{2}$ the set $(F_{1}\backslash F_{2})\cup(F_{2}\backslash F_{1})$. A path $P$ of length at least 1 in $G$ is called a {\em $F_{1}$-$F_{2}$ alternating} path of $G$ if $E(P)\subseteq F_{1}\cup F_{2}$ and each of $E(P)\cap F_{1}$ and $E(P)\cap F_{2}$ is a matching of $G$. For a positive integer $n$, we denote by $[n]$ the set $\{1,2,\dots,n\}$.
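A standard fact implicit in these definitions, and used repeatedly below, is that for two distinct perfect matchings $F_{1}$ and $F_{2}$ of a graph, every component of the subgraph induced by $F_{1}\bigtriangleup F_{2}$ is an even circuit, since along each such circuit the edges alternate between the two matchings. A minimal Python illustration (the choice of $K_{3,3}$ and of the two matchings is ours, assuming networkx):

```python
import networkx as nx

# K_{3,3} with parts {0,1,2} and {3,4,5}: a 3-edge-colorable cubic graph.
G = nx.complete_bipartite_graph(3, 3)
M1 = {frozenset(e) for e in [(0, 3), (1, 4), (2, 5)]}
M2 = {frozenset(e) for e in [(0, 4), (1, 5), (2, 3)]}
assert nx.is_perfect_matching(G, {tuple(e) for e in M1})
assert nx.is_perfect_matching(G, {tuple(e) for e in M2})

# The symmetric difference M1 (triangle) M2 induces disjoint even circuits.
H = nx.Graph(tuple(e) for e in M1 ^ M2)
for comp in nx.connected_components(H):
    assert all(d == 2 for _, d in H.subgraph(comp).degree())  # a circuit
    assert len(comp) % 2 == 0                                 # of even length
```

For these two matchings the symmetric difference is a single hamiltonian 6-circuit of $K_{3,3}$.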
Now we present a technical lemma, which plays a key role in the proof of our main results.
\begin{Lemma}\label{2pm} Let $G$ be a cubic graph which has three
edge-disjoint perfect matchings $M_{1}$, $M_{2}$ and $M_{3}$ such
that both $M_{1}\cup M_{2}$ and $M_{1}\cup M_{3}$ induce hamiltonian
circuits of $G$. Let $F$ be a non-empty subset of $M_{2}$ and $\alpha$ be an edge in $M_{3}$. We have that $G$ has two perfect matchings $M_{4}$ and $M_{5}$ such that
\begin{enumerate}[$(1)$]
\addtolength{\itemsep}{-2ex}
\item either $M_{4}\cap M_{5}\subseteq M_{3}\subseteq M_{4}\cup M_{5}$ or $M_{4}\cap M_{5}\subseteq M_{1}\subseteq M_{4}\cup M_{5}$,
\item $G[M_{4}\cup M_{5}]$ has a circuit $C$ containing $\alpha$ such that $F\cap E(C)\neq\emptyset$ and every circuit different from $C$ in $G[M_{4}\cup M_{5}]$ contains no edges in $F$, and
\item if $M_{1}\subseteq M_{4}\cup M_{5}$, then $G$ has a circuit $C'$ containing $\alpha$ such that $M_{2}\cap E(C)\cap E(C')=\emptyset$, $M_{3}\backslash(M_{4}\cup M_{5})\subseteq M_{3}\backslash E(C')$
and $M_{3}\backslash E(C')$ is a perfect matching of $G-V(C')$.
\end{enumerate}
\end{Lemma}
\begin{proof} We proceed by induction on $|V(G)|$. If $|V(G)|=2$, then $M_{2}$ and $M_{3}$ meet the requirements. So the statement holds for $|V(G)|=2$. Now we suppose $|V(G)|>2$.
Let $C_{1}$ be the circuit containing $\alpha$ in $G[M_{2}\cup M_{3}]$. If $F\subseteq E(C_{1})$, then $M_{2}$ and $M_{3}$ meet the requirements. So we assume $F\backslash E(C_{1})\neq\emptyset$.
Set $E_{1}$:=$(M_{3}\backslash E(C_{1}))\cup(M_{2}\cap E(C_{1}))$. We know that every component of $G-E_{1}$ is an even circuit. Let $C_{2}$ be the circuit containing $\alpha$ in $G-E_{1}$. Let $M_{4}$ and $M_{5}$ be the two edge-disjoint perfect matchings of $G-E_{1}$. We know $M_{4}\cap M_{5}=\emptyset\subseteq M_{1}\subseteq M_{4}\cup M_{5}$, $M_{2}\cap E(C_{2})\cap E(C_{1})=\emptyset$, $M_{3}\backslash(M_{4}\cup M_{5})\subseteq M_{3}\backslash E(C_{1})$ and that $M_{3}\backslash E(C_{1})$ is a perfect matching of $G-V(C_{1})$. If $F\backslash E(C_{1})\subseteq E(C_{2})$, then $M_{4}$ and $M_{5}$ are two perfect matchings of $G$ which meet the requirements. So we assume further $F\backslash(E(C_{1})\cup E(C_{2}))\neq\emptyset$.
Let $C_{3}$ be a circuit in $G-E_{1}$ such that $E(C_{3})\cap(F\backslash(E(C_{1})\cup E(C_{2})))\neq\emptyset$. Set $E_{2}$:=$M_{2}\backslash E(C_{3})$. Let $P_{1,1}$, $P_{1,2}$, $\dots$, $P_{1,t}$ be the (inclusionwise) maximal $M_{1}$-$M_{3}$ alternating paths in $G-E_{2}$ which contain no edges in $C_{3}$. We know $|M_{3}\cap E(P_{1,i})|=|M_{1}\cap E(P_{1,i})|+1$ for each $i\in[t]$. Since $G[M_{1}\cup M_{3}]$ is a hamiltonian circuit of $G$, there is some $s\in[t]$ such that $\alpha\in E(P_{1,s})$. Let $P_{2,1}$, $P_{2,2}$, $\dots$, $P_{2,t}$ be the (inclusionwise) maximal $M_{1}$-$M_{3}$ paths in $C_{3}$. We know $|M_{1}\cap E(P_{2,i})|=|M_{3}\cap E(P_{2,i})|+1$ for each $i\in[t]$. For each $i\in\{1,2\}$ and each $j\in[t]$, let $\beta_{i,j}$ be an edge with the same ends as $P_{i,j}$. Set $M_{6}$:=$\{\beta_{2,j}:j\in[t]\}$, $M_{7}$:=$M_{2}\cap E(C_{3})$ and $M_{8}$:=$\{\beta_{1,j}:j\in[t]\}$. Set $F'$:=$E(C_{3})\cap(F\backslash(E(C_{1})\cup E(C_{2})))$.
We construct a new graph $G'$ with vertex-set $V(G[M_{7}])$ and edge-set $M_{6}\cup M_{7}\cup M_{8}$. From above, we know that $M_{6}\cup M_{7}$ induce a hamiltonian circuit of $G'$. Since $G[M_{1}\cup M_{3}]$ is a hamiltonian circuit of $G$, $M_{6}\cup M_{8}$ induce a hamiltonian circuit of $G'$. As $|V(G')|<|V(G)|$, we know by the induction hypothesis that $G'$ has two perfect matchings $M_{9}$ and $M_{10}$ such that (1) either $M_{9}\cap M_{10}\subseteq M_{8}\subseteq M_{9}\cup M_{10}$ or $M_{9}\cap M_{10}\subseteq M_{6}\subseteq M_{9}\cup M_{10}$, (2) $G'[M_{9}\cup M_{10}]$ has a circuit $C'_{1}$ containing $\beta_{1,s}$ such that $F'\cap E(C'_{1})\neq\emptyset$ and every circuit different from $C'_{1}$ in $G'[M_{9}\cup M_{10}]$ contains no edges in $F'$, and (3) if $M_{6}\subseteq M_{9}\cup M_{10}$, then $G'$ has a circuit $C'_{2}$ containing $\beta_{1,s}$ such that $M_{7}\cap E(C'_{1})\cap E(C'_{2})=\emptyset$, $M_{8}\backslash(M_{9}\cup M_{10})\subseteq M_{8}\backslash E(C'_{2})$ and $M_{8}\backslash E(C'_{2})$ is a perfect matching of $G'-V(C'_{2})$.
Set $E_{3}$:=$\bigcup_{j=1}^{2}(\bigcup_{k\in[t]\ \textrm{s.t.}\ \beta_{j,k}\in M_{9}\bigtriangleup M_{10}}E(P_{j,k}))$. Let $M_{11}$ be $M_{3}$ if $M_{8}\subseteq M_{9}\cup M_{10}$ and be $M_{1}$ if $M_{6}\subseteq M_{9}\cup M_{10}$. Set $M_{12}$:=$E_{3}\bigtriangleup M_{11}$. Noting either $M_{9}\cap M_{10}\subseteq M_{8}\subseteq M_{9}\cup M_{10}$ or $M_{9}\cap M_{10}\subseteq M_{6}\subseteq M_{9}\cup M_{10}$, we have that $M_{12}$ is a perfect matching of $G$ and either $M_{11}\cap M_{12}\subseteq M_{3}\subseteq M_{11}\cup M_{12}$ or $M_{11}\cap M_{12}\subseteq M_{1}\subseteq M_{11}\cup M_{12}$. Let $C_{4}$ be the circuit of $G$ which is obtained from $C'_{1}$ by replacing each edge $\beta_{j,k}$ in $C'_{1}$ by the corresponding path $P_{j,k}$. We can see from the property of $C'_{1}$ that $C_{4}$ is a circuit in $G[M_{11}\cup M_{12}]$ such that $\alpha\in E(C_{4})$, $F\cap E(C_{4})\neq\emptyset$ and every circuit different from $C_{4}$ in $G[M_{11}\cup M_{12}]$ contains no edges in $F$.
Suppose $M_{1}\subseteq M_{11}\cup M_{12}$. We have $M_{6}\subseteq M_{9}\cup M_{10}$. Let $C_{5}$ be the circuit obtained from $C'_{2}$ by replacing each edge $\beta_{j,k}$ in $C'_{2}$ by the corresponding path $P_{j,k}$. As $\beta_{1,s}\in E(C'_{2})$ and $M_{7}\cap E(C'_{1})\cap E(C'_{2})=\emptyset$, we know $\alpha\in E(C_{5})$ and $M_{2}\cap E(C_{4})\cap E(C_{5})=\emptyset$. Since $M_{8}\backslash E(C'_{2})$ is a perfect matching of $G'-V(C'_{2})$, $M_{3}\backslash E(C_{5})$ is a perfect matching of $G-V(C_{5})$. Noting also $M_{9}\cap M_{10}\subseteq M_{6}\subseteq M_{9}\cup M_{10}$ and $M_{8}\backslash(M_{9}\cup M_{10})\subseteq M_{8}\backslash E(C'_{2})$, we have $M_{3}\backslash(M_{11}\cup M_{12})\subseteq M_{3}\backslash E(C_{5})$.
So $M_{11}$, $M_{12}$ are perfect matchings of $G$ which meet the requirements.
\end{proof}
\section{Main results}
In this section, we show that the Berge Conjecture holds for a bridgeless cubic graph which has a circuit missing only one vertex or has a 2-factor consisting of two circuits.
\begin{Lemma}\label{3pm} Let $G$ be a bridgeless cubic graph with a $2$-factor consisting of two odd circuits $C_{1}$ and $C_{2}$. Let $u_{1}u_{2}$ be
an edge in $G$ with $u_{1}\in V(C_{1})$ and $u_{2}\in V(C_{2})$ and let $M$ be the perfect matching of $G$ such that $u_{1}u_{2}\in M$ and $M\backslash\{u_{1}u_{2}\}\subseteq E(C_{1})\cup E(C_{2})$. For $i=1,2$, let $C_{i+2}$ be the circuit containing $u_{i}$ in $G[E(G)\backslash M]$. Suppose $C_{3}\neq C_{4}$ and that $G$ has a circuit $C$ containing $u_{1}$ such that
\begin{enumerate}[$(1)$]
\addtolength{\itemsep}{-2ex}
\item $(E(C_{1})\backslash M)\backslash E(C)$ is a perfect matching of $C_{1}-(V(C)\cap V(C_{1}))$,
\item $\emptyset\neq E(C)\cap E(C_{2})\subseteq E(C_{2})\backslash M$ and $E(C)\cap E(C_{2})\cap E(C_{4})=\emptyset$, and
\item the paths $Q_{1}$, $Q_{2}$, $\dots$, $Q_{s}$ separated by $E(C)\cap E(C_{2})$ in $C$ satisfy that for each $i\in[s]$, $E(Q_{i})\cap(E(C_{1})\backslash M)$ is a perfect matching of $Q_{i}-(V(Q_{i})\cap V(C_{2}))$ if $u_{1}\notin V(Q_{i})$.
\end{enumerate}
Then $G$ has $3$ perfect matchings covering all edges in $(E(C_{1})\cup E(C_{2}))\backslash M$.
\end{Lemma}
\begin{proof} Set $M_{1}$:=$(E(C_{1})\cup E(C_{2}))\backslash M$, $M_{2}$:=$M$ and $M_{3}$:=$E(G)\backslash(E(C_{1})\cup E(C_{2}))$. Let $C_{5}$ be the circuit containing $u_{1}$ in $G[E(C)\bigtriangleup E(C_{2})]$. From the properties (1) and (3) of $C$, we know that $(E(C_{1})\cap M_{1})\backslash E(C_{5})$ is a perfect matching of $C_{1}-(V(C_{5})\cap V(C_{1}))$.
Assume $u_{2}\in V(C_{5})$. Then every component of $G[E(C)\bigtriangleup E(C_{2})]$ is an even circuit. So $E(C)\bigtriangleup E(C_{2})$ can be decomposed into two matchings $N_{1}$ and $N_{2}$ of $G$. For $i=4,5$, set $M_{i}$:=$N_{i-3}\cup((E(C_{1})\cap M_{1})\backslash E(C))$. From the property (1) of $C$, we know that $M_{4}$ and $M_{5}$ are perfect matchings of $G$ and we can see $M_{1}\backslash(M_{4}\cup M_{5})=E(C)\cap E(C_{2})$. So it suffices to show that $E(C)\cap E(C_{2})$ is contained in a perfect matching of $G$. On the other hand, it is easy to see that $G[\{u_{1}u_{2}\}\cup E(C_{1})\cup E(C_{4})]$ has a perfect matching, say $N_{3}$. Noting $E(C)\cap E(C_{2})\cap E(C_{4})=\emptyset$, we have that $N_{3}\cup((E(C_{2})\cap M_{1})\backslash E(C_{4}))$ is a perfect matching of $G$ which contains $E(C)\cap E(C_{2})$.
Next we assume $u_{2}\notin V(C_{5})$. Let $P_{1,1}$, $P_{1,2}$, $\dots$, $P_{1,t}$ be the components of $G[E(C_{5})\cap E(C_{2})]$. We know that for each $i\in[t]$, $P_{1,i}$ is a $M_{1}$-$M_{2}$ alternating path satisfying $|E(P_{1,i})\cap M_{2}|=|E(P_{1,i})\cap M_{1}|+1$. For $i=2,3$, let $P_{i,1}$, $P_{i,2}$, $\dots$, $P_{i,t}$ be the paths in $C_{11-3i}$ which are separated by $P_{1,1}$, $P_{1,2}$, $\dots$, $P_{1,t}$. We may assume $u_{1}\in V(P_{2,1})$ and $u_{2}\in V(P_{3,1})$. We know $P_{2,j}\in\{Q_{1},Q_{2},\dots,Q_{s}\}$ for each $j\in[t]$. For each $i\in\{1,2,3\}$ and each $j\in[t]$, let $\alpha_{i,j}$ be an edge with the same ends as $P_{i,j}$. Set $A_{i}$:=$\{\alpha_{i,j}:j\in[t]\}$ for $i=1,2,3$.
We construct a new graph $G'$ whose vertex-set consists of the ends of edges in $A_{1}$ and edge-set is $A_{1}\cup A_{2}\cup A_{3}$. We know that both $A_{1}\cup A_{2}$ and $A_{1}\cup A_{3}$ induce hamiltonian circuits of $G'$. For $\alpha_{2,1}\in A_{2}$ and $\alpha_{3,1}\in A_{3}$, we have by Lemma \ref{2pm} that $G'$ has two perfect matchings $F_{1}$ and $F_{2}$ such that (1) either $F_{1}\cap F_{2}\subseteq A_{3}\subseteq F_{1}\cup F_{2}$ or $F_{1}\cap F_{2}\subseteq A_{1}\subseteq F_{1}\cup F_{2}$, (2) $G'[F_{1}\cup F_{2}]$ has a circuit $C'_{1}$ containing $\alpha_{3,1}$ and $\alpha_{2,1}$, and (3) if $A_{1}\subseteq F_{1}\cup F_{2}$, then $G'$ has a circuit $C'_{2}$ containing $\alpha_{3,1}$ such that $A_{2}\cap E(C'_{1})\cap E(C'_{2})=\emptyset$, $A_{3}\backslash(F_{1}\cup F_{2})\subseteq A_{3}\backslash E(C'_{2})$ and $A_{3}\backslash E(C'_{2})$ is a perfect matching of $G'-V(C'_{2})$.
Set $E_{1}$:=$\bigcup_{i=1}^{3}(\bigcup_{j\in[t]\ \textrm{s.t.}\ \alpha_{i,j}\in F_{1}\bigtriangleup F_{2}}E(P_{i,j}))$. From above, we can obtain that every component of $G[E_{1}]$ is an even circuit of $G$. Hence $E_{1}$ can be decomposed into two matchings $N_{4}$ and $N_{5}$ of $G$.
Assume $F_{1}\cap F_{2}\subseteq A_{3}\subseteq F_{1}\cup F_{2}$. We have that $A_{3}\cap(F_{1}\bigtriangleup F_{2})$ is a perfect matching of $G'[F_{1}\bigtriangleup F_{2}]$. It follows that $M_{1}\backslash E_{1}$ is a perfect matching of $G-V(G[E_{1}])$. Hence $(M_{1}\backslash E_{1})\cup N_{4}$ and $(M_{1}\backslash E_{1})\cup N_{5}$ are two perfect matchings of $G$ which cover all edges in $M_{1}$.
Next we assume $F_{1}\cap F_{2}\subseteq A_{1}\subseteq F_{1}\cup F_{2}$. Set $E_{2}$:=$(\bigcup_{j\in[t]\ \textrm{s.t.}\ \alpha_{1,j}\in F_{1}\cap F_{2}}E(P_{1,j}))\cup(\bigcup_{j\in[t]\ \textrm{s.t.}\ \alpha_{3,j}\in A_{3}\backslash(F_{1}\cup F_{2})}E(P_{3,j}))$. We know $E_{2}\subseteq M_{1}\cup M_{2}$. For $i=6,7$, set $M_{i}$:=$N_{i-2}\cup(E_{2}\cap M_{2})\cup((M_{1}\cap E(C_{1}))\backslash E_{1})$. We can see that $M_{6}$ and $M_{7}$ are perfect matchings of $G$ and we have $M_{1}\backslash(M_{6}\cup M_{7})=E_{2}\cap M_{1}$.
Now we show that $E_{2}\cap M_{1}$ is contained in a perfect matching of $G$. Let $C_{6}$ be the circuit of $G$ which is obtained from $C'_{2}$ by replacing each edge $\alpha_{i,j}$ in $C'_{2}$ by the corresponding path $P_{i,j}$. Noting $\alpha_{2,1}\in E(C'_{1})$, $\alpha_{3,1}\in E(C'_{2})$ and $A_{2}\cap E(C'_{1})\cap E(C'_{2})=\emptyset$, we have $u_{2}\in V(C_{6})$ and $u_{1}\notin V(C_{6})$. Notice that $A_{3}\backslash(F_{1}\cup F_{2})\subseteq A_{3}\backslash E(C'_{2})$ and $A_{3}\backslash E(C'_{2})$ is a perfect matching of $G'-V(C'_{2})$. It follows that $E_{2}\cap M_{1}\subseteq(M_{1}\cap E(C_{2}))\backslash E(C_{6})$, $(M_{1}\cap E(C_{2}))\backslash E(C_{6})$ is a perfect matching of $C_{2}-(V(C_{6})\cap V(C_{2}))$ and $E(C_{6})\backslash M_{1}$ is the perfect matching of $C_{6}-u_{2}$. If $E(C_{6})\cap E(C_{1})=\emptyset$, then $(M_{2}\backslash E(C_{2}))\cup((M_{1}\cap E(C_{2}))\bigtriangleup E(C_{6}))$ is a perfect matching containing $E_{2}\cap M_{1}$ in $G$. So we assume $E(C_{6})\cap E(C_{1})\neq\emptyset$. Then there is a path $T$ from $u_{2}$ to $V(C_{1})$ in $C_{6}$ such that $|V(T)\cap V(C_{1})|=1$. Let $N_{6}$ be the perfect matching of $C_{1}-(V(T)\cap V(C_{1}))$. We know that $N_{6}\cup((M_{1}\cap E(C_{2}))\bigtriangleup E(T))$ is a perfect matching containing $E_{2}\cap M_{1}$ in $G$.
So the edges in $M_{1}$ can be covered by 3 perfect matchings of $G$.
\end{proof}
\begin{thm}\label{5pm2} Let $G$ be a cubic graph. Suppose that $G$ has a vertex $v$ such that $G-v$ has a hamiltonian circuit. Then $G$ has a perfect matching cover of order $5$.
\end{thm}
\begin{proof} Let $C$ be a hamiltonian circuit in $G-v$. Choose a vertex $u$ in $V(C)$ such that $uv\in E(G)$. Let $N_{1}$ be the perfect matching of $C-u$. Set $N_{2}$:=$E(C)\backslash N_{1}$. Let $C_{1}$ ($C_{2}$) be the circuit containing $u$ ($v$) in $G[E(G)\backslash(N_{1}\cup\{uv\})]$. If $C_{1}=C_{2}$, then every circuit in $G[E(G)\backslash(N_{1}\cup\{uv\})]$ has even length, which implies that $G$ is 3-edge-colorable and the statement holds. So we assume $C_{1}\neq C_{2}$. Set $M_{1}$:=$N_{1}\cup\{uv\}$ and $M_{2}$:=$(E(C_{1})\backslash E(C))\cup(E(C_{2})\cap E(C))\cup(E(G)\backslash(E(C)\cup E(C_{1})\cup E(C_{2})))$. We know that $M_{1}$ and $M_{2}$ are two perfect matchings of $G$.
Let $P_{1}$, $P_{2}$, $\dots$, $P_{t}$ be the paths of length at least 1 in $C$, which are separated by $(E(C_{1})\cup E(C_{2}))\cap E(C)$. For each $i\in[t]$, we know that $P_{i}$ is a $N_{1}$-$N_{2}$ alternating path satisfying $|E(P_{i})\cap N_{1}|=|E(P_{i})\cap N_{2}|+1$. For each $i\in[t]$, let $\alpha_{i}$ be an edge with the same ends as $P_{i}$. Let $G'$ be a new graph with vertex-set $V(C_{1})\cup V(C_{2})$ and edge-set $\{uv\}\cup E(C_{1})\cup E(C_{2})\cup \{\alpha_{i}:i\in[t]\}$. We know that $G'$ is a bridgeless cubic graph and $E(C_{1})\cup E(C_{2})$ induces a 2-factor of $G'$. Let $M'$ be the perfect matching of $G'$ such that $uv\in M'$ and $M'\backslash\{uv\}\subseteq E(C_{1})\cup E(C_{2})$. Let $C_{3}$ ($C_{4}$) be the circuit containing $u$ ($v$) in $G'[E(G')\backslash M']$.
Assume $C_{3}=C_{4}$. This implies that $G-M_{2}$ is a 2-factor of $G$ which contains no odd circuits. Hence $G$ is 3-edge-colorable and the statement holds.
Assume $C_{3}\neq C_{4}$. Noting $E(C)\cap E(C_{i})\neq\emptyset$ for $i=1,2$, we have that $G'[(E(C)\cap E(C_{1}))\cup\{\alpha_{i}:i\in[t]\}]$ contains no circuits, which implies $E(C_{3})\cap E(C_{2})\neq\emptyset$. We can see easily that for the perfect matching $M'$ of $G'$, $C_{3}$ is a circuit meeting the requirements (1)-(3) in Lemma \ref{3pm}. By Lemma \ref{3pm}, $G'$ has 3 perfect matchings $M'_{3}$, $M'_{4}$ and $M'_{5}$ which cover all edges in $(E(C_{1})\cup E(C_{2}))\backslash M'$. We know $\{\alpha_{i}:i\in[t]\}\cap M'_{3}\cap M'_{4}\cap M'_{5}=\emptyset$. For $i=3,4,5$, set $M_{i}$:=$(\bigcup_{j\in[t]\ \textrm{s.t.}\ \alpha_{j}\in M'_{i}}(E(P_{j})\cap N_{1}))$ $\cup$ $(\bigcup_{j\in[t]\ \textrm{s.t.}\ \alpha_{j}\notin M'_{i}}(E(P_{j})\cap N_{2}))$ $\cup$ $(M'_{i}\backslash\{\alpha_{j}:j\in[t]\})$. We have that $M_{1}$, $M_{2}$, $M_{3}$, $M_{4}$ and $M_{5}$ are 5 perfect matchings of $G$ which cover all edges of $G$.
\end{proof}
By Theorem \ref{5pm2}, we obtain immediately that the Berge Conjecture holds for cubic hypohamiltonian graphs.
\begin{cor} If $G$ is a hypohamiltonian cubic graph, then $G$ has a perfect matching cover of order $5$.
\end{cor}
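As a concrete sanity check (not part of the proof), the Petersen graph, the classic cubic hypohamiltonian graph, attains the bound in this corollary: it has exactly 6 perfect matchings, and 5 of them (but no fewer) suffice to cover all 15 edges. The short brute-force verification below, with our own vertex labelling, illustrates this:

```python
from itertools import combinations

# Petersen graph on vertices 0..9: outer 5-cycle, inner pentagram, spokes.
edges = sorted(
    [tuple(sorted((i, (i + 1) % 5))) for i in range(5)]            # outer cycle
    + [tuple(sorted((5 + i, 5 + (i + 2) % 5))) for i in range(5)]  # inner pentagram
    + [(i, 5 + i) for i in range(5)]                               # spokes
)

def is_perfect_matching(m):
    # 5 edges form a perfect matching iff they cover 10 distinct vertices
    return len({v for e in m for v in e}) == 10

pms = [set(m) for m in combinations(edges, 5) if is_perfect_matching(m)]

def cover_order():
    # smallest k such that some k perfect matchings cover every edge
    for k in range(1, len(pms) + 1):
        if any(set().union(*combo) == set(edges)
               for combo in combinations(pms, k)):
            return k

print(len(pms), cover_order())  # prints: 6 5
```

Each edge of the Petersen graph lies in exactly two of its six perfect matchings, so any four of them miss exactly one edge; this is why the cover order is 5 rather than 4.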
\begin{thm}\label{cover} Let $G$ be a bridgeless cubic graph with a $2$-factor consisting of two circuits. Then $G$ has a perfect matching cover of order $5$.
\end{thm}
\begin{proof} We know that $G$ has two vertex-disjoint circuits $C_{1}$ and $C_{2}$ such that $V(G)=V(C_{1})\cup V(C_{2})$. If both $C_{1}$ and $C_{2}$ have even lengths, then $G$ is 3-edge-colorable and the statement holds.
So we assume that both $C_{1}$ and $C_{2}$ have odd lengths. Choose
an edge $u_{1}u_{2}\in E(G)$ with $u_{1}\in V(C_{1})$ and $u_{2}\in
V(C_{2})$. Set $M_{3}$:=$E(G)\backslash(E(C_{1})\cup E(C_{2}))$ and let $M_{2}$ be the perfect matching of
$G$ such that $M_{2}\cap M_{3}=\{u_{1}u_{2}\}$. Set
$M_{1}$:=$(E(C_{1})\cup E(C_{2}))\backslash M_{2}$. For $i=1,2$, let $C_{i+2}$ be the circuit containing $u_{i}$ in $G[E(G)\backslash M_{2}]$. If $C_{3}=C_{4}$, then $G$ is 3-edge-colorable and the statement holds. So we assume further $C_{3}\neq C_{4}$.
Assume $E(C_{3})\cap E(C_{2})\neq\emptyset$. We can see that for the perfect matching $M_{2}$ of $G$, $C_{3}$ meets the requirements (1)-(3) in Lemma \ref{3pm}. By Lemma \ref{3pm}, the edges in $M_{1}$ can be covered by 3 perfect matchings of $G$, which together with $M_{2}$ and $M_{3}$ cover all edges of $G$.
So we assume $E(C_{3})\cap E(C_{2})=\emptyset$. Similarly, we can also assume $E(C_{1})\cap E(C_{4})=\emptyset$.
Since $G$ is bridgeless, we know $|M_{3}|\geq3$. It follows that there is a circuit $C_{5}$ in $G[E(G)\backslash M_{2}]$ such that $E(C_{5})\cap E(C_{1})\neq\emptyset$ and $E(C_{5})\cap E(C_{2})\neq\emptyset$. We know $V(C_{5})\cap V(C_{i})=\emptyset$ for $i=3,4$. Let $Q$ be the (inclusionwise) maximal path containing $u_{1}$ in $C_{1}$ such that $E(Q)\cap E(C_{5})=\emptyset$.
\vskip 2mm
\noindent \textbf{Claim 1.} \emph{$G$ has a perfect matching containing $(M_{1}\cap E(C_{1}))\backslash(E(Q)\cup E(C_{5}))$.}
\vskip 2mm
Set $E_{1}$:=$(M_{1}\cap E(C_{1}))\backslash(E(Q)\cup E(C_{5}))$. Let $u_{3}$ and $u_{4}$ be the ends of $Q$. For $i=1,2$, let $\beta_{i}$ be the edge incident to $u_{i+2}$ in $C_{5}$. For $i=1,2$, let $T_{i}$ be the path from $u_{i+2}$ to $V(C_{2})\cup\{u_{5-i}\}$ in $C_{5}$ such that $\beta_{i}\in E(T_{i})$ and $|V(T_{i})\cap(V(C_{2})\cup\{u_{5-i}\})|=1$. For $i=1,2$, let $u_{i+4}$ be the end of $T_{i}$ which is different from $u_{i+2}$.
Assume $u_{5}\in V(C_{2})$ or $u_{6}\in V(C_{2})$. Without loss of generality, we assume $u_{5}\in V(C_{2})$. Let $T_{3}$ be the path from $u_{1}$ to $u_{3}$ in $Q$. Let $N_{1}$ be the perfect matching of $C_{2}-u_{5}$. Then $((M_{1}\cap E(C_{1}))\bigtriangleup(E(T_{1})\cup E(T_{3})))\cup N_{1}$ is a perfect matching containing $E_{1}$ in $G$.
Assume $u_{5}=u_{4}$. If $\beta_{2}\in E(T_{1})$, then $(M_{2}\backslash E(C_{1}))\cup((M_{1}\cap E(C_{1}))\bigtriangleup(E(Q)\cup E(T_{1})))$ is a perfect matching containing $E_{1}$ in $G$. So we assume $\beta_{2}\notin E(T_{1})$. Noting $E(C_{5})\cap E(C_{2})\neq\emptyset$, we have $u_{6}\in V(C_{2})$. This returns to the case we have discussed in the previous paragraph. Claim 1 is proved.
\vskip 2mm
In the following proof, if $P_{i,j}$ is a path of $G$, then let $\alpha_{i,j}$ be an edge with the same ends as $P_{i,j}$.
\vskip 2mm
\noindent \textbf{Claim 2.} \emph{If $G$ has two circuits $C$ and $C'$ such that}
\begin{enumerate}[(1)]
\addtolength{\itemsep}{-2ex}
\item \emph{$u_{1}\in V(C)\cap V(C')$,}
\item \emph{$\emptyset\neq E(C)\cap E(C_{2})\subseteq E(C_{5})\cap E(C_{2})$, $E(C')\cap E(C_{2})\subseteq E(C_{5})\cap E(C_{2})$ and $E(C)\cap E(C')\cap E(C_{2})=\emptyset$,}
\item \emph{the paths $Q_{1}$, $Q_{2}$, $\dots$, $Q_{q}$ separated by $E(C)\cap E(C_{2})$ in $C$ satisfy that $E(Q)\subseteq E(Q_{1})$ and for each $i\in[q]\backslash\{1\}$, $M_{2}\cap E(Q_{i})$ is a perfect matching of $Q_{i}-(V(Q_{i})\cap V(C_{2}))$,}
\item \emph{$G[V(C_{1})]-(V(C)\cap V(C_{1}))$ has two perfect matchings $N_{2}$ and $N_{3}$ satisfying $E(C_{1})\backslash$ $(E(C)\cup N_{2}\cup N_{3})\subseteq(M_{1}\cap E(C_{1}))\backslash E(C')$, and}
\item \emph{$(M_{1}\cap E(C_{1}))\backslash E(C')$ is a perfect matching of $C_{1}-(V(C')\cap V(C_{1}))$ and $E(C')\backslash M_{1}$ is a perfect matching of $C'-u_{1}$,}
\end{enumerate}
\emph{then $G$ has $5$ perfect matchings which cover all edges of $G$.}
\vskip 2mm
Suppose $G$ has such two circuits $C$ and $C'$. Set $D_{1}$:=$E(C)\cap E(C_{2})$. We know $D_{1}\subseteq M_{1}$. Let $P_{1,1}$, $P_{1,2}$, $\dots$, $P_{1,q}$ be the paths in $C_{2}$ which are separated by $D_{1}$. We may assume $u_{2}\in V(P_{1,1})$. For each $i\in[q]$, let $\gamma_{i}$ be an edge with the same ends as $Q_{i}$. Set $D_{2}$:=$\{\alpha_{1,j}:j\in[q]\}$ and $D_{3}$:=$\{\gamma_{j}:j\in[q]\}$. We construct a new graph $G_{1}$ with vertex-set $V(G[D_{1}])$ and edge-set $D_{1}\cup D_{2}\cup D_{3}$. We know that both $D_{1}\cup D_{2}$ and $D_{1}\cup D_{3}$ induce hamiltonian circuits of $G_{1}$. For $\alpha_{1,1}\in D_{2}$ and $\gamma_{1}\in D_{3}$, we have by Lemma \ref{2pm} that $G_{1}$ has two perfect matchings $F_{1}$ and $F_{2}$ such that $G_{1}[F_{1}\cup F_{2}]$ has a circuit $C'_{1}$ containing $\{\alpha_{1,1},\gamma_{1}\}$ and either $F_{1}\cap F_{2}\subseteq D_{3}\subseteq F_{1}\cup F_{2}$ or $F_{1}\cap F_{2}\subseteq D_{1}\subseteq F_{1}\cup F_{2}$.
Set $E_{2}$:=$(D_{1}\cap(F_{1}\bigtriangleup F_{2}))\cup(\bigcup_{j\in[q]\ \textrm{s.t.}\ \alpha_{1,j}\in F_{1}\bigtriangleup F_{2}}E(P_{1,j}))\cup(\bigcup_{j\in[q]\ \textrm{s.t.}\ \gamma_{j}\in F_{1}\bigtriangleup F_{2}}E(Q_{j}))$. From the properties (2), (3) and (4) of $C$, we can see that $Q_{1}$ has even length and $Q_{i}$ has odd length for each $i\in[q]\backslash\{1\}$. Hence we can obtain that every component of $G[E_{2}]$ is an even circuit of $G$ and $E_{2}$ can be decomposed into two matchings $N_{4}$ and $N_{5}$ of $G$.
Assume $F_{1}\cap F_{2}\subseteq D_{3}\subseteq F_{1}\cup F_{2}$. Set $N_{6}$:=$(\bigcup_{j\in[q]\ \textrm{s.t.}\ \alpha_{1,j}\in E(G_{1})\backslash(F_{1}\cup F_{2})}$ $(E(P_{1,j})\cap M_{1}))$ $\cup$ $(\bigcup_{j\in[q]\ \textrm{s.t.}}$ $_{\gamma_{j}\in F_{1}\cap F_{2}}(E(Q_{j})\backslash M_{2}))$. We can obtain $(M_{1}\cap(E(C)\cup E(C_{2})))\backslash(E_{2}\cup N_{6})\subseteq D_{1}$ and that $N_{6}$ is a perfect matching of $G[E(C)\cup E(C_{2})]-V(G[E_{2}])$.
For $i=4,5$, set $M_{i}$:=$N_{i-2}\cup N_{i}\cup N_{6}$. Then $M_{4}$ and $M_{5}$ are perfect matchings of $G$ and we have $M_{1}\backslash(M_{4}\cup M_{5})\subseteq(E(C_{1})\backslash(E(C)\cup N_{2}\cup N_{3}))\cup D_{1}$.
From the properties (2), (4) and (5) of $C$ and $C'$, we can obtain that $(M_{1}\bigtriangleup(E(C')\cup E(C_{4})))\cup\{u_{1}u_{2}\}$ is a perfect matching containing $(E(C_{1})\backslash(E(C)\cup N_{2}\cup N_{3}))\cup D_{1}$ in $G$. So the edges in $M_{1}$ can be covered by 3 perfect matchings of $G$, which together with $M_{2}$ and $M_{3}$ cover all edges of $G$.
Assume $F_{1}\cap F_{2}\subseteq D_{1}\subseteq F_{1}\cup F_{2}$. Set $N_{7}$:=$(\bigcup_{j\in[q]\ \textrm{s.t.}\ \alpha_{1,j}\in E(G_{1})\backslash(F_{1}\cup F_{2})}(E(P_{1,j})\cap M_{1}))\cup(F_{1}\cap F_{2})\cup(\bigcup_{j\in[q]\ \textrm{s.t.}\ \gamma_{j}\in E(G_{1})\backslash(F_{1}\cup F_{2})}$ $(E(Q_{j})\cap M_{2}))$. We have that $N_{7}$ is a perfect matching of $G[E(C)\cup E(C_{2})]-V(G[E_{2}])$.
For $i=6,7$, set $M_{i}$:=$N_{i-4}\cup N_{i-2}\cup N_{7}$. Then $M_{6}$ and $M_{7}$ are perfect matchings of $G$. We can see $(M_{2}\cap E(C_{1}))\cup(M_{1}\cap E(C_{2}))\subseteq M_{6}\cup M_{7}$. Noting $E(Q)\subseteq E(Q_{1})$ and $\gamma_{1}\in E(C'_{1})$, we have $E(Q)\subseteq E_{2}\subseteq M_{6}\cup M_{7}$. Now we have $E(G)\backslash(M_{3}\cup M_{6}\cup M_{7})\subseteq((M_{1}\cap E(C_{1}))\backslash E(Q))\cup(M_{2}\cap E(C_{2}))$.
Set $M_{8}$:=$((M_{1}\cap E(C_{1}))\bigtriangleup E(C_{3}))\cup(M_{2}\backslash E(C_{1}))$. We know $(E(C_{5})\cap E(C_{1}))\cup(M_{2}\cap E(C_{2}))\subseteq M_{8}$. By Claim 1, $G$ has a perfect matching $M_{9}$ containing $(M_{1}\cap E(C_{1}))\backslash(E(Q)\cup E(C_{5}))$. Now we have $(M_{1}\cap E(C_{1}))\backslash E(Q)\subseteq M_{8}\cup M_{9}$. So $M_{3}$, $M_{6}$, $M_{7}$, $M_{8}$ and $M_{9}$ are 5 perfect matchings of $G$ which cover all edges of $G$. Claim 2 is proved.
\vskip 2mm
Let $C_{6}$ be the circuit containing $u_{1}$ in $G[E(C_{1})\bigtriangleup E(C_{5})]$. Assume that every circuit different from $C_{6}$ in $G[E(C_{1})\bigtriangleup E(C_{5})]$ contains no edges in $C_{2}$. Then $E(C_{6})\cap E(C_{2})\neq\emptyset$. Let $Q'_{1}$, $Q'_{2}$, $\dots$, $Q'_{q'}$ be the paths separated by $E(C_{6})\cap E(C_{2})$ in $C_{6}$ such that $u_{1}\in V(Q'_{1})$. We can easily check that $C_{6}$ and $C_{3}$ meet the requirements (1)-(5) in Claim 2. So we know by Claim 2 that $G$ has 5 perfect matchings which cover all edges of $G$. Next we assume that there is a circuit $C_{7}$ different from $C_{6}$ in $G[E(C_{1})\bigtriangleup E(C_{5})]$ such that $E(C_{7})\cap E(C_{2})\neq\emptyset$.
Let $P_{2,1}$, $P_{2,2}$, $\dots$, $P_{2,p}$ be the components in $G[E(C_{7})\cap E(C_{1})]$. We know that for each $i\in[p]$, $P_{2,i}$ is a $M_{2}$-$M_{1}$ alternating path satisfying $|E(P_{2,i})\cap M_{2}|=|E(P_{2,i})\cap M_{1}|+1$. For $i=3,4$, let $P_{i,1}$, $P_{i,2}$, $\dots$, $P_{i,p}$ be the paths in $C_{25-6i}$ which are separated by $P_{2,1}$, $P_{2,2}$, $\dots$, $P_{2,p}$. We may assume $u_{1}\in V(P_{4,1})$. Set $B_{i}$:=$\{\alpha_{i+1,j}:j\in[p]\}$ for $i=1,2,3$. Now we construct a new graph $G_{2}$ whose vertex-set consists of the ends of edges in $B_{1}$ and edge-set is $B_{1}\cup B_{2}\cup B_{3}$. We know that both $B_{1}\cup B_{2}$ and $B_{1}\cup B_{3}$ induce hamiltonian circuits of $G_{2}$. Set $B'_{2}$:=$\{\alpha_{3,j}\in B_{2}:E(P_{3,j})\cap E(C_{2})\neq\emptyset\}$.
For $B'_{2}\subseteq B_{2}$ and $\alpha_{4,1}\in B_{3}$, we have by Lemma \ref{2pm} that $G_{2}$ has two perfect matchings $F_{3}$ and $F_{4}$ such that (1) either $F_{3}\cap F_{4}\subseteq B_{3}\subseteq F_{3}\cup F_{4}$ or $F_{3}\cap F_{4}\subseteq B_{1}\subseteq F_{3}\cup F_{4}$, (2) $G_{2}[F_{3}\cup F_{4}]$ has a circuit $C'_{2}$ containing $\alpha_{4,1}$ such that $B'_{2}\cap E(C'_{2})\neq\emptyset$ and every circuit different from $C'_{2}$ in $G_{2}[F_{3}\cup F_{4}]$ contains no edges in $B'_{2}$, and (3) if $B_{1}\subseteq F_{3}\cup F_{4}$, then $G_{2}$ has a circuit $C'_{3}$ containing $\alpha_{4,1}$ such that $B_{2}\cap E(C'_{2})\cap E(C'_{3})=\emptyset$, $B_{3}\backslash(F_{3}\cup F_{4})\subseteq B_{3}\backslash E(C'_{3})$ and $B_{3}\backslash E(C'_{3})$ is a perfect matching of $G_{2}-V(C'_{3})$.
Let $C_{8}$ be the circuit of $G$ which is obtained from $C'_{2}$ by replacing each edge $\alpha_{i,j}$ in $C'_{2}$ by the corresponding path $P_{i,j}$. We know $u_{1}\in V(C_{8})$ and $\emptyset\neq E(C_{8})\cap E(C_{2})\subseteq E(C_{5})\cap E(C_{2})\subseteq M_{1}$. Noting $E(C_{5})\cap E(C_{4})=\emptyset$, we have $E(C_{8})\cap E(C_{2})\cap E(C_{4})=\emptyset$.
Assume $F_{3}\cap F_{4}\subseteq B_{3}\subseteq F_{3}\cup F_{4}$. We know that $B_{3}\cap E(C'_{2})$ is a perfect matching of $C'_{2}$. This implies that $(M_{1}\cap E(C_{1}))\backslash E(C_{8})$ is a perfect matching of $C_{1}-(V(C_{8})\cap V(C_{1}))$ and the paths $Q''_{1}$, $Q''_{2}$, $\dots$, $Q''_{s'}$ separated by $E(C_{8})\cap E(C_{2})$ in $C_{8}$ satisfy that for each $i\in[s']$, $M_{1}\cap E(Q''_{i})$ is a perfect matching of $Q''_{i}-(V(Q''_{i})\cap V(C_{2}))$ if $u_{1}\notin V(Q''_{i})$. This means that for the perfect matching $M_{2}$ of $G$, $C_{8}$ meets the requirements (1)-(3) in Lemma \ref{3pm}. By Lemma \ref{3pm}, the edges in $M_{1}$ can be covered by 3 perfect matchings of $G$, which together with $M_{2}$ and $M_{3}$ cover all edges of $G$.
Assume next $F_{3}\cap F_{4}\subseteq B_{1}\subseteq F_{3}\cup F_{4}$. Set $E_{3}$:=$\bigcup_{i=2}^{4}(\bigcup_{j\in[p]\ \textrm{s.t.}\ \alpha_{i,j}\in F_{3}\bigtriangleup F_{4}}E(P_{i,j}))$. We know that $(E(C_{1})\backslash E_{3})\cap M_{2}$ is a perfect matching of $C_{1}-V(G[E_{3}])$ and every component of $G[E_{3}\backslash E(C_{8})]$ is an even circuit of $G$. Noting also that every circuit different from $C'_{2}$ in $G_{2}[F_{3}\cup F_{4}]$ contains no edges in $B'_{2}$, we have that $G[V(C_{1})]-(V(C_{8})\cap V(C_{1}))$ has two perfect matchings $N_{8}$ and $N_{9}$ such that $E(C_{1})\backslash(E(C_{8})\cup N_{8}\cup N_{9})=(E(C_{1})\backslash E_{3})\cap M_{1}$.
Let $P_{5,1}$, $P_{5,2}$, $\dots$, $P_{5,r}$ be the paths separated by $E(C_{8})\cap E(C_{2})$ in $C_{8}$ such that $u_{1}\in V(P_{5,1})$. We can see $E(Q)\subseteq E(P_{5,1})$. Since $F_{3}\cap F_{4}\subseteq B_{1}\subseteq F_{3}\cup F_{4}$, $B_{1}\cap E(C'_{2})$ is a perfect matching of $C'_{2}$. It implies that for each $i\in[r]\backslash\{1\}$, $M_{2}\cap E(P_{5,i})$ is a perfect matching of $P_{5,i}-(V(P_{5,i})\cap V(C_{2}))$.
Let $C_{9}$ be the circuit of $G$ which is obtained from $C'_{3}$ by replacing each edge $\alpha_{i,j}$ in $C'_{3}$ by the corresponding path $P_{i,j}$. We know $u_{1}\in V(C_{9})$ and $E(C_{9})\cap E(C_{2})\subseteq E(C_{5})\cap E(C_{2})$. Noting $B_{2}\cap E(C'_{2})\cap E(C'_{3})=\emptyset$, we can obtain $E(C_{8})\cap E(C_{9})\cap E(C_{2})=\emptyset$. Noting $B_{3}\backslash(F_{3}\cup F_{4})\subseteq B_{3}\backslash E(C'_{3})$ and that $B_{3}\backslash E(C'_{3})$ is a perfect matching of $G_{2}-V(C'_{3})$, we can obtain $(E(C_{1})\backslash E_{3})\cap M_{1}\subseteq(M_{1}\cap E(C_{1}))\backslash E(C_{9})$ and that $(M_{1}\cap E(C_{1}))\backslash E(C_{9})$ is a perfect matching of $C_{1}-(V(C_{9})\cap V(C_{1}))$. We also can know that $B_{3}\cap E(C'_{3})$ is a perfect matching of $C'_{3}$. This implies that $E(C_{9})\backslash M_{1}$ is a perfect matching of $C_{9}-u_{1}$.
Now we know that $C_{8}$ and $C_{9}$ are two circuits of $G$ meeting the requirements (1)-(5) in Claim 2. By Claim 2, $G$ has $5$ perfect matchings which cover all edges of $G$.
\end{proof}
\end{document}
|
math
|
A simple project for young sewing fans. Stuff and stitch Olly Owl together to make an adorable soft toy to keep. Kit includes stuffing, owl body, plastic needle and thread. Suitable for ages six and up.
|
english
|
Just as a boat shattered in the wind before one's eyes
|
kashmiri
|
BSNL extends validity of its Rs 1,188 plan; these benefits are on offer
BSNL has extended the validity of its Rs 1,188 'Marutham' prepaid plan by 90 days...
New Delhi, Tech Desk. In July this year, BSNL launched the Rs 1,188 'Marutham' prepaid plan, with 345 days of validity, to offer its users better service. The company has now extended the plan's availability window by 90 days, so users can opt for it until 21 January 2020. This is a promotional plan from the company that offers long-term validity. The plan was launched with 345 days of validity, but it is available only in the Andhra Pradesh and Telangana circles. Under this plan, users get a total of 5GB of data.
According to a TelecomTalk report, the 'Marutham' promotional prepaid plan launched in July this year was to be available until 23 October 2019, but after the 90-day extension it will now be on offer until 21 January 2020. The company took this decision in view of the plan's growing popularity among users. The plan provides a total of 5GB of data.
As for the other benefits of BSNL's Rs 1,188 prepaid plan: once the 5GB of data is exhausted, you will need to take a separate data top-up; otherwise, additional usage is charged at 25 paise per MB. The plan also includes unlimited voice calling, with 250 minutes for calls to other networks, and 1,200 SMS over the 345 days.
BSNL has also recently introduced a Rs 429 prepaid plan for its users, which comes with an extra-data benefit. With 71 days of validity, the plan offers users 1GB of data per day, plus an additional 1.5GB of data. The company may revise this plan again in the coming days, after which users would get the benefit of double data.
|
hindi
|
package core
const (
bufferSize = 16
)
// Ray is an internal transport channel between an inbound and an outbound connection.
type Ray struct {
Input chan []byte
Output chan []byte
}
func NewRay() *Ray {
return &Ray{
Input: make(chan []byte, bufferSize),
Output: make(chan []byte, bufferSize),
}
}
type OutboundRay interface {
OutboundInput() <-chan []byte
OutboundOutput() chan<- []byte
}
type InboundRay interface {
InboundInput() chan<- []byte
InboundOutput() <-chan []byte
}
func (ray *Ray) OutboundInput() <-chan []byte {
return ray.Input
}
func (ray *Ray) OutboundOutput() chan<- []byte {
return ray.Output
}
func (ray *Ray) InboundInput() chan<- []byte {
return ray.Input
}
func (ray *Ray) InboundOutput() <-chan []byte {
return ray.Output
}
// UDPRay is a placeholder for a future UDP transport channel; it is not yet implemented.
type UDPRay struct {
}
|
code
|
import datetime
import pika
import beaver.transport
from beaver.transport import TransportException
class RabbitmqTransport(beaver.transport.Transport):
def __init__(self, beaver_config, file_config, logger=None):
super(RabbitmqTransport, self).__init__(beaver_config, file_config, logger=logger)
self._rabbitmq_key = beaver_config.get('rabbitmq_key')
self._rabbitmq_exchange = beaver_config.get('rabbitmq_exchange')
# Setup RabbitMQ connection
credentials = pika.PlainCredentials(
beaver_config.get('rabbitmq_username'),
beaver_config.get('rabbitmq_password')
)
parameters = pika.connection.ConnectionParameters(
credentials=credentials,
host=beaver_config.get('rabbitmq_host'),
port=int(beaver_config.get('rabbitmq_port')),
virtual_host=beaver_config.get('rabbitmq_vhost')
)
self._connection = pika.adapters.BlockingConnection(parameters)
self._channel = self._connection.channel()
# Declare RabbitMQ queue and bindings
self._channel.queue_declare(queue=beaver_config.get('rabbitmq_queue'))
self._channel.exchange_declare(
exchange=self._rabbitmq_exchange,
exchange_type=beaver_config.get('rabbitmq_exchange_type'),
durable=bool(beaver_config.get('rabbitmq_exchange_durable'))
)
self._channel.queue_bind(
exchange=self._rabbitmq_exchange,
queue=beaver_config.get('rabbitmq_queue'),
routing_key=self._rabbitmq_key
)
def callback(self, filename, lines):
timestamp = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ")
for line in lines:
try:
import warnings
with warnings.catch_warnings():
warnings.simplefilter("error")
self._channel.basic_publish(
exchange=self._rabbitmq_exchange,
routing_key=self._rabbitmq_key,
body=self.format(filename, timestamp, line),
properties=pika.BasicProperties(
content_type="text/json",
delivery_mode=1
)
)
except UserWarning:
raise TransportException("Connection appears to have been lost")
except Exception as e:
try:
raise TransportException(e.strerror)
except AttributeError:
raise TransportException("Unspecified exception encountered") # TRAP ALL THE THINGS!
def interrupt(self):
if self._connection:
self._connection.close()
def unhandled(self):
return True
|
code
|
from reportlab.lib.testutils import setOutDir,makeSuiteForClasses, outputfile, printLocation
setOutDir(__name__)
import string, os
import unittest
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfgen.canvas import Canvas
from reportlab.lib import colors
from reportlab.lib.codecharts import KutenRowCodeChart, hBoxText
from reportlab.pdfbase.cidfonts import UnicodeCIDFont, findCMapFile
global VERBOSE
VERBOSE = 0
class KoreanFontTests(unittest.TestCase):
def test0(self):
# if they do not have the font files or encoding, go away quietly
## try:
## from reportlab.pdfbase.cidfonts import CIDFont, findCMapFile
## findCMapFile('KSCms-UHC-H')
## except:
## #don't have the font pack. return silently
## print 'CMap not found'
## return
localFontName = 'HYSMyeongJo-Medium'
c = Canvas(outputfile('test_multibyte_kor.pdf'))
c.setFont('Helvetica', 30)
c.drawString(100,700, 'Korean Font Support')
c.setFont('Helvetica', 10)
c.drawString(100,680, 'Short sample in Unicode; grey area should outline the text with correct width.')
hBoxText(u'\ub300\ud55c\ubbfc\uad6d = Korea',
c, 100, 660, 'HYSMyeongJo-Medium')
hBoxText(u'\uc548\uc131\uae30 = AHN Sung-Gi (Actor)',
c, 100, 640, 'HYGothic-Medium')
## pdfmetrics.registerFont(UnicodeCIDFont('HYSMyeongJo-Medium'))
## c.setFont('Helvetica', 10)
## c.drawString(100,610, "Longer sample From Adobe's Acrobat web page in EUC:")
##
## sample = """\xbf\xad \xbc\xf6 \xbe\xf8\xb4\xc2 \xb9\xae\xbc\xad\xb4\xc2 \xbe\xc6\xb9\xab\xb7\xb1 \xbc\xd2\xbf\xeb\xc0\xcc \xbe\xf8\xbd\xc0\xb4\xcf\xb4\xd9. \xbb\xe7\xbe\xf7 \xb0\xe8\xc8\xb9\xbc\xad, \xbd\xba\xc7\xc1\xb7\xb9\xb5\xe5\xbd\xc3\xc6\xae, \xb1\xd7\xb7\xa1\xc7\xc8\xc0\xcc \xb8\xb9\xc0\xcc \xc6\xf7\xc7\xd4\xb5\xc8 \xbc\xd2\xc3\xa5\xc0\xda \xb6\xc7\xb4\xc2 \xc0\xa5
##\xbb\xe7\xc0\xcc\xc6\xae\xb8\xa6 \xc0\xdb\xbc\xba\xc7\xcf\xb4\xc2 \xb0\xe6\xbf\xec Adobe\xa2\xe7 Acrobat\xa2\xe7 5.0 \xbc\xd2\xc7\xc1\xc6\xae\xbf\xfe\xbe\xee\xb8\xa6 \xbb\xe7\xbf\xeb\xc7\xd8\xbc\xad \xc7\xd8\xb4\xe7 \xb9\xae\xbc\xad\xb8\xa6 Adobe
##Portable Document Format (PDF) \xc6\xc4\xc0\xcf\xb7\xce \xba\xaf\xc8\xaf\xc7\xd2 \xbc\xf6 \xc0\xd6\xbd\xc0\xb4\xcf\xb4\xd9. \xb4\xa9\xb1\xb8\xb3\xaa \xb1\xa4\xb9\xfc\xc0\xa7\xc7\xd1 \xc1\xbe\xb7\xf9\xc0\xc7
##\xc7\xcf\xb5\xe5\xbf\xfe\xbe\xee\xbf\xcd \xbc\xd2\xc7\xc1\xc6\xae\xbf\xfe\xbe\xee\xbf\xa1\xbc\xad \xb9\xae\xbc\xad\xb8\xa6 \xbf\xad \xbc\xf6 \xc0\xd6\xc0\xb8\xb8\xe7 \xb7\xb9\xc0\xcc\xbe\xc6\xbf\xf4, \xc6\xf9\xc6\xae, \xb8\xb5\xc5\xa9, \xc0\xcc\xb9\xcc\xc1\xf6 \xb5\xee\xc0\xbb \xbf\xf8\xba\xbb \xb1\xd7\xb4\xeb\xb7\xce \xc0\xc7\xb5\xb5\xc7\xd1 \xb9\xd9 \xb4\xeb\xb7\xce
##\xc7\xa5\xbd\xc3\xc7\xd2 \xbc\xf6 \xc0\xd6\xbd\xc0\xb4\xcf\xb4\xd9. Acrobat 5.0\xc0\xbb \xbb\xe7\xbf\xeb\xc7\xcf\xbf\xa9 \xc0\xa5 \xba\xea\xb6\xf3\xbf\xec\xc0\xfa\xbf\xa1\xbc\xad \xb9\xae\xbc\xad\xb8\xa6 \xbd\xc2\xc0\xce\xc7\xcf\xb0\xed \xc1\xd6\xbc\xae\xc0\xbb \xc3\xdf\xb0\xa1\xc7\xcf\xb4\xc2 \xb9\xe6\xbd\xc4\xc0\xb8\xb7\xce
##\xb1\xe2\xbe\xf7\xc0\xc7 \xbb\xfd\xbb\xea\xbc\xba\xc0\xbb \xc7\xe2\xbb\xf3\xbd\xc3\xc5\xb3 \xbc\xf6 \xc0\xd6\xbd\xc0\xb4\xcf\xb4\xd9.
##
##\xc0\xfa\xc0\xdb\xb1\xc7 © 2001 Adobe Systems Incorporated. \xb8\xf0\xb5\xe7 \xb1\xc7\xb8\xae\xb0\xa1 \xba\xb8\xc8\xa3\xb5\xcb\xb4\xcf\xb4\xd9.
##\xbb\xe7\xbf\xeb\xc0\xda \xbe\xe0\xb0\xfc
##\xbf\xc2\xb6\xf3\xc0\xce \xbb\xe7\xbf\xeb\xc0\xda \xba\xb8\xc8\xa3 \xb1\xd4\xc1\xa4
##Adobe\xc0\xc7 \xc0\xe5\xbe\xd6\xc0\xda \xc1\xf6\xbf\xf8
##\xbc\xd2\xc7\xc1\xc6\xae\xbf\xfe\xbe\xee \xba\xd2\xb9\xfd \xc0\xcc\xbf\xeb \xb9\xe6\xc1\xf6
##"""
## tx = c.beginText(100,600)
## tx.setFont('HYSMyeongJo-Medium-KSC-EUC-H', 7, 8)
## tx.textLines(sample)
## tx.setFont('Helvetica', 10, 12)
## tx.textLine()
## tx.textLines("""This test document shows Korean output from the Reportlab PDF Library.
## You may use one Korean font, HYSMyeongJo-Medium, and a number of different
## encodings.
##
## The available encoding names (with comments from the PDF specification) are:
## encodings_kor = [
## 'KSC-EUC-H', # KS X 1001:1992 character set, EUC-KR encoding
## 'KSC-EUC-V', # Vertical version of KSC-EUC-H
## 'KSCms-UHC-H', # Microsoft Code Page 949 (lfCharSet 0x81), KS X 1001:1992
## #character set plus 8,822 additional hangul, Unified Hangul
## #Code (UHC) encoding
## 'KSCms-UHC-V', #Vertical version of KSCms-UHC-H
## 'KSCms-UHC-HW-H', #Same as KSCms-UHC-H, but replaces proportional Latin
## # characters with halfwidth forms
## 'KSCms-UHC-HW-V', #Vertical version of KSCms-UHC-HW-H
## 'KSCpc-EUC-H', #Macintosh, KS X 1001:1992 character set with MacOS-KH
## #extensions, Script Manager Code 3
## 'UniKS-UCS2-H', #Unicode (UCS-2) encoding for the Adobe-Korea1 character collection
## 'UniKS-UCS2-V' #Vertical version of UniKS-UCS2-H
## ]
##
## The following pages show all characters in the KS X 1001:1992 standard, using the
## encoding 'KSC-EUC-H' above. More characters (a LOT more) are available if you
## use UHC encoding or the Korean Unicode subset, for which the correct encoding
## names are also listed above.
## """)
##
## c.drawText(tx)
##
## c.setFont('Helvetica',10)
## c.drawCentredString(297, 36, 'Page %d' % c.getPageNumber())
## c.showPage()
##
## # full kuten chart in EUC
## c.setFont('Helvetica', 18)
## c.drawString(72,750, 'Characters available in KS X 1001:1992, EUC encoding')
## y = 600
## for row in range(1, 95):
## KutenRowCodeChart(row, 'HYSMyeongJo-Medium','KSC-EUC-H').drawOn(c, 72, y)
## y = y - 125
## if y < 50:
## c.setFont('Helvetica',10)
## c.drawCentredString(297, 36, 'Page %d' % c.getPageNumber())
## c.showPage()
## y = 700
c.save()
if VERBOSE:
print('saved '+outputfile('test_multibyte_kor.pdf'))
def makeSuite():
return makeSuiteForClasses(KoreanFontTests)
#noruntests
if __name__ == "__main__":
VERBOSE = 1
unittest.TextTestRunner().run(makeSuite())
printLocation()
|
code
|
(2) Find a true church in which the Bible is being taught.
|
kashmiri
|
// Manipulating JavaScript Objects
// I worked on this challenge: by myself.
// There is a section below where you will write your code.
// DO NOT ALTER THIS OBJECT BY ADDING ANYTHING WITHIN THE CURLY BRACES!
var terah = {
name: "Terah",
age: 32,
height: 66,
weight: 130,
hairColor: "brown",
eyeColor: "brown"
}
// __________________________________________
// Write your code below.
var adam = {
name: "Adam"
}
terah["spouse"] = adam;
terah.weight = 125;
delete terah.eyeColor;
adam["spouse"] = terah
terah["children"] = {};
var carson = {
name: "Carson"
}
terah.children["carson"] = carson
var carter = {
name: "Carter"
}
terah.children["carter"] = carter
var colton = {
name: "Colton"
}
terah.children["colton"] = colton
adam["children"] = terah.children
// __________________________________________
// Reflection: Use the reflection guidelines
// What tests did you have trouble passing? What did you do to make it pass? Why did that work?
// I had a terrible time trying to assign the carter object as an object within the children property
// for the terah object. I was trying to base it off of creating the children object within the
// children property, but that wasn't working. Essentially, I was trying to do terah["children"] =
// children["caron"] = caron; all that did was just insert the name property from the carson object,
// but not the object itself. I took a break for a few minutes and just thought about how would I
// call the carson object within the children property of terah? Well, the notation would be:
// terah.children.carson; I then thought, why not use that notation and set it equal to the object
// carson? Well, turns out that worked, although I tweaked the notation to use brackets for the
// property name carson.
// How difficult was it to add and delete properties outside of the object itself?
// Very easy to delete and add. The only issue I had was adding an object within an object that is a
// property of an object; this came up when trying to add the carson object as a property of the
// children object, which is then a property of the terah object. I finally figured out the notation as
// described in my previous response.
// What did you learn about manipulating objects in this challenge?
// I learned how to assign an object as a property within an object, essentially creating nested
// objects, and how to call them. I also learned, through the syntax at the end of the tests, that if
// you create a property in an object that refers to the object itself, JavaScript uses a placeholder
// [Circular] to denote that, rather than displaying the object, which would then lead down a rabbit
// hole of displaying the same object ad infinitum.
// __________________________________________
// Driver Code: Do not alter code below this line.
function assert(test, message, test_number) {
if (!test) {
console.log(test_number + "false");
throw "ERROR: " + message;
}
console.log(test_number + "true");
return true;
}
assert(
(adam instanceof Object),
"The value of adam should be an Object.",
"1. "
)
assert(
(adam.name === "Adam"),
"The value of the adam name property should be 'Adam'.",
"2. "
)
assert(
terah.spouse === adam,
"terah should have a spouse property with the value of the object adam.",
"3. "
)
assert(
terah.weight === 125,
"The terah weight property should be 125.",
"4. "
)
assert(
terah.eyeColor === undefined || null,
"The terah eyeColor property should be deleted.",
"5. "
)
assert(
terah.spouse.spouse === terah,
"Terah's spouse's spouse property should refer back to the terah object.",
"6. "
)
assert(
(terah.children instanceof Object),
"The value of the terah children property should be defined as an Object.",
"7. "
)
assert(
(terah.children.carson instanceof Object),
"carson should be defined as an object and assigned as a child of Terah",
"8. "
)
assert(
terah.children.carson.name === "Carson",
"Terah's children should include an object called carson which has a name property equal to 'Carson'.",
"9. "
)
assert(
(terah.children.carter instanceof Object),
"carter should be defined as an object and assigned as a child of Terah",
"10. "
)
assert(
terah.children.carter.name === "Carter",
"Terah's children should include an object called carter which has a name property equal to 'Carter'.",
"11. "
)
assert(
(terah.children.colton instanceof Object),
"colton should be defined as an object and assigned as a child of Terah",
"12. "
)
assert(
terah.children.colton.name === "Colton",
"Terah's children should include an object called colton which has a name property equal to 'Colton'.",
"13. "
)
assert(
adam.children === terah.children,
"The value of the adam children property should be equal to the value of the terah children property",
"14. "
)
console.log("\nHere is your final terah object:")
console.log(terah)
|
code
|
<?php

/* MyBlogBundle:Page2:edit.html.twig */
class __TwigTemplate_c78cea0ab062f04153862300de6451171983fb4de6fda3a74e0d4574477bc359 extends Twig_Template
{
    public function __construct(Twig_Environment $env)
    {
        parent::__construct($env);

        // line 1
        try {
            $this->parent = $this->env->loadTemplate("::base.html.twig");
        } catch (Twig_Error_Loader $e) {
            $e->setTemplateFile($this->getTemplateName());
            $e->setTemplateLine(1);

            throw $e;
        }

        $this->blocks = array(
            'body' => array($this, 'block_body'),
        );
    }

    protected function doGetParent(array $context)
    {
        return "::base.html.twig";
    }

    protected function doDisplay(array $context, array $blocks = array())
    {
        $this->parent->display($context, array_merge($this->blocks, $blocks));
    }

    // line 3
    public function block_body($context, array $blocks = array())
    {
        // line 4
        echo "<h1>Page2 edit</h1>
";
        // line 6
        echo $this->env->getExtension('form')->renderer->renderBlock((isset($context["edit_form"]) ? $context["edit_form"] : $this->getContext($context, "edit_form")), 'form');
        echo "
<ul class=\"record_actions\">
<li>
<a href=\"";
        // line 10
        echo $this->env->getExtension('routing')->getPath("page2");
        echo "\">
Back to the list
</a>
</li>
<li>";
        // line 14
        echo $this->env->getExtension('form')->renderer->renderBlock((isset($context["delete_form"]) ? $context["delete_form"] : $this->getContext($context, "delete_form")), 'form');
        echo "</li>
</ul>
";
    }

    public function getTemplateName()
    {
        return "MyBlogBundle:Page2:edit.html.twig";
    }

    public function isTraitable()
    {
        return false;
    }

    public function getDebugInfo()
    {
        return array ( 57 => 14, 50 => 10, 43 => 6, 39 => 4, 36 => 3, 11 => 1,);
    }
}
|
code
|
West Bengal is a state in India.
|
kashmiri
|
High-Efficiency Marine Generator - Bossgoo.com
Description: yellow double-bearing marine generator, double-bearing 60 Hz marine generator, 1800 rpm double-bearing marine generator
Home > Products > Evotec Marine Generators > Lower-Voltage Marine Generators > High-Efficiency Marine Generator
Evotec generators can be used for land-based or marine applications. The generator's insulation system uses a special epoxy-based resin; this ensures correct winding insulation and does not emit harmful gases. High-efficiency marine generators, single-bearing marine generators, 60 Hz single-bearing marine generators and 1800 rpm single-bearing marine generators all belong to the Evotec marine generator range.
Yellow double-bearing marine generator
Double-bearing 60 Hz marine generator
1800 rpm single-bearing marine generator
Low-voltage 110 V to 690 V marine AC alternator - Contact Now
Low-power marine AC alternator - Contact Now
Three-phase synchronous marine generator - Contact Now
Marine alternator with auxiliary winding - Contact Now
Lower-voltage marine generator for parallel operation - Contact Now
Low-voltage marine generator with IP23 - Contact Now
High-efficiency marine generator - Contact Now
Low-voltage 60 Hz marine generator - Contact Now
Yellow double-bearing marine generator | Double-bearing 60 Hz marine generator | 1800 rpm double-bearing marine generator | Blue double-bearing marine generator | Low-power double-bearing marine generator | 1800 rpm blue double-bearing marine generator | Double-bearing marine generator | 1000 kW double-bearing marine generator
|
hindi
|
# utils
random useful utilities
|
code
|
Activision has used the official “Call of Duty” Twitter account to post word that the announcement of this year’s entry in the popular gaming series will be made this coming Sunday.
A teaser site that the account links to indicates the announcement will hit on Sunday, May 4th at 10AM US-PST and will include the first trailer for the game which is being produced by Sledgehammer Games. The teaser website links to a series of articles about private military companies and the mercenaries they employ.
Game Informer has also released a brand new image from the game, which is codenamed ‘Blacksmith’ and shows off soldiers in slightly futuristic armor. The magazine will release its new issue, with twelve pages of exclusive and extensive details on the new game, through its digital store at the same time the COD teaser site goes up.
|
english
|
Home › Recent › 3 Day Lumia Mad Rush Sale!
3 Day Lumia Mad Rush Sale!
You can get up to 70% off on selected Lumia models at Nokia stores and other select stores nationwide.
|
english
|
You will need to do this immediately unless there is an inquest (in which case the certificate is issued afterwards). There is no cost for the certificate.
If the person died in hospital, the hospital will provide you with this.
If the person has died at home, you should call the person’s Doctor.
All deaths occurring in Scotland must be registered within 8 days of their occurrence. You will need to attend, in person, one of the six Registration Offices within the Stirling Council area to obtain a Death Certificate.
The first Death Certificate is free however, to obtain a copy there is an associated fee.
Once you have registered the death, you can begin to arrange the funeral. The deceased may already have a Funeral Plan in place, or their wishes outlined within a Will or Testament.
It is most likely that you will appoint a Funeral Director, but it’s also possible to arrange the funeral yourself. Advice on what you may need to arrange is on our Arranging a Funeral pages.
As well as letting friends and loved ones know, there are many organisations you need to notify when a person passes away. You should do this as soon as possible after receiving the death certificate.
Some agencies/companies may require you to attend in person, but you should initially call or visit their website where they may have an online form that you can complete. It will be necessary to supply an official copy of the death certificate or return original documents to them, and give details of the Executor or Administrator of the Estate.
Passport Office to cancel their passport. You will be required to return the person's passport to Her Majesty's Passport Office (HMPO).
HM Revenue & Customs (HMRC) for their taxes.
Department for Work and Pensions (DWP) to stop their State Pension and benefits.
Driver and Vehicle Licensing Agency (DVLA) to cancel their driving license, car tax and car registration documents.
Local Council for their Council Tax, electoral register and other housing benefits.
Public sector or armed forces pension scheme for their pension.
The person who died might have had outstanding debts or payment arrangements with companies that need to be settled. How you sort out the person’s financial affairs will depend on whether or not they have made a Will or Testament.
The person's landlord or housing association. If the person was a joint tenant, it will be necessary to arrange a change in tenancy.
Insurance companies (vehicle, home, life etc.).
Social media sites such as Facebook, Instagram, Twitter and Reddit.
Financial or banking applications including PayPal and Amazon.
Email accounts such as Google, Hotmail, and Yahoo.
Media sites such as Netflix, YouTube or Amazon Prime.
|
english
|
"It is getting late. Meeting is easier now. Please, let us get on Skype once a week."
|
kashmiri
|
Female foeticide, practised since older times owing to certain cultural and socio-economic pressures, is an unethical act. The main causes of female foeticide in Indian society are as follows:
The chief cause of female foeticide is the preference for a male child over a girl child, since a son is seen as the family's main source of income while daughters are regarded merely as consumers. There is a misconception in society that boys serve their parents, whereas daughters are "paraya dhan" (wealth that will belong to another household).
The age-old practice of the dowry system is a major burden for parents in India, and a principal reason for avoiding the birth of daughters.
The status of women in a patriarchal Indian society is low.
Parents believe that sons will carry their family name forward, whereas daughters are only meant to keep house.
The legal sanction of abortion in India is the second major factor behind illegal sex-determination tests and the termination of girl children.
Technological advances have also encouraged female foeticide.
Effective measures for control:
As we all know, female foeticide is a crime and a social calamity for the future of women. We should pay attention to the causes of female foeticide in Indian society and address them steadily, one by one. Female foeticide occurs mainly because of gender discrimination. There should be legal pressure to control it, and all citizens of India should strictly follow the related rules. Anyone found guilty of this cruellest of crimes should be punished without fail. Where doctors are involved, their licences should be permanently cancelled. The marketing of medical equipment used specifically for illegal sex-determination tests and abortion should be stopped. Parents who seek to kill their daughter should be punished. Regular campaigns and seminars should be organised to make young couples aware. Women should be empowered so that they become more conscious of their rights.
|
hindi
|
The BEST stock camera app replacement available on the market. Free download Camera ZOOM FX 5.1.0 Full APK for your android and feel its power!
● Best Photo Mode! Take up to 50 shots in burst mode, and let Camera ZOOM FX decide the best, or choose for yourself!
● Upload photo to Facebook, Twitter, Flickr, WhatsApp, etc.
● Fullscreen shutter: click anywhere on screen to shoot!
|
english
|
Age: Upper age limit as on 19.10.2018: 40 yrs.
• MBBS / BDS / BAMS / BHMS / PHARMA D from a recognized university and Full time MBA in Material Management / Hospital Administration with minimum 10 years experience post MBA in Material Management.
• Out of the total required experience minimum 5 years must be in Senior position.
• The incumbent must have adequate knowledge of computerized operations, procurement and inventory management, and must be well versed in FDA rules and procedures as per the Drugs and Cosmetics Act. The candidate should be able to lead a team of Pharmacists for the functions of Purchase, Stores, Sales & Administration in a dispensary.
Age: Upper age limit as on 19.10.2018: 30 yrs.
B.Sc. in Radiological Imaging Technology from a recognized University OR B.Sc. in any subject / B. Pharmacy with minimum two years Diploma in Medical Imaging Technology from State Board of Technical Education or any equivalent Diploma from a recognized Board / University. Candidates should have minimum 01 year Internship / experience from a large hospital with experience in CT and MRI.
1. Age & experience will be reckoned as on the last date of online application. Experience will be reckoned post required qualifications.
2. Reservation of posts under various categories shall be applicable as per Govt. Rules.
5. Candidates may be offered a higher or lower grade than what is advertised based on their working experience, research track record and overall assessment at the time of interview and recommendation of the Selection Committee.
(a) Allowances : In addition to pay, other allowances including DA, HRA, TA etc. will be admissible as per the prevailing rules of TMC.
(b) Training & Development : All officers will be eligible for institutional financial support for active participation in National and International Medical Meetings, Workshops and Conferences after their probation is closed.
(c) Medical Facility : Will be admissible as per the prevailing rules of TMC.
(d) Accommodation : Residential accommodation will be provided subject to availability.
(e) Retirement Benefits : All are eligible for retirement benefits and pension under the New Pension Scheme.
7. Candidates appointed will be rotated to any of the Units of TMC (Tata Memorial Hospital, Mumbai; ACTREC, Mumbai; Homi Bhabha Cancer Hospital, Sangrur; Homi Bhabha Cancer Hospital & Research Centre, Mullanpur; Homi Bhabha Cancer Hospital & Research Centre, Vizag; Mahamana Pt. Madan Mohan Malviya Cancer Centre (BHU), Varanasi) based on the needs of the Units concerned as and when necessary.
8. The TMC also may exercise the option to offer appointments on “Contract Basis” for a fixed term on a consolidated remuneration.
[I] Candidate shall submit a recent passport size photograph, attested copies of following certificate as a proof of date of birth, qualification, experience, age relaxation for reserved category & Persons with Disability along with the copy of online application form on or before the last date of receiving the application to the H.R.D. Department, 2nd floor, Service Block, Tata Memorial Hospital, Parel, Mumbai – 400 012. It is mandatory to submit a copy of Online application along with copies of relevant certificates, otherwise the candidature will be treated as cancelled.
(ii) Educational Qualification : Mark sheet & Passing Certificate of final examination. (iii) Experience Certificates : Past Employment : Experience certificate indicating the date of joining and relieving. Current Employment : Appointment letter , last Pay Slip, Identity Card.
[II] Through Proper Channel : Persons working under Central / State Government / Autonomous Body / Semi Government Organisations and other Public Sector Undertakings must submit their application through the head of the organization.
10. Tata Memorial Centre also reserves the right not to call any candidates to appear for Written examination / Interview / Skill test without assigning any reason there of.
11. Tata Memorial Centre reserves the right to fix minimum eligibility standard / bench mark and restrict no. of candidates called for Written examination / Interview / Skill test taking into account various factors like no. of vacancies, percentage of marks in Graduate / Post Graduate Degrees etc. Tata Memorial Centre also reserves the right to fix minimum eligibility standard / cut-off marks (Group / Stream / Discipline / Category-wise etc.) while finalizing such candidates to be called for Written test / Interview / Skill test as well as selecting the candidates for final selection after Written test / Interview / Skill test. The decision of the Director, Tata Memorial Centre in this regard shall be final and binding and no correspondence in this regard will be entertained with the candidates.
12. Tata Memorial Centre reserves the right to restrict the number of candidates called for the Written Examination / Interview / Skill test to a reasonable limit, on the basis of qualifications and experience of the applicants. Mere fulfilling the prescribed qualifications will not entitle an applicant to be called for Written test / Interview / Skill test.
13. In case it is found at any stage of recruitment that the candidate does not fulfill the eligibility criteria and / or, the candidate has furnished any incorrect / false / incomplete information or has suppressed any material fact (s), his / her candidature will be cancelled. If any shortcoming is detected, even after appointment, the services of the candidate are liable to be terminated forthwith. Therefore, before applying for any post, the candidate should ensure that he / she fulfills all the eligibility criteria under the norms mentioned in the advertisement.
14. Non Receipt of Application :Tata Memorial Centre does not take any responsibility for non receipt of application through Online / By post for whatsoever be the reason.
15. Late and incomplete applications will be rejected. Canvassing in any form will disqualify the candidature.
16. Legal jurisdiction for any dispute will be at Guwahati.
Last date for online application is 19.10.2018 upto 05.30 p.m. (Indian Standard Time), and hard copies of online applications must be received within 7 days from the last date of online application, i.e. 26.10.2018, at the H.R.D. Department, 2nd floor, Service Block, Tata Memorial Hospital, Parel, Mumbai - 400 012.
|
english
|
10 Results in Biomedical Engineering, United States of America
The Department of Biomedical Engineering offers programs leading to the Master of Science (MS) and Doctor of Philosophy (PhD) degrees, as well as a certificate program for those seeking professional development or a gateway to the MS program.
The Master of Science in Biomedical Engineering is designed to prepare students to apply engineering principles to problems in medicine and biology, and to understand and use the characteristics of living systems and the synthesis of systems and devices to improve human health.
The Department of Biomedical Engineering at the University of North Texas is committed to producing well-rounded, knowledgeable biomedical engineers who are passionate about improving the quality of life for people in Texas, the United States and the world.
Biochemical engineering draws on the operating principles of living systems, the properties of biological materials, and processes that use biological agents such as cells, enzymes and antibodies in the processing of biological materials.
If you are especially keen to use scientific knowledge to design useful products that improve human life, you can start a career in biomedical engineering. In short, biomedical engineering is a multidisciplinary field of biological or medical applications involving engineering principles or engineering tools.
The master's program in Biomedical Engineering at NJIT emphasizes the application of the principles and practices of engineering, science and mathematics to solving clinical problems in medicine. NJIT gives students the opportunity to tailor the program to their career needs; they can select classes in medical imaging, tissue engineering, orthopedic implants and the design of medical instrumentation.
Biomedical Engineering at the University of Rochester is affiliated with both the Hajim School of Engineering and Applied Sciences and the graduate program of the School of Medicine and Dentistry.
Biomedical engineering is an emerging discipline within engineering that relies on collaboration among engineers, physicians and scientists to provide interdisciplinary insight into medical and biological problems. The field has developed its own knowledge base and principles, which are the foundation for the academic programs designed by the Department of Biomedical Engineering at Columbia. The MS program in ...
The Master of Science in Biomedical Engineering will position you well for research and development careers in the biomedical industry. It also prepares you for additional training at the doctoral level.
|
hindi
|
Terrorists have used food stamp fraud to finance attacks in the U.S. and abroad, according to a new report.
The Boston bombers reportedly took $100,000 in public assistance.
The terrorists sometimes use an informal banking system to obscure the money trail.
Terrorists have repeatedly trafficked food stamps to finance attacks in a form of “welfare jihad,” according to a new report.
The Government Accountability Institute found that the Boston Marathon bombers conducted various forms of fraud, including through the Supplemental Nutrition Assistance Program (SNAP), which was formerly called food stamps. Others have committed similar fraud in a scheme GAI called “welfare jihad,” where taxpayer money is used to fund domestic and international attacks.
The Boston bombers, who took $100,000 in public assistance, including through subsidized housing, food and welfare, read an English-language al-Qaeda magazine that taught them how to make their bomb, and encouraged readers to “steal money from disbelievers … as a form of jihad,” GAI reported.
The pair detonated a bomb during the 2013 Boston Marathon, killing three and injuring hundreds more. One brother died during the subsequent manhunt and the other was sentenced to death.
Ali Ugas Mohamud of Arlington, Texas, ran a store that stole $1.4 million in food stamp funds, GAI reported. Mohamud would purchase food stamps and would wire his profits to Somalia. He was sentenced to nearly five years in prison in 2013.
Such fraud is made possible by the expansion of food stamp benefits and the lack of screening into whether recipients are actually eligible, according to GAI.
The report provided numerous examples of fraudsters exploiting the food stamp program to fund terrorist activities.
A grocery store owner in Chicago was imprisoned in 2006 for stealing $1.4 million through food stamp fraud and aiding the Palestinian Islamic Jihad.
In 2010, two Somali store owners in Michigan pleaded guilty to food stamp fraud and running an unlicensed money transfer business that sent money to “hot spots” in the Middle East and Africa.
In Indianapolis, a ring of convenience stores bought food stamps from customers for 50 cents on the dollar. The ringleader was arrested on a return flight from Yemen, and prosecutors suspected possible terror links.
When police noticed that they were taking suitcases full of cash through Sea-Tac Airport in Seattle, the FBI found a “staggering” amount of money, one agent said. The travelers were working for hawalas — businesses that are part of an informal bank system often used to bundle and send money abroad with little paper trail, GAI reported.
When the FBI tracked down 10 clients who were sending these sums of cash out of the U.S., every single one of them was drawing welfare benefits from the U.S.
Waad Ramadan Alwan, who spent eight years helping kill American troops in Iraq, came to the U.S. as a “refugee” and was granted welfare benefits. It was only when his fingerprints were discovered on a bomb that anyone stopped him. Alwan was sentenced to 40 years in prison in 2013 for terrorist activities.
The vulnerabilities have long been known, but the Department of Agriculture has not closed loopholes, according to the report. The way the food stamp system shares responsibilities between the federal government and the states creates perverse incentives to stop fraud.
|
english
|
As a Principal and Project Manager, Melissa Pesci is responsible for overall client satisfaction and support of the HGA team from start to finish. Melissa is a licensed architect who brings 12 years of experience in space planning, furniture selection and interior design. Over the course of her career, she has directed program development and strategic planning for notable organizations in technology and professional services, leading a team of skilled professionals to produce highly original, custom solutions for HGA’s clients.
|
english
|
First, go quickly to the market.
|
kashmiri
|
They *used to be* called thongs before everyone started calling them "flip flops". If a fly gets into the house, I've been known to just take one of them off and swat him with it.... In the video below, the white stick is the handle of the fly swatter that contains the oscillator (I have removed the wire meshes). Two protruding wires (red and white) are the high voltage direct current output which originally attached to the removed wire meshes.
Flip Flop Fly Ball Baseball art and infographics. Examining the sport through drawings, maps, charts, and graphs.... First you’ll need some cheapo flip flops. I got the $1 pairs at Walmart but I like the $1 Old Navy flip flops best. You’ll also need scissors, 1/4 yard of fabric (or fabric scraps measuring 4 1/2 X 24 inches – 4 pieces), and hot glue (optional).
Before we get into the meat of today’s discussion of cleaning your fashion 'flops, a word on men in flip-flops: I am pro, provided that one respects the standard clause that if you’re going to bare your toes to the world, good citizenship dictates that you tend to your feet, in the grooming sense.
You searched for: flip flop party! Etsy is the home to thousands of handmade, vintage, and one-of-a-kind products and gifts related to your search. No matter what you’re looking for or where you are in the world, our global marketplace of sellers can help you find unique and …
You can create a firm base that won’t dull your cutter by duct-taping a piece of Styrofoam to a section of 2-by-4. With a handheld drill, use measured force and back out when you feel the cutter punch through the flip-flop.
8/05/2013 · OluKai and Clarks, for example, both make flip flops that look nice and form to your feet in a way similar to the Oakleys, without the massive "O" they're compelling to stick on everything they sell.
|
english
|
\begin{document}
\author{Manuel del Pino\footnote{Departamento de Ingenier\'ia Matem\'atica
and CMM, Universidad de Chile, Casilla 170, Correo 3, Santiago,
Chile. E-mail: delpino@dim.uchile.cl. Author supported by grants
Fondecyt 1070389 and FONDAP (Chile).}\quad Pierpaolo Esposito
\footnote{Dipartimento di Matematica e Fisica, Universit\`a degli
Studi ``Roma Tre", Largo S. Leonardo Murialdo, 1 -- 00146 Roma,
Italy. E-mail: esposito@mat.uniroma3.it. Author supported by the
PRIN project ``Critical Point Theory and Perturbative Methods for
Nonlinear Differential Equations" and the Firb-Ideas project
``Analysis and Beyond".} \quad Pablo
Figueroa\footnote{Departamento de Matem\'atica, Pontificia
Universidad Cat\'olica de Chile, Avenida Vicuna Mackenna 4860,
Macul, Santiago, Chile. E-mail: pfigueros@mat.puc.cl. Author
supported by grants Proyecto Anillo ACT-125 and Fondecyt
Postdoctorado 3120039 (Chile).} \quad Monica Musso
\footnote{Departamento de Matem\'atica, Pontificia Universidad Cat\'olica de Chile, Avenida
Vicuna Mackenna 4860, Macul, Santiago, Chile. E-mail:
mmusso@mat.puc.cl. Author supported by Fondecyt grant 1040936
(Chile), and by PRIN project ``Metodi variazionali e
topologici nello studio di fenomeni non lineari''.}}
\title{Non-topological condensates for the self-dual Chern-Simons-Higgs model}
\begin{abstract}
\noindent For the abelian self-dual Chern-Simons-Higgs model we
address existence issues of periodic vortex configurations -- the
so-called condensates-- of non-topological type as $k \to 0$,
where $k>0$ is the Chern-Simons parameter. We provide a
positive answer to the long-standing problem on the existence of
non-topological condensates with magnetic field concentrated at
some of the vortex points (as a sum of Dirac measures) as $k \to
0$, a question which is of definite physical interest.
\end{abstract}
\vskip 0.2truein
\noindent {\bf Keywords}:
\noindent {\bf AMS subject classification}:
\vskip 0.2truein
\section{Introduction and statement of main results}
The Chern-Simons vortex theory is a planar theory which is
physically relevant in connection with high critical temperature
superconductivity, the quantum Hall effect and anyonic particle
physics, as widely discussed by Dunne \cite{D}. Hong-Kim-Pac
\cite{HKP} and Jackiw-Weinberg \cite{JW} have proposed an abelian
self-dual model where the electrodynamics is governed only by the
Chern-Simons term. Over the Minkowski space
$(\mathbb{R}^{1+2},g)$, with metric tensor $g=\hbox{diag
}(1,-1,-1)$, the model is described by the following Lagrangean
density:
$${\cal L}({\cal A},\phi)=\frac{k}{4}\epsilon^{\alpha \beta
\gamma}A_\alpha F_{\beta \gamma}+D_\alpha \phi \overline{D^\alpha
\phi}-\frac{1}{k^2}|\phi|^2\left(|\phi|^2-1 \right)^2,$$
where the Chern-Simons coupling parameter $k>0$ measures the
strength of the Chern-Simons term and the antisymmetric
Levi-Civita tensor $\epsilon^{\alpha \beta \gamma}$ is fixed with $\epsilon^{0
1 2}=1$. The metric tensor $g$ is used to lower and raise indices
in the usual way, and the standard summation convention over
repeated indices is adopted. The gauge potential ${\cal A}=-i
A_\alpha dx^{\alpha}$ is a $1$-form (a connection over the
principal bundle $\mathbb{R}^{1+2}\times U(1)$), $
A_\alpha:\mathbb{R}^{1+2}\to \mathbb{R}$ for $\alpha=0,1,2$, and
the Higgs field $\phi:\mathbb{R}^{1+2} \to \mathbb{C}$ is the
matter field. The gauge field $F_{\cal A}=-\frac{i}{2}F_{\alpha
\beta}dx^\alpha \wedge dx^\beta$ is a $2$-form (the curvature of
${\cal A}$), where $F_{\alpha \beta}=\partial_\alpha
A_\beta-\partial_\beta A_\alpha$, and the Higgs field $\phi$ is
weakly coupled with the gauge potential ${\cal A}$ through the
covariant derivative $D_A$ as follows: $D_A \phi=D_\alpha \phi\,
dx^\alpha$, $D_\alpha \phi=\partial_\alpha \phi-i A_\alpha \phi$
for $\alpha=0,1,2$.
\noindent The self-dual regime has been identified by Hong-Kim-Pac \cite{HKP} and
Jackiw-Weinberg \cite{JW} through the choice of the
``triple-well" potential $\frac{1}{k^2}|\phi|^2 (|\phi|^2-1)^2$,
which leads to a Bogomol'nyi reduction \cite{Bog} for the
Chern-Simons-Higgs model, as we discuss below. Vortices are
time-independent ($x^0$ is the time-variable) configurations
$(\mathcal{A},\phi)$ which solve the Euler-Lagrange equations
\begin{equation} \lambdabdabel{ELequations}
\left\{ \begin{array}{l} \displaystyle D_\mu D^\mu \phi=-\frac{1}{k^2}(|\phi|^2-1) (3|\phi|^2-1) \phi \\
\displaystyle \frac{k}{2}\epsilon^{\mu \alphaha \beta} F_{\alphaha
\beta}=J^\mu:=i \left(\phi \overline{D^\mu
\phi}-\overline{\phi}D^\mu \phi\right) \epsilonnd{array} \right.
\epsilonnd{equation}
and have finite energy. In the self-dual regime, for
energy-minimizing vortices (at given magnetic flux) the
second-order Euler-Lagrange equations are equivalent to the
first-order self-dual equations
\begin{equation}\label{CSeqs}
\left\{ \begin{array}{l} D_\pm\phi=0 \\
F_{12} \pm \frac{2}{k^2}|\phi|^2(|\phi|^2-1)=0 \\
kF_{12}+2A_0|\phi|^2=0,
\end{array}\right.
\end{equation}
where $D_{\pm}=D_1 \pm i D_2 $ and the last equation is usually
referred to as the Gauss law. In the sequel, we restrict our
attention to energy-minimizing vortices (at given magnetic flux),
and we will simply refer to them as vortices.
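\noindent Let us briefly sketch the Bogomol'nyi reduction behind this equivalence (schematically, with a consistent choice of the $\pm$ signs and discarding divergence terms, which is legitimate for finite-energy fields). After eliminating $A_0$ through the Gauss law, the static energy can be rearranged as
$$E=\int_{\mathbb{R}^2}\left[|D_\pm \phi|^2+\frac{k^2}{4|\phi|^2}\left(F_{12}\pm \frac{2}{k^2}|\phi|^2(|\phi|^2-1)\right)^2\right]\mp \int_{\mathbb{R}^2}F_{12},$$
so that $E\geq |\Phi|$ at given magnetic flux $\Phi$, with equality exactly when the first-order equations above hold.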
\noindent In the physical interpretation, the electric field $\vec{E}=(\partial_1 A_0,\partial_2 A_0,0)$ is planar, the magnetic field $\vec{B}=(0,0,F_{12})$ is in the orthogonal direction, and $J^0$, $\vec{J}=(J^1,J^2)$ can be identified with the charge density and the current density, respectively, as in the classical Maxwell theory. Thanks to the Gauss law, vortices are both electrically and magnetically charged, a physically relevant property which was absent in the abelian Maxwell-Higgs model \cite{JaTa,Taubes}. Notice that ${\cal A}$ and $\phi$ are not observable quantities, as they are defined only up to a gauge transformation, whereas the
electric and magnetic fields as well as the magnitude $|\phi|$ of
the Higgs field define gauge-independent quantities. The second
and third equations in (\ref{CSeqs}) only involve observable
quantities, whereas the first one $D_+ \phi=0$ (or $D_-\phi=0$) --
a gauge invariant version of the Cauchy-Riemann equations--
implies holomorphic-type properties for the Higgs field $\phi$ (or
$\bar{\phi}$) in a suitable gauge. Following an approach first
developed by Taubes \cite{Taubes} for the abelian Maxwell-Higgs
model, vortices $(\phi,\mathcal{A})$ can be found in the form:
\begin{equation} \label{1917}
\phi=e^{\frac{u}{2} \pm i\sum_{j=1}^N Arg(z-p_j)},\quad A_0=\pm
\frac{1}{k}(|\phi|^2-1), \quad A_1\pm iA_2=-i (\partial_1\pm
i\partial_2) \log \phi
\end{equation}
as soon as $u=\log |\phi|^2$ does solve the elliptic problem
\begin{equation}\label{1}
-\Delta u= \frac{1}{\epsilon^2}e^u(1-e^u)-4\pi \sum_{j=1}^N
\delta_{p_j},
\end{equation}
where $\epsilon=\frac{k}{2}$ and $p_1,\dots,p_N$ are the zeroes of
$\phi$ (repeated according to their multiplicities)-- usually
referred to as the vortex points (with the convention $N=0$ if
$\phi \not= 0$). We refer the interested reader to
\cite{Tbook,Ybook} and the references therein for more details and
for an extensive discussion of several gauge field theories.
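\noindent For the reader's convenience, let us sketch how \eqref{1} arises from the ansatz \eqref{1917}, choosing for definiteness the upper signs. From $A_1+ iA_2=-i(\partial_1+ i\partial_2)\log \phi$ one gets, in the sense of distributions,
$$F_{12}=-\frac{1}{2}\Delta \log |\phi|^2+ 2\pi \sum_{j=1}^N \delta_{p_j}=-\frac{1}{2}\Delta u+ 2\pi \sum_{j=1}^N \delta_{p_j},$$
since $u=\log|\phi|^2$ behaves like $\log|z-p_j|^2$ near each zero (counted with multiplicity). Inserting this into the second equation in \eqref{CSeqs} and recalling $\epsilon=\frac{k}{2}$ yields exactly
$$-\Delta u=\frac{1}{\epsilon^2}e^u(1-e^u)-4\pi\sum_{j=1}^N \delta_{p_j}.$$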
\noindent For planar vortices, the finite energy condition $\int_{\mathbb{R}^2} e^u(1-e^u)<+\infty$ imposes two possible asymptotic behaviors at infinity. The topological behavior $|\phi|^2=e^u \to 1$ as $|z|\to \infty$ gives the vortex number $N$ the topological meaning of winding number for $\phi$ at infinity (up to a $\pm$ sign, depending on whether $D_+ \phi=0$ or $D_-\phi=0$), yielding quantization
effects for the energy $E$, the magnetic flux $\Phi$ and the
electric charge $Q$ in the class of topological $N-$vortices:
$E=2\pi N$, $\Phi=\pm 2\pi N$ and $Q=\pm 2\pi kN$. The existence
of planar topological vortices has been addressed in
\cite{H,SY2,RWa}. The non-topological behavior $|\phi|^2=e^u \to
0$ as $|z|\to \infty$ has no counterpart in the abelian
Maxwell-Higgs model, and the possible coexistence of topological
and non-topological $N-$vortices is the main new feature in
Chern-Simons theories. After the seminal work \cite{SY1} in a
radial setting with a single vortex point (see also \cite{CHMY}
for related results), it has been a challenging problem to find
planar non-topological $N-$vortices \cite{ChI,CFL} for an
arbitrary configuration of $p_1,\dots,p_N$. Surprisingly, two
different classes have been found by using different limiting
problems: the singular Liouville equation in \cite{ChI} or the
Chern-Simons equation $-\Delta U=e^U(1-e^U)-4\pi \delta_0$ in
\cite{CFL}. Since the latter problem has no scale-invariance, in \cite{CFL} the points
$p_1,\dots,p_N$ are taken along the vertices of a regular
$N-$polygon in order to glue together $U(\frac{x-p_j}{\epsilon})$, $j=1,\dots,N$,
for there is no freedom to adjust the height at each $p_j$ to
account for the interaction, but the approximating function has
an invertible linearized operator.
\noindent Since the theoretical prediction by Abrikosov \cite{Abr}, the appearance of lattice structure, in the form of spatially periodic vortices, has been experimentally observed. To account for it, the model is formulated on
$$\Omega=\{z=t\omega_1+s \omega_2: \:(t,s) \in (-\frac{1}{2},\frac{1}{2}) \times (-\frac{1}{2},\frac{1}{2})\},$$
where $\omega_1,\: \omega_2 \in \mathbb{C} \setminus \{0\}$
satisfy $\hbox{Im }(\frac{\omega_2}{\omega_1})>0$. Condensates are
time-independent configurations $(\mathcal{A},\phi)$ which solve
the Euler-Lagrange equations \eqref{ELequations}, have finite
energy and satisfy the 't Hooft boundary conditions \cite{tHo}:
\begin{equation}\label{tH}
e^{i\xi_k(z+\omega_k)}\phi(z+\omega_k)=e^{i\xi_k(z)}\phi(z),\quad
A_0(z+\omega_k)=A_0(z), \quad \left(A_j +\partial_j
\xi_k\right)(z+\omega_k)=\left(A_j +\partial_j \xi_k\right)(z)
\end{equation}
for all $z\in \Gamma^1\cup \Gamma^2 \setminus \Gamma^k$ and
$k=1,2$, where $\Gamma^1=\{z=t \omega_1
-\frac{1}{2}\omega_2:\:|t|<\frac{1}{2} \}$,
$\Gamma^2=\{z=-\frac{1}{2}\omega_1+t \omega_2:\:|t|<\frac{1}{2}
\}$ and $\xi_1$, $\xi_2$ are real-valued smooth functions defined
in a neighborhood of $\Gamma^2 \cup\{\omega_1+\Gamma^2\}$,
$\Gamma^1 \cup\{\omega_2+\Gamma^1\}$, respectively. For
energy-minimizing vortices (at given magnetic flux) the
Euler-Lagrange equations \eqref{ELequations} are still equivalent
to the self-dual ones \eqref{CSeqs}. Since \eqref{tH} just reduces
to a double periodicity for the observable quantities $F_{12}$ and
$|\phi|$ in $\Omega$, a configuration $(\mathcal{A},\phi)$ in the
form \eqref{1917} does solve \eqref{CSeqs} as soon as $u=\log
|\phi|^2$ is a doubly-periodic solution of \eqref{1} in $\Omega$,
see \cite{CY,T} for an exact derivation.
\noindent Hereafter, up to a translation, let us assume that $\phi\not=0$ on
$\partial \Omega$ (i.e. $p_1,\dots,p_N \in \Omega$) in such a way that the winding number $\hbox{deg
}(\phi,\partial\Omega,0)$ is well-defined, and the vortex number $N$ is simply
given by $|\hbox{deg }(\phi,\partial\Omega,0)|$. By \eqref{tH} we
still have quantization effects as in the case of planar
topological vortices: $E=2\pi N$, $\Phi=\pm 2\pi N$ and $Q=\pm
2\pi kN$, where the $\pm$ sign depends on whether $D_+\phi=0$ or
$D_-\phi=0$. Hereafter, up to replacing $\phi$ with $\bar \phi$, let
us assume that $D_+\phi=0$ and restrict our attention to
energy-minimizing condensates (at given magnetic flux), simply
referred to as condensates.
\noindent Letting $G(z,p)$ be the Green function of $-\Delta$ in
$\Omega$ with pole at $p$:
$$\left\{ \begin{array}{ll} -\Delta G(z,p)=\delta_p-\frac{1}{|\Omega|}&\hbox{in }\Omega\\
\int_\Omega G(z,p)dz=0, & \end{array}\right.$$
one is led to consider the following equivalent regular version of
(\ref{1}):
\begin{equation}
\label{2}-\Delta v=\frac{1}{\epsilon^2} e^{u_0+v}
(1-e^{u_0+v})-\frac{4\pi N}{|\Omega|}\qquad \hbox{in }
\Omega\end{equation} in terms of $v=u-u_0$, where $u_0=-4\pi
\displaystyle \sum_{j=1}^N G(z,p_j)$ and the potential $e^{u_0}$
is a smooth non-negative function which vanishes exactly at
$p_1,\dots,p_N$. By translation invariance, notice that
$G(z,p)=G(z-p,0)$, and $G(z,0)$ can be decomposed as
$G(z,0)=-{1\over 2\pi}\log|z|+H(z)$, where $H$ is a (not
doubly-periodic) function with $\Delta H= \frac{1}{|\Omega|}$ in
$\Omega$. If $v$ is a solution of \eqref{2}, by integration over
$\Omega$ notice that
\begin{equation}\label{ci0}
\int_\Omega e^{u_0+v}(1-e^{u_0+v})=\int_\Omega
|\phi|^2(1-|\phi|^2)=2\epsilon^2 \int_\Omega F_{12} =4\pi N\epsilon^2
\end{equation}
in view of \eqref{CSeqs}, yielding the necessary condition
$$16\pi N\epsilon^2=|\Omega| -4 \int_\Omega\left(e^{u_0+v}-{1\over2}\right)^2<|\Omega|$$
for the solvability. In \cite{CY}, Caffarelli and Yang
show the existence of $0<\epsilon_c< \sqrt{\frac{|\Omega|}{16\pi N}}$ so
that \eqref{1} has a maximal doubly-periodic solution $u_\epsilon$ for
$0<\epsilon<\epsilon_c$, while no solution exists for $\epsilon >\epsilon_c$.
Notice that \eqref{2} admits a variational structure with energy
functional
$$J_\epsilon(v)={1\over2}\int_\Omega|\nabla
v|^2+{1\over2\epsilon^2}\int_\Omega\left(e^{u_0+v}-1\right)^2+{4\pi
N\over|\Omega|}\int_\Omega v$$
where $v \in H^1(\Omega)=\{v \in H_{\text{loc}}^1({\mathbb{R}}^2):\, v\text{
doubly periodic in }\Omega\}$. Later, Tarantello \cite{T} shows that
the maximal solution $u_\epsilon$ is a local minimum for $J_\epsilon$ in
$H^1(\Omega)$, and a second solution $u^\epsilon$ is found as a
mountain-pass critical point for $J_\epsilon$.
\noindent To each solution $u$ of \eqref{1} we can associate the $N-$condensate $(\mathcal{A},\phi)$ in the form \eqref{1917} (with the $+$ sign as we agreed), and let $(\mathcal{A}_\epsilon,\phi_\epsilon)$, $(\mathcal{A}^\epsilon,\phi^\epsilon)$ be the ones corresponding to $u_\epsilon$, $u^\epsilon$. Concerning the asymptotic behavior as $\epsilon \to 0$, by \eqref{ci0} we can expect two classes of $N-$condensates:
\begin{itemize}
\item $|\phi| \to 1$ as $\epsilon \to 0$ (``topological" behavior),
\item $|\phi| \to 0$ as $\epsilon \to 0$ (``non-topological" behavior),
\end{itemize}
to be understood in suitable norms. For example,
$(\mathcal{A}_\epsilon,\phi_\epsilon)$ exhibits ``topological" behavior:
$$|\phi_\epsilon| \to 1 \hbox{ in }C_{\hbox{loc}}(\bar{\Omega}\setminus\{p_1,\dots, p_N\}),$$
with
\begin{equation} \label{1820}
(F_{12})_\epsilon \rightharpoonup 2\pi \sum_{j=1}^N \delta_{p_j} \quad
\hbox{in the sense of measures}
\end{equation}
as $\epsilon \to 0$ according to \eqref{ci0}, see \cite{T}. The
concentration property \epsilonqref{1820} for the magnetic field has a
definite physical interest, and supports the use of the
terminology ``vortex points" for the zeroes $p_1,\dots,p_N$ of the
Higgs field $\phi$. The $N-$condensate $(\mathcal{A}^\epsilon,\phi^\epsilon)$
has in general a different asymptotic behavior as $\epsilon \to 0$:
\begin{itemize}
\item[(i)] when $N=1$, $|\phi^\epsilon|\to 0$ in $C^m(\bar \Omega)$, for
all $m \geq 0$, and $(F_{12})^\epsilon$ is a compact sequence in
$L^1(\Omega)$ (see \cite{T}); \item[(ii)] when $N=2$,
$|\phi^\epsilon|\to 0$ in $C(\bar \Omega)$ and either $(F_{12})^\epsilon$ is a
compact sequence in $L^1(\Omega)$ or $(F_{12})^\epsilon \rightharpoonup
4\pi \delta_q$ in the sense of measures, for some $q
\not=p_1,\:p_2$ with $u_0(q)=\max_\Omega u_0$, depending on
whether
$$I(v)={1\over2}\int_\Omega|\nabla
v|^2-8\pi \log \left(\int_\Omega e^{u_0+v} \right) +{8\pi
\over|\Omega|}\int_\Omega v $$ attains its infimum or not in $H^1(\Omega)$
(see \cite{NoTa3}, and also \cite{DJLW2}); \item[(iii)] when $N\geq
3$, $|\phi^\epsilon|\to 0$ in $C(\bar \Omega)$ and $ (F_{12})^\epsilon
\rightharpoonup 2\pi N \delta_q $ in the sense of measures, for
some $q \not=p_1,\dots,p_N$ with $u_0(q)=\max_\Omega u_0$ (see
\cite{Ch}).
\end{itemize}
In \cite{DJLPW} the existence of $N-$condensates
$(\mathcal{A},\phi)$ so that $|\phi|\to 0$ a.e. in $\Omega$ as $\epsilon
\to 0$ is shown.
Concerning the case $N=2$, it is a very difficult question, which
has been discussed in \cite{CLW,LiWa} for $p_1=p_2$, to know
whether or not $I$ attains the infimum in $H^1(\Omega)$. An
alternative approach of perturbative type has proved to be
successful for $N=2$ \cite{LinYan1} (see also \cite{EsFi} among other things) by constructing a
sequence of $2-$condensates for which the second alternative in (ii) does hold, for a critical point $q$ of $u_0$.
The same approach works as well for $N\geq 3$, provided the
concentration points of the magnetic field are not vortex points.
\noindent The existence of non-topological $N-$condensates with magnetic field concentrated at vortex points as $\epsilon \to 0$ (as in \eqref{1820}) is the main issue from a physical viewpoint and has not received an answer so far. A first partial answer has
been provided by Lin and Yan \cite{LinYan}, who construct
$N-$condensates $(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $(F_{12})_\epsilon
\rightharpoonup 2 \pi N \delta_{p_j}$ in the sense of measures as
$\epsilon \to 0$, as soon as $N>4$ and $p_j$ is a simple vortex point in
$\{p_1,\dots,p_N\}$. As in \cite{CFL}, they make use of the
Chern-Simons equation $-\Delta U=e^U(1-e^U)-4\pi \delta_0$ as
limiting problem, which is not suitable to manage multiple concentration points. Moreover, such a condensate does satisfy
$\max_\Omega |\phi_\epsilon|\geq c>0$ for $\epsilon$ small and $|\phi_\epsilon|\to
0$ in $C_{\hbox{loc}} (\bar \Omega \setminus \{p_j\})$, which fits the notion of ``non-topological" behavior in a weak sense. Our aim is to extend to $N-$condensates the
perturbative approach developed by Chae and Imanuvilov \cite{ChI}
for planar $N-$vortices, based on the use of the singular
Liouville equation as limiting problem. As far as non-topological behavior is concerned, let us stress that the problem on the torus is much more rigid than the planar case,
as well illustrated by the quantization property $\Phi=2\pi N$
(valid just in the doubly-periodic situation). For example, when
$F_{12}$ is concentrated like a Dirac measure at a vortex point
$p_l$, by the use of Liouville profiles it is natural, as we will
see, to have $4\pi(n_l+1)$ as concentration mass of $F_{12}$ at
$p_l$, where $n_l$ is the multiplicity of $p_l$ in the set
$\{p_1,\dots,p_N\}$, and then the relation $2\pi N=4\pi
\displaystyle \sum_{l=1}^m (n_l+1)$ does hold as soon as $F_{12}
\rightharpoonup 4\pi \displaystyle \sum_{l=1}^m (n_l+1)
\delta_{p_l}$ in the sense of measures. In particular, the
concentration of the magnetic field cannot take place at all the
vortex points $p_1,\dots,p_N$ as in the planar case \cite{ChI}. Let us stress that the $N-$condensates constructed in \cite{Nol} have exactly such a concentration property and then violate the balancing condition \eqref{hhh}.
\noindent Our aim is to provide a general answer to the long-standing question on the existence of non-topological $N-$condensates with magnetic field concentrated at some vortex points. Compared with \cite{ChI}, our main result is rather surprising and reads as follows.
\begin{thm} \label{mainbb}
Let $\{p_1,\dots,p_m\}$ be a subset of the vortex set
$\{p_1,\dots,p_N\} \subset \Omega$, $\{p_j\}_j$ be the remaining points and
$n_l$, $n_j$ be the corresponding multiplicities so that
\begin{equation} \label{hhh}
2\pi N=4\pi \sum_{l=1}^m (n_l+1).
\end{equation}
Letting $\mathcal{H}_0$ be a meromorphic function in $\Omega$
so that $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi \sum_{l=1}^m (n_l+1) G(z,p_l)}$ (which exists and is unique up to rotations), assume that $\mathcal{H}_0$ has zero residue at each $p_1,\dots,p_m$. Letting $\sigma_0(z)=-\left( \int^z \mathcal{H}_{0}(w) dw \right)^{-1}$ (a well-defined meromorphic function), assume that
\begin{equation} \label{ggg}
D_0=\frac{1}{\pi} \left[ \int_{\Omega \setminus \sigma_0^{-1}(B_\rho(0))} e^{u_0+8\pi
\sum_{l=1}^m (n_l+1)G(z,p_l)} - \sum_{l=1}^m (n_l+1)
\int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{dy}{|y|^4}\right]<0
\end{equation}
for small $\rho>0$ and the ``non-degeneracy condition"
$\hbox{det }A \not=0$, where $A$ is given by \eqref{matrixA}. Then,
for $\epsilon>0$ small there exists an $N-$condensate
$(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $|\phi_\epsilon| \to 0$ in $C(\bar
\Omega)$ and
\begin{equation} \label{magconc}
(F_{12})_\epsilon \rightharpoonup 4\pi \displaystyle \sum_{l=1}^m (n_l+1) \delta_{p_l}
\end{equation}
weakly in the sense of measures, as $\epsilon \to 0$.
\end{thm}
\noindent Notice that we can also allow some concentration point
not to be a vortex point, by simply adding it to the vortex set
with null multiplicity. In section \ref{examples} we will see that
in the double-vortex case $N=2$ Theorem \ref{mainbb} essentially
recovers the result in \cite{EsFi,LinYan1} concerning single-point
concentration, for the assumptions just reduce to have the
concentration point $q \not=p_1,p_2$ as a non-degenerate critical
point of $u_0$ with $D_0<0$ (for similar
results concerning the Liouville equation, see \cite{BaPa,dkm,EGP} in case of bounded domains with
Dirichlet b.c. and \cite{Fi} in case of a flat two-torus). Despite of the complex statement, for a rectangle ${\Omega}ega$ with $p_1=0$,
$p_2=\frac{\omega_1}{2}$, $p_3=\frac{\omega_2}{2}$ and $p_4=\frac{\omega_1+\omega_2}{2}$, and $n_1,n_2,n_3,n_4$ even
multiplicities with $\frac{n_4}{2}$ odd, we will check in section \ref{examples} that the assumptions of Theorem
\ref{mainbb} do hold for $m=1$ and concentration point $p_1$, up to perform a small translation so to have $p_j \in {\Omega}ega$. For computational simplicity, the ``non-degeneracy condition" will be checked just for a square with $n=n_3=2$ and $(n_1,n_2)=(2,0)$ or viceversa. Even more important, examples with $m\geq 2$ will be discussed in section \ref{general}.
\noindent Following an approach developed by Tarantello \cite{T} and exploited in \cite{NoTa3}, \eqref{2} can be seen as a perturbed mean-field equation \eqref{3} with potential $e^{u_0}$ and unperturbed part
\begin{equation} \label{10100}
-\Delta w= 4\pi N \left(\frac{e^{u_0+w} }{\int_\Omega
e^{u_0+w}}-\frac{1}{|\Omega|}\right).
\end{equation}
Since $e^{u_0}$ vanishes like $|z-p_l|^{2n_l}$ near each $p_l$, $l=1,\dots,m$, the Liouville equation $-\Delta U=|z|^{2n} e^U$ will play a central role in the construction of an approximating function in the perturbative approach. Since $U_{\delta,\sigma_0}=\log \frac{8 \delta^2}{(\delta^2 +|\sigma_0|^2)^2}$ does solve $-\Delta U= |\sigma_0'|^2 e^U$ in $\Omega \setminus \{\hbox{poles of }\sigma_0\}$, a natural choice is $\sigma_0=z^{n+1}$ when $m=1$ and $p_1=0$. Letting $P$ be a projection operator on the space of doubly-periodic functions, the approximation rate of $PU_{\delta,z^{n+1}}$ is unfortunately not sufficiently small to carry out the argument, a problem which often arises in perturbation arguments and is usually overcome by refining the ansatz via linear theory around the approximating function. However, such a procedure would require several subsequent refinements, leading in general to a high level of complexity. Inspired by \cite{DEM4}, in section \ref{improved} we will instead take advantage of the inner parameter $\sigma_0$, present in the Liouville formula, to get improved profiles. Since $PU_{\delta,\sigma_0} \sim U_{\delta,\sigma_0}-\log(8\delta^2)+\log |\sigma_0|^4+8\pi (n+1)G(z,0)$ as $\delta \to 0$, $PU_{\delta,\sigma_0}$ is a good approximate solution of \eqref{10100} if $\frac{|\sigma_0'|^2}{|\sigma_0|^4}=|(\frac{1}{\sigma_0})'|^2=e^{u_0+8\pi(n+1)G(z,0)}$. By definition of $\mathcal{H}_0$, it is enough to find a meromorphic $\sigma_0$ with $(\frac{1}{\sigma_0})'=\mathcal{H}_0$, a solvable equation if and only if $\mathcal{H}_0$ has zero residue at its unique pole $0$. As we will discuss precisely in Remark \ref{remark2bis}, the assumption on the residues of $\mathcal{H}_0$ is then necessary in our context.
Moreover, since $\mathcal{H}_0$ has a pole at $0$ of multiplicity $n+2$ and zeroes $p_j$ of multiplicities $n_j$, by the property $\mathcal{H}_0(z+\omega_j)=e^{i\theta_j}\mathcal{H}_0(z)$, $j=1,2$, near $\partial \Omega$ for some $\theta_1,\theta_2 \in \mathbb{R}$ we deduce that
$$0=\frac{1}{2\pi i} \int_{\partial \Omega} \frac{\mathcal{H}_0'}{\mathcal{H}_0}dz=n+2-\sum_j n_j=2(n+1)-N,$$
providing \eqref{hhh} as a necessary and sufficient condition for the existence of such $\mathcal{H}_0$ (the sufficient part is shown in the next section). The condition $D_0<0$ and the ``non-degeneracy condition'' will be necessary to determine $\delta$ and $a$, a sort of small translation parameter accounting for the perturbation term in \eqref{3}, according to the asymptotic expansion of the corresponding ``reduced equations" as derived in section \ref{reduced}. Theorem \ref{mainbb} is proved in section \ref{mainresults} when $m=1$ and in section \ref{general} when $m \geq 2$.
\section{Improved Liouville profiles} \label{improved}
\noindent Let us decompose any solution $v$ of (\ref{2}) as
$v=w+c$, where $c=\frac{1}{|\Omega|}\int_\Omega v$. In this way,
$w$ has zero average: $\int_\Omega w\, dz=0$, and by (\ref{ci0}) one
has
$$e^{2c} \int_\Omega e^{2u_0+2w}-e^c \int_\Omega e^{u_0+w}+4\pi N
\epsilon^2=0.$$ This last identity then provides a relation
between $c$ and $w$ in the form $c=c_\pm (w)$, where
\begin{equation}\label{cc}
e^{c_\pm(w)}=\frac{8\pi N \epsilon^2}{\int_\Omega e^{u_0+w} \mp
\sqrt{(\int_\Omega e^{u_0+w})^2-16\pi N \epsilon^2 \int_\Omega
e^{2u_0+2w}}}, \end{equation} whenever $\big(\int_\Omega
e^{u_0+w}\big)^2-16\pi N \epsilon^2 \int_\Omega e^{2u_0+2w}\ge 0$.
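For completeness, the explicit resolution of the quadratic equation in $e^c$ runs as follows:
$$e^{c_\pm(w)}=\frac{\int_\Omega e^{u_0+w}\pm \sqrt{(\int_\Omega e^{u_0+w})^2-16\pi N \epsilon^2 \int_\Omega e^{2u_0+2w}}}{2\int_\Omega e^{2u_0+2w}}=\frac{8\pi N \epsilon^2}{\int_\Omega e^{u_0+w}\mp \sqrt{(\int_\Omega e^{u_0+w})^2-16\pi N \epsilon^2 \int_\Omega e^{2u_0+2w}}},$$
the last equality following by multiplying numerator and denominator by $\int_\Omega e^{u_0+w}\mp \sqrt{\cdots}$, which recovers \eqref{cc}.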
The two possible choices of ``plus'' or ``minus'' sign in
\eqref{cc} are another indication of multiple solutions for
\eqref{2}. In \cite{T}, topological solutions are
characterized as those satisfying \eqref{cc} with the ``plus'' sign. Since
we are interested in non-topological solutions, it is natural to
restrict our attention to the case $c=c_-(w)$, reducing problem
(\ref{2}) to the following equation in $\Omega$:
\begin{equation} \label{3} \left\{ \begin{array}{rl} -\Delta w=& \displaystyle 4\pi
N\left(\frac{e^{u_0+w} }{\int_\Omega
e^{u_0+w}}-\frac{1}{|\Omega|}\right)
\\
&\displaystyle+\frac{64 \pi^2N^2 \epsilon^2 \int_\Omega
e^{2u_0+2w}}{(\int_\Omega e^{u_0+w}+\sqrt{(\int_\Omega
e^{u_0+w})^2-16\pi N\epsilon^2\int_\Omega
e^{2u_0+2w}})^2}\left(\frac{e^{u_0+w}}{\int_\Omega
e^{u_0+w}}-\frac{
e^{2u_0+2w}}{\int_\Omega e^{2u_0+2w}}\right) \\
\displaystyle \int_\Omega w=0. \end{array}\right.\end{equation}
\noindent Here and in the next sections, we first discuss the case $m=1$ in Theorem \ref{mainbb}. Assume that $p$ appears $n$ times in $\{p_1,\dots,p_N\}$, and denote by $p_j$'s the remaining points in the set $\{p_1,\dots,p_N\}$ with corresponding multiplicities $n_j$'s. Up to a translation, we are assuming that $p_j \in \Omega$
for $j=1,\dots,N$, a crucial property which will simplify the arguments below. Since the assumptions in Theorem \ref{mainbb} for the concentration at $p$ are just local properties, for simplicity in the notations let us simply consider the case $p=0$.
\noindent Since $e^{u_0}$ behaves like $|z|^{2n}$ as $z \to 0$, the local profile of $w$ near $0$ will be given in terms of solutions of the ``singular" Liouville equation:
\begin{equation}\label{starr}
-\Delta U= |z|^{2n}e^U.
\end{equation}
Recall that by Liouville formula the function
$$\log \frac{8|F'|^2}{(1+|F|^2)^2}$$
does solve $-\displaystyleelta U=e^U$ in the set $\{F'\not= 0 \}$, for any
holomorphic map $F$. For entire solutions of \eqref{starr} with finite energy
$\int_{\mathbb{R}^2} |z|^{2n}e^U<+\infty$, it is well known that
necessarily $F(z)=\frac{z^{n+1}-a}{\delta}$, and then all the
entire finite-energy solutions of \eqref{starr} are classified as
$$U_{\delta,a}(z)=\log \frac{8 (n+1)^2 \delta^2}{(\delta^2 +|z^{n+1}-a|^2)^2},\quad \delta>0, \:a \in \mathbb{C}.$$
Moreover, we have that $\int_{\mathbb{R}^2}
|z|^{2n}e^{U_{\delta,a}}=8\pi(n+1)$. Since by construction the corresponding
$v=w+c_-(w)$ will satisfy
$$ \frac{1}{\epsilon^2} e^{u_0+v}\left(1-e^{u_0+v}\right) \rightharpoonup 8\pi (n+1) \delta_0$$
in the sense of measures, the balance condition
\begin{equation} \label{balance}
2\pi N=4 \pi(n+1)
\end{equation}
is necessary in view of (\ref{ci0}).
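\noindent The quantization of the mass can be checked directly: since $z \mapsto w=z^{n+1}$ is an $(n+1)$-to-$1$ map of $\mathbb{R}^2$ onto itself with Jacobian $(n+1)^2|z|^{2n}$, one computes
$$\int_{\mathbb{R}^2}|z|^{2n}e^{U_{\delta,a}}\,dz=(n+1)\int_{\mathbb{R}^2}\frac{8\delta^2}{(\delta^2+|w-a|^2)^2}\,dw=8(n+1)\delta^2\int_0^{+\infty}\frac{2\pi r\,dr}{(\delta^2+r^2)^2}=8\pi(n+1),$$
in view of $\int_0^{+\infty}\frac{2\pi r\,dr}{(\delta^2+r^2)^2}=\frac{\pi}{\delta^2}$.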
\noindent Assume for simplicity $e^{u_0}=|z|^{2n}$. Since $ \int_\Omega
|z|^{2n} e^{U_{\delta,a}}\to 8\pi(n+1)$ as $\delta \to 0$, by
(\ref{balance}) we have the asymptotic matching of $-\Delta
U_{\delta,a}= |z|^{2n} e^{U_{\delta,a}}$ and $4\pi N
\frac{|z|^{2n} e^{U_{\delta,a}}}{\int_\Omega
|z|^{2n}e^{U_{\delta,a}}}$ as $\delta \to 0$. To correct
$U_{\delta,a}$ into a doubly-periodic function, we consider the
projection $PU_{\delta,a}$ of $U_{\delta,a}$ as the solution of
$$\left\{ \begin{array}{ll} -\Delta PU_{\delta,a}=-\Delta U_{\delta,a} +\frac{1}{|\Omega|} \int_\Omega \Delta U_{\delta,a}& \hbox{in }\Omega\\
\int_\Omega PU_{\delta,a}=0.& \end{array} \right.$$ In this way,
we gain the constant term
$$\frac{1}{|\Omega|} \int_\Omega \Delta U_{\delta,a} =-\frac{1}{|\Omega|} \int_\Omega |z|^{2n} e^{U_{\delta,a}} \to -\frac{4\pi N}{|\Omega|} \qquad \hbox{as }\delta \to 0$$
in view of (\ref{balance}), and
we still need to check that the difference between $-\Delta
U_{\delta,a}= |z|^{2n}e^{U_{\delta,a}}$ and $4\pi N \frac{|z|^{2n}
e^{PU_{\delta,a}}}{\int_\Omega |z|^{2n} e^{PU_{\delta,a}}}$ is
asymptotically small. Thanks to an asymptotic expansion of
$PU_{\delta,a}$ in terms of $U_{\delta,a}$, we will see that the
difference is small (i.e. $PU_{\delta,a}$ is an approximating
function of (\ref{3})) but behaves at most like
$|z|^{2n}e^{U_{\delta,a}}O(|z|+\delta^2)$, which is not
sufficiently small. A first refinement of the ansatz via the
linear theory around $PU_{\delta,a}$ could improve the pointwise
error estimate to $|z|^{2n}e^{U_{\delta,a}}O(|z|^2+\delta^2)$,
which unfortunately is in general still not enough. Since there is
a strong mismatch between the dependence of $U_{\delta,a}$ on
$z^{n+1}$ and that of the error on $z$ (or even on $z^2$), we
should push such a procedure through several subsequent
refinements. Instead, we play directly with the inner parameters
the choice of $F(z)$ on bounded domains. Hereafter, let us fix an
open simply-connected domain $\tildelde {\Omega}ega$ so that
$\overline{{\Omega}ega}\subset \tildelde {\Omega}ega$ and $\tildelde {\Omega}ega \cap
\,\left(\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)=\{0\}$, and
set $\mathcal{M}(\overline{{\Omega}ega})=\{ \sigmama
\Big|_{\overline{{\Omega}ega}}: \sigmama\hbox{ meromorphic in }\tildelde
{\Omega}ega\}$. Let $\deltalta \in (0,+\infty)$, $a\in \mathbb{C}$ and
$\sigmama \in \mathcal{M}(\overline{{\Omega}ega})$ be a function which
vanishes only at $0$ with multiplicity $n+1$. Since $\log
|\sigmama'(z)|^2$ is harmonic in $\{ \sigmama' \not= 0\}$, the choice
$F(z)=\frac{\sigmama(z)-a}{\deltalta}$ yields to solutions
$$U_{\deltalta,a,\sigmama}(z)=\log \frac{8 \deltalta^2}{(\deltalta^2 +|\sigmama(z)-a|^2)^2}$$
of $-\displaystyleelta U= |\sigmama'(z)|^2 e^U$ in ${\Omega}ega \setminus
\{\hbox{poles of }\sigmama\}$, for $U_{\deltalta,a,\sigmama}$ is a smooth
function up to $\{\sigmama'=0\}$.
\noindent The idea is then to find a better local approximating function $PU_{\delta,a,\sigma}$ for
a suitable choice of $\sigma$, where $PU_{\delta,a,\sigma}$ does
solve
\begin{equation} \label{lll}
\left\{ \begin{array}{ll} -\Delta PU_{\delta,a,\sigma} =
|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} -\frac{1}{|\Omega|} \int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}& \hbox{in }\Omega\\
\int_\Omega PU_{\delta,a,\sigma}=0.&
\end{array} \right.
\end{equation}
Notice that $PU_{\delta,a,\sigma}$ is well-defined and smooth as
long as $\sigma \in \mathcal{M}(\overline{\Omega})$, regardless of whether
$\sigma$ has poles or not.
\noindent Recall that $G(z,0)$ can be thought of as a doubly-periodic function in $\mathbb{C}$ with singularities on the lattice vertices $\omega_1 \mathbb{Z}+\omega_2\mathbb{Z}$, and $H(z)=G(z,0)+\frac{1}{2\pi} \log |z|$ is then a smooth function in $2\Omega$ with $\Delta H=\frac{1}{|\Omega|}$. Since $2 \Omega$ is simply-connected, we can find a holomorphic function $H^*$ in $2 \Omega$ having the harmonic function $H-\frac{|z|^2}{4|\Omega|}$ as real part. Since $p_j \in \Omega$, take $\tilde \Omega$ close to $\Omega$ so that $\tilde \Omega-p_j \subset 2 \Omega$ for all $j=1,\dots, N$. The function
\begin{equation} \label{definitionH}
\mathcal{H}(z)= \prod_j (z-p_j)^{n_j} \hbox{exp} \left(
4\pi(n+1) H^*(z) -2\pi\sum_{j=1}^N
H^*(z-p_j)-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right)
\end{equation}
is holomorphic in $\tilde \Omega$ with
\begin{equation} \label{keyrelationH}
|\mathcal{H}(z)|^2=\frac{1}{|z|^{2n}} e^{u_0+8\pi(n+1)
H(z)}=e^{4\pi(n+2)H(z)-4 \pi \sum_j n_j G(z,p_j)} \qquad \hbox{in
}\tilde \Omega
\end{equation}
in view of \eqref{balance}. The meromorphic function $\mathcal{H}_0(z)=\frac{\mathcal{H}(z)}{z^{n+2}}$ does satisfy $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi(n+1)
G(z,0)}$ in $\tilde \Omega$.
\begin{rem} \label{1149} For simplicity in the notations, we are considering the case $p=0$. When $p \not=0$, by assuming $\tilde \Omega-p \subset 2\Omega$ the function
\begin{eqnarray*}
\mathcal{H}^p(z)&=& \prod_j (z-p_j)^{n_j} \hbox{exp} \left(
4\pi(n+1) H^*(z-p) +\frac{\pi(n+1)}{|\Omega|}|p|^2-\frac{2\pi(n+1)}{|\Omega|}z \bar p \right) \times\\
&&\times \hbox{exp} \left(-2\pi\sum_{j=1}^N
H^*(z-p_j)-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right)
\end{eqnarray*}
is holomorphic in $\tilde \Omega$ with
$$|\mathcal{H}^p(z)|^2=\frac{1}{|z-p|^{2n}} e^{u_0+8\pi(n+1)
H(z-p)}=e^{4\pi(n+2)H(z-p)-4 \pi \sum_j n_j G(z,p_j)} \qquad \hbox{in
}\tilde \Omega$$
in view of \eqref{balance}. The meromorphic function $\mathcal{H}_0^p(z)=\frac{\mathcal{H}^p(z)}{(z-p)^{n+2}}$ does satisfy $|\mathcal{H}^p_0(z)|^2=e^{u_0+8\pi(n+1)
G(z,p)}$ in $\tilde \Omega$.
\end{rem}
\noindent Hereafter, for a meromorphic function $g$ in $\tilde{\Omega}$ the notation $\int^z g(w)\, dw$ stands for the anti-derivative of $g(z)$, which is a well-defined meromorphic function in the simply-connected domain $\tilde{\Omega}$ as soon as $g$ has zero residues at each of its poles. Since $\mathcal{H}(0)\not=0$ by \eqref{keyrelationH}, we define
\begin{equation} \label{sigma0}
\sigma_0(z)=-\left(\int^z \mathcal{H}_0(w) e^{-c_0
w^{n+1}} dw \right)^{-1}=-\left(\int^z \frac{\mathcal{H}(w) e^{-c_0
w^{n+1}}}{w^{n+2}} dw \right)^{-1},
\end{equation}
where
\begin{equation} \label{c0}
c_0=\frac{1}{\mathcal{H}(0) (n+1)! }\frac{d^{n+1}
\mathcal{H}}{dz^{n+1}}(0)
\end{equation}
guarantees that the residue of $\mathcal{H}_0(z) e^{-c_0
z^{n+1}}$ at $0$ vanishes. By construction $\sigma_0
\in \mathcal{M}(\overline{\Omega})$ vanishes only at zero with
multiplicity $n+1$, as needed, with
\begin{equation} \label{0942}
\lim_{z \to 0}
\frac{z^{n+1}}{\sigma_0(z)}=\frac{\mathcal{H}(0)}{n+1},
\end{equation}
and does solve
\begin{equation} \label{eq sigma0}
|\sigma_0'(z)|^2= |\sigma_0(z)|^4 e^{u_0+8\pi(n+1)G(z,0)} e^{-2
\re [c_0 z^{n+1}]}
\end{equation}
in view of \eqref{keyrelationH}.
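\noindent For the reader's convenience, a short verification of why \eqref{c0} is the right normalization: the integrand in \eqref{sigma0} has a pole of order $n+2$ at the origin, so its residue there is the coefficient of $w^{n+1}$ in the Taylor expansion of $\mathcal{H}(w)e^{-c_0 w^{n+1}}$.

```latex
% Residue computation behind the choice of c_0 in \eqref{c0}: expanding
% the exponential to first order,
%   \mathcal{H}(w) e^{-c_0 w^{n+1}}
%     = \mathcal{H}(w) - c_0 w^{n+1}\mathcal{H}(w) + O(w^{2n+2}),
% the residue of the integrand in \eqref{sigma0} is
\[
\mathrm{Res}_{w=0}\,\frac{\mathcal{H}(w)\, e^{-c_0 w^{n+1}}}{w^{n+2}}
  = \frac{1}{(n+1)!}\frac{d^{n+1}\mathcal{H}}{dz^{n+1}}(0)
    - c_0\,\mathcal{H}(0),
\]
% which vanishes exactly for c_0 as in \eqref{c0}, making the
% anti-derivative single-valued in the simply-connected domain.
```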
\noindent Let $\sigma \in \mathcal{M}(\overline{\Omega})$ be a function which vanishes only at zero with multiplicity $n+1$. For $a \in \mathbb{C}$ small there exist $a_0,\dots,a_n$ so that $\{z \in \tilde{\Omega}: \, \sigma(z)=a \}=\{a_0,\dots,a_n\}$ (distinct points when $a \not=0$). For $a$ small the function
\begin{eqnarray}
\mathcal{H}_{a,\sigma}(z) &=& \prod_j (z-p_j)^{n_j} \hbox{exp}\left( 4\pi \sum_{k=0}^n H^*(z-a_k)-\frac{2\pi}{|\Omega|}z \overline{\sum_{k=0}^n a_k}-2\pi\sum_{j=1}^N H^*(z-p_j)\right. \label{Hasigma} \\
&&\left.-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right)
\nonumber
\end{eqnarray}
is holomorphic in $\tilde{\Omega}$ with
\begin{equation} \label{keyrelation}
|\mathcal{H}_{a,\sigma}(z)|^2=\frac{1}{|z|^{2n}} e^{u_0+8\pi
\sum_{k=0}^n H(z-a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2}
\qquad \hbox{in }\tilde{\Omega}
\end{equation}
in view of \eqref{balance}. The advantage of our construction of
$\mathcal{H}_{a,\sigma}$, which might be carried out in a
simpler and more direct way, is the holomorphic/anti-holomorphic
dependence on the $a_k$'s as well as on $z$, a crucial property as
we will see in Appendix A. When $a=0$, then $a_0=\dots=a_n=0$ and
$\mathcal{H}=\mathcal{H}_{0,\sigma}$.
\noindent Endowed with the norm $\|\sigma\|:=\| \frac{\sigma}{\sigma_0}\|_{\infty,\tilde{\Omega}}$, the set $\mathcal{M}'(\overline{\Omega})=\{ \sigma \in \mathcal{M}(\overline{\Omega}):\,\|\sigma\|<\infty\}$ is a Banach space. Let $\mathcal{B}_r$ be the closed ball centered at $\sigma_0$ with radius $r>0$, i.e.
\begin{equation} \label{setB}
\mathcal{B}_r=\bigg\{ \sigma \in
\mathcal{M}(\overline{\Omega}):\:\Big\|
\frac{\sigma}{\sigma_0}-1\Big\|_{\infty,\tilde{\Omega}} \leq r
\bigg\}.
\end{equation}
For $a\not=0$ and $r$ small, the aim is to find a solution
$\sigma_a \in \mathcal{B}_r$ of
$$\sigma(z)= -\left[ \int^z \left(\frac{\sigma(w)-a}{\prod_{k=0}^n (w-a_k)} \frac{w^{n+1}}{\sigma(w)} \right)^2 \frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}} e^{-c_{a,\sigma}w^{n+1}}dw\right]^{-1}$$
for a suitable coefficient $c_{a,\sigma}$. To be more precise,
letting
$$g_{a,\sigma}(z)=\frac{\sigma(z)-a}{\prod_{k=0}^{n}(z-a_k)}$$
for $|a|<\rho$ and $\sigma \in \mathcal{B}_r$, by Lemma
\ref{gomme} we have that $g_{a,\sigma} \in
\mathcal{M}(\overline{\Omega})$ never vanishes, and the problem
above gets re-written as
\begin{equation} \label{sigmaa}
\sigma(z)= -\left[ \int^z
\frac{g^2_{a,\sigma}(w)}{g^2_{0,\sigma}(w)}
\frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}}
e^{-c_{a,\sigma}w^{n+1}}dw\right]^{-1}.
\end{equation}
The choice
\begin{equation} \label{ca}
c_{a,\sigma}=\frac{1}{(n+1)!}\frac{d^{n+1}}{dz^{n+1}}\left[
\frac{g^2_{a,\sigma}(z) g^2_{0,\sigma}(0)}{g^2_{a,\sigma}(0)
g^2_{0,\sigma}(z)}
\frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0) }
\right](0)
\end{equation}
makes the residue of the integrand in
\eqref{sigmaa} vanish, so that the R.H.S. is well-defined. Since $\sigma_a \in
\mathcal{B}_r$, the function $\sigma_a$ vanishes only at zero with
multiplicity $n+1$, and satisfies
\begin{equation} \label{eq sigmaa}
|\sigma_a'(z)|^2= |\sigma_a(z)-a|^4 \hbox{exp}\left(u_0+8\pi
\sum_{k=0}^n G(z,a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n
|a_k|^2-2\re [c_{a,\sigma_a}z^{n+1}]\right)
\end{equation}
in view of \eqref{keyrelation}. The resolution of problem
\eqref{sigmaa}-\eqref{ca} will be addressed in Appendix A.
\noindent We have the following expansion for $PU_{\delta,a,\sigma}$ as
$\delta\to0$:
\begin{lem}\label{expPU} There holds
\begin{eqnarray} \label{1138}
PU_{\delta,a,\sigma}=U_{\delta,a,\sigma}-\log (8 \delta^2)+4 \log |g_{a,\sigma}|+8\pi \sum_{k=0}^n H(z-a_k)+\Theta_{\delta,a,\sigma}+2 \delta^2 f_{a,\sigma}+O(\delta^4)
\end{eqnarray}
in $C(\overline{\Omega})$, uniformly for $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, where
$$\Theta_{\delta,a,\sigma}=-\frac{1}{|\Omega|}\int_\Omega \log {|\sigma(z)-a|^4\over
(\delta^2+|\sigma(z)-a|^2)^2}$$ and $f_{a,\sigma}$ is defined in
\eqref{FaQ}. In particular, there holds
$$PU_{\delta,a,\sigma}=8\pi \sum_{k=0}^n G(z,a_k)+\Theta_{\delta,a,\sigma}+2\delta^2 \left(f_{a,\sigma}-{1\over |\sigma(z)-a|^2}\right)+O(\delta^4)$$
in $C_{\text{loc}}(\overline{\Omega} \setminus\{0 \})$, uniformly for $|a|< \rho$ and $\sigma \in \mathcal{B}_r$.
\end{lem}
\begin{proof}
Define
$$r_{\delta,a,\sigma}=PU_{\delta,a,\sigma}-U_{\delta,a,\sigma}+\log
(8 \delta^2)-4 \log |g_{a,\sigma}|-8\pi \sum_{k=0}^n H(z-a_k).$$ The function $U_{\delta,a,\sigma}$ does satisfy $-\Delta U_{\delta,a,\sigma}=|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}$ just in $\Omega \setminus \{\hbox{poles of }\sigma\}$. At the same time, the function $-4\log |g_{a,\sigma}|$ is harmonic in $\Omega \setminus \{\hbox{poles of }\sigma\}$, and has exactly the same singular behavior as $U_{\delta,a,\sigma}$ near each pole of $\sigma$. It follows that
\begin{equation} \label{yth}
-\Delta \left[U_{\delta,a,\sigma}+4\log |g_{a,\sigma}|\right]=|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}
\end{equation}
does hold in the whole $\Omega$. Since $\Delta H=\frac{1}{|\Omega|}$, by (\ref{lll}) and (\ref{yth}) we get that
\begin{eqnarray*}
-\Delta r_{\delta,a,\sigma}= \frac{1}{|\Omega|}\left[ 8\pi(n+1)-\int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}\right] .
\end{eqnarray*}
By the Green's representation formula we have that
\begin{eqnarray} \label{repr}
r_{\delta,a,\sigma}(z)=\frac{1}{|\Omega|}\int_\Omega r_{\delta,a,\sigma}+\int_{\partial \Omega}[\partial_\nu r_{\delta,a,\sigma}(w) G(w,z)-r_{\delta,a,\sigma}(w) \partial_\nu G(w,z)]ds(w),
\end{eqnarray}
where $\nu$ is the unit outward normal of $\partial \Omega$ and $ds(w)$ is the line element. Since as $\delta \to 0$ there holds
$$r_{\delta,a,\sigma}(w)=PU_{\delta,a,\sigma}(w)-8\pi \sum_{k=0}^n G(w,a_k) +2 \frac{\delta^2}{|\sigma(w)-a|^2}+O(\delta^4)$$
in $C^1(\partial \Omega)$ uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, by the double-periodicity of $PU_{\delta,a,\sigma}-8\pi \displaystyle \sum_{k=0}^n G(\cdot,a_k)$ we get that
\begin{eqnarray} \label{repr1}
\int_{\partial \Omega}[\partial_\nu r_{\delta,a,\sigma}(w) G(w,z)-r_{\delta,a,\sigma}(w) \partial_\nu G(w,z)]ds(w)=
2 \delta^2 f_{a,\sigma}(z) +O(\delta^4)
\end{eqnarray}
in $C(\bar \Omega)$, where
\begin{eqnarray}\label{FaQ}
f_{a,\sigma}(z)=\int_{\partial \Omega}\Big[\partial_\nu \frac{1}{|\sigma(w)-a|^2} G(w,z)-\frac{1}{|\sigma(w)-a|^2} \partial_\nu G(w,z)\Big]ds(w).\end{eqnarray}
Inserting \eqref{repr1} into \eqref{repr} we get that
\begin{eqnarray} \label{repr2}
r_{\delta,a,\sigma}(z)=\Theta_{\delta,a,\sigma}+2 \delta^2 f_{a,\sigma}(z) +O(\delta^4)
\end{eqnarray}
in $C(\overline{\Omega})$ uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, where
$$\Theta_{\delta,a,\sigma}:=\frac{1}{|\Omega|}\int_\Omega r_{\delta,a,\sigma}=-\frac{1}{|\Omega|}\int_\Omega \log {|\sigma(z)-a|^4\over
(\delta^2+|\sigma(z)-a|^2)^2}.$$
The estimate \eqref{repr2} yields the desired expansion for $PU_{\delta,a,\sigma}$ as $\delta \to 0$. \qed \end{proof}
\noindent Letting $\sigma_a \in \mathcal{B}_r$ be the solution of \eqref{sigmaa}-\eqref{ca}, we build up the correct approximating function as $W=PU_{\delta,a,\sigma_a}$. We need to control the approximation rate of $W$ for $\delta$ and $\epsilon$ small enough, by estimating the error term
\begin{eqnarray}\label{R}
R&=&\Delta W+4\pi N\left(\frac{e^{u_0+W}}{\int_\Omega
e^{u_0+W}}-\frac{1}{|\Omega|}\right)\\
&&+ \frac{64 \pi^2N^2 \epsilon^2 \int_\Omega
e^{2u_0+2W}}{\left(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi
N\epsilon^2\int_\Omega
e^{2u_0+2W}}\right)^2}\left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{
e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right).\nonumber
\end{eqnarray}
In order to simplify the notation, we set $U_{\delta,a}=U_{\delta,a,\sigma_a}$, $c_a=c_{a,\sigma_a}$, $\Theta_{\delta,a}=\Theta_{\delta,a,\sigma_a}$, $f_a=f_{a,\sigma_a}$, and omit the
subscript $a$ in $\sigma_a$. We have the following crucial result.
\begin{thm}\label{estrr01550} Let $|a|<\frac{\rho}{2}$ and set
\begin{eqnarray} \label{rateeps}
\eta=\epsilon^2 \delta^{-\frac{2}{n+1}} \max\Big\{1, \frac{|a|}{\delta}\Big\}^{\frac{2n}{n+1}}.
\end{eqnarray}
The following expansions do hold:
\begin{eqnarray}
&&\Delta W+4\pi N\left( \frac{ e^{u_0+W}}{\int_\Omega
e^{u_0+W}}-\frac{1}{|\Omega|}\right) \nonumber\\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \nonumber \\
&&\quad +|\sigma'(z)|^2 e^{U_{\delta,a}}O(\delta^2 |z| +\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}})+O(\delta^2)
\label{imp}
\end{eqnarray}
and
\begin{eqnarray}
&&\ds \frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega
e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi
N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} \left(\frac{
e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) \nonumber \\
&&= |\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}}
\delta^{\frac{2}{n+1}}} E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2
e^{U_{\delta,a}}\right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1)
\right] \label{eps4}
\end{eqnarray}
as $\epsilon, \delta \to 0$, where $\alpha_a$, $F_a$, $G_a$, $D_a$, $E_{a,\delta}$ are given in
\eqref{alpha0}, \eqref{FG}, \eqref{Da}, \eqref{Eadelta},
respectively.
\end{thm}
\begin{proof} Recall that \eqref{sigmaa} implies the validity of \eqref{eq sigmaa}, which, combined with Lemma \ref{expPU}, yields the following crucial estimate:
\begin{eqnarray} \label{cinema}
W=U_{\delta,a}-\log (8 \delta^2)+\log |\sigma'(z)|^2-u_0+\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\re [c_a z^{n+1}]+\Theta_{\delta,a}+2\delta^2 f_a+O(\delta^4)
\end{eqnarray}
in $C(\overline{\Omega})$ as $\delta \to 0$, uniformly for $|a|< \rho$. Since by Lemma \ref{gomme} $\sigma=q^{n+1}$ in $\sigma^{-1}(B_\rho(0))$, through the change of variables
$y=q(z)$ in $\sigma^{-1}(B_\rho(0))=q^{-1}(B_{\rho^{\frac{1}{n+1}}}(0))$, by (\ref{cinema}) we have that
\begin{eqnarray}
&& \frac{8 \delta^2} {e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_{\rho}(0))} e^{u_0+W}
= \int_{q^{-1}(B_{\rho^{\frac{1}{n+1}}}(0))}
|\sigma'(z)|^2 e^{U_{\delta,a}+2\re[c_a z^{n+1}] +O(\delta^2|z|+\delta^4)} \nonumber \\
&&= \int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{8(n+1)^2 \delta^2 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^2}
e^{2\re[c_a (q^{-1}(y))^{n+1}] +O(\delta^2 |y|+\delta^4)}. \label{1045}
\end{eqnarray}
Since $q^{-1}(y)\sim y $ at $y=0$, the following Taylor expansion does hold
\begin{equation} \label{keyexp}
e^{c_a(q^{-1}(y))^{n+1}}=1+c_a y^{n+1} \sum_{k=0}^{+\infty} \alpha_a^k y^k
\end{equation}
in $B_{\rho^{\frac{1}{n+1}}}(0)$, where the coefficients $\alpha_a^k$ depend on $a$ through $\sigma=\sigma_a$. In particular, we have that $\alpha_a:=\alpha_a^0$ takes the form
\begin{eqnarray} \label{alpha0}
\alpha_a=\displaystyle \lim_{z \to 0}\frac{z^{n+1}}{\sigma(z)} \not=0.
\end{eqnarray}
By \eqref{keyexp} we then deduce that
\begin{equation} \label{keyexp1}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}=\big|e^{c_a (q^{-1}(y))^{n+1}}\big|^2=1+2\re\bigg[c_a y^{n+1} \sum_{k=0}^{+\infty} \alpha_a^k y^k\bigg]+|c_a|^2 |y|^{2n+2}\sum_{k,s=0}^{+\infty} \alpha_a^k \overline{\alpha}_a^s y^k \overline{y}^s.
\end{equation}
Since
$$\sum_{j=0}^n [e^{i\frac{2\pi}{n+1}j}]^k=\sum_{j=0}^n e^{i\frac{2\pi}{n+1}jk}=0$$
for all integers $k\notin (n+1)\mathbb{N}$, by the change of variables $y \to e^{i\frac{2\pi}{n+1}j} y$ we have that
\begin{eqnarray}
\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2}
&=&\sum_{j=0}^n \int_{B_{\rho^{\frac{1}{n+1}}}(0)\cap C_j} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2} \nonumber\\
&=& \int_{B_{\rho^{\frac{1}{n+1}}}(0)\cap C_0} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2} \sum_{j=0}^n [e^{i\frac{2\pi}{n+1}j}]^k=0 \label{symmetryint}
\end{eqnarray}
for all $m\geq 0$ and all integers $k \notin (n+1)\mathbb{N}$, where $C_j$ is the sector of the plane between the directions $e^{i\frac{2\pi}{n+1}j}$ and $e^{i\frac{2\pi}{n+1}(j+1)}$. Formula \eqref{symmetryint} tells us that many terms of the expansion \eqref{keyexp1} give no contribution when inserted in an integral formula like \eqref{1045}. Using the notation $\dots$ to denote such terms, we can rewrite \eqref{keyexp1} as
\begin{eqnarray} \label{keyexp2}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}&=&1+2\re\bigg[c_a \sum_{k=0}^{+\infty} \alpha_a^{k(n+1)} y^{(k+1)(n+1)}\bigg]+|c_a|^2 |y|^{2n+2} \sum_{k=0}^{+\infty} |\alpha_a^k|^2 |y|^{2k} \\
&&+2|c_a|^2 |y|^{2n+2}\re \bigg[\sum_{k=0}^{+\infty}\sum_{m=1}^{+\infty} \overline{\alpha}_a^k \alpha_a^{k+m(n+1)} |y|^{2k} y^{m(n+1)}\bigg]+\dots \nonumber
\end{eqnarray}
Setting
\begin{equation} \label{FG}
F_a(y)=\sum_{k=0}^{+\infty} \alpha_a^{k(n+1)} y^{k+1},\quad G_a(y)=|y|^2 \left[2 \sum_{k=0}^{+\infty}\sum_{m=1}^{+\infty} \overline{\alpha}_a^k \alpha_a^{k+m(n+1)} |y|^{\frac{2k}{n+1}} y^m+\sum_{k=0}^{+\infty} |\alpha_a^k|^2 |y|^{\frac{2k}{n+1}} \right],
\end{equation}
through the change of variables $y \to y^{n+1}$ we can re-write \eqref{1045} as
\begin{eqnarray}
&&\hspace{-1.1cm} \frac{8 \delta^2}{(n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_\rho(0))} e^{u_0+W} \nonumber\\
&&\hspace{-1.1cm}= \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2}
\left(1+\re[2 c_a F_a(y)+|c_a|^2 G_a(y)]+O(\delta^2 |y|^{\frac{1}{n+1}}+\delta^4)\right)\nonumber\\
&&\hspace{-1.1cm} = 8\pi-\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{8\delta^2}{|y|^4}+
\int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2}
\re[2 c_a F_a(y)+|c_a|^2 G_a(y)]+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^{\frac{2n+3}{n+1}}). \label{1046}
\end{eqnarray}
Since $|a|<\frac{\rho}{2}$ and $F_a$ is holomorphic in $B_{\frac{\rho}{2}}(a) \subset B_\rho(0)$, we can expand $F_a$ in a power series around $y=a$:
\begin{eqnarray} \label{expF}
F_a(y)=\sum_{k=0}^\infty \frac{F_a^{(k)}(a)}{k!} (y-a)^k,
\end{eqnarray}
and then get
\begin{eqnarray}
2 \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re[c_a F_a(y)]&=&
2 \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re[c_a F_a(y)]+O(\delta^2|c_a|)\nonumber\\
&=& 16\pi \re[c_a F_a(a)] +O(\delta^2|c_a|)
\label{1047}
\end{eqnarray}
in view of
$$\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)^k}{(\delta^2+|y-a|^2)^2}=0$$
for all integers $k\geq 1$. The map $\re G_a$ is just of class $C^{2+\frac{2}{n+1}}(B_\rho(0))$ and can be expanded up to second order around $y=a$:
\begin{eqnarray}\label{expG}
\re G_a(y)=\re G_a(a)+\langle\nabla \re G_a(a),y-a\rangle+\frac{1}{2}\langle D^2 \re G_a(a) (y-a),y-a\rangle+O(|y-a|^{\frac{2(n+2)}{n+1}})
\end{eqnarray}
for $y \in B_{\frac{\rho}{2}}(a)$, yielding
\begin{eqnarray}
&& |c_a|^2 \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re G_a(y)= |c_a|^2 \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re G_a(y)+O(\delta^2|c_a|^2) \nonumber \\
&&=8\pi |c_a|^2 \re G_a(a)+\frac{|c_a|^2}{4} \Delta \re G_a(a) \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} |y-a|^2+O(\delta^2|c_a|^2) \nonumber\\
&&=8\pi |c_a|^2 \re G_a(a)+4\pi |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta} +O(\delta^2|c_a|^2)
\label{1048}
\end{eqnarray}
in view of
\begin{eqnarray*}
&&\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_1}{(\delta^2+|y-a|^2)^2}=\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_2}{(\delta^2+|y-a|^2)^2}=
\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_1(y-a)_2}{(\delta^2+|y-a|^2)^2}=0\\
&&\int_{B_{\frac{\rho}{2}} (a)} \frac{(y-a)_1^2}{(\delta^2+|y-a|^2)^2}=\int_{B_{\frac{\rho}{2}} (a)} \frac{(y-a)_2^2}{(\delta^2+|y-a|^2)^2}=
\frac{1}{2}\int_{B_{\frac{\rho}{2}}(a)} \frac{|y-a|^2}{(\delta^2+|y-a|^2)^2}.
\end{eqnarray*}
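\noindent The moment identities above are immediate in polar coordinates centered at $a$, as the following short check shows:

```latex
% With y - a = s e^{i\theta}, the radial and angular integrals
% factorize over the disk B_{\rho/2}(a); the first line reduces to
\[
\int_0^{2\pi}\cos\theta\,d\theta=\int_0^{2\pi}\sin\theta\,d\theta
  =\int_0^{2\pi}\cos\theta\,\sin\theta\,d\theta=0,
\]
% while the second line follows from
\[
\int_0^{2\pi}\cos^2\theta\,d\theta
  =\int_0^{2\pi}\sin^2\theta\,d\theta=\pi .
\]
```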
By inserting \eqref{1047} and \eqref{1048} into \eqref{1046} we get that
\begin{eqnarray}
&&\frac{8\delta^2}{(n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_\rho(0))} e^{u_0+W}\nonumber\\
&&= 8\pi-\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{8\delta^2}{|y|^4}
+16\pi \re[c_a F_a(a)] +8\pi |c_a|^2 \re G_a(a)+4 \pi |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta} \nonumber \\
&&+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}}).\label{1049}
\end{eqnarray}
By Lemma \ref{expPU}, \eqref{1049} and Lemma \ref{gomme} we get that
\begin{eqnarray}
&&\frac{ \delta^2}{\pi (n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2f_{a}(0)}}\int_\Omega e^{u_0+W} = 1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a) \nonumber \\
&&+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}}), \label{const}
\end{eqnarray}
where
\begin{equation} \label{Da}
\pi D_a=\int_{\Omega \setminus \sigma^{-1}(B_\rho(0))} e^{u_0+8\pi \sum_{k=0}^n G(z,a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2} -\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{n+1}{|y|^4}.
\end{equation}
In view of (\ref{balance}) and $\int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a}}=8\pi(n+1)+O(\delta^2)$, by (\ref{cinema}) and (\ref{const}) we have that
\begin{eqnarray*}
&&\Delta W+4\pi N\left( \frac{ e^{u_0+W}}{\int_\Omega
e^{u_0+W}}-\frac{1}{|\Omega|}\right) \\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[4\pi N \frac{e^{2\re[c_a z^{n+1}]+O(\delta^2 |z|+\delta^4)}
}{8 \delta^2 e^{-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2-\Theta_{\delta,a}-2 \delta^2 f_a
(0)} \int_\Omega e^{u_0+W}}-1 \right] +\frac{1}{|\Omega|}\left(\int_\Omega |\sigma'(z)|^2
e^{U_{\delta,a}}-4\pi N \right)\\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \\
&& +|\sigma'(z)|^2 e^{U_{\delta,a}}O(\delta^2 |z| +\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}})+O(\delta^2)
\end{eqnarray*}
as $\delta \to 0$, yielding the validity of \eqref{imp}.
\noindent Introducing the notation $B(w)=16\pi N (\int_\Omega
e^{2u_0+2w})(\int_\Omega e^{u_0+w})^{-2}$, we can write the following
expansion:
\begin{equation} \label{BWW}
\frac{16 \pi N \int_\Omega e^{2u_0+2W}}{(\int_\Omega
e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega
e^{2u_0+2W}})^2}={B(W) \over 4}+O(\epsilon^2 B^2(W)).
\end{equation}
Arguing as for (\ref{const}), the change of variables $y=\sigma(z)$ yields
\begin{eqnarray}
&& {64 \delta^{4+\frac{2}{n+1}}\over
e^{\frac{4\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\Theta_{\delta,a}}}\int_\Omega e^{2u_0+2W}=\delta^{\frac{2}{n+1}}\int_{\sigma^{-1} (B_\rho(0))} |\sigma'(z)|^4 e^{2U_{\delta,a} +O(|c_a||z|^{n+1}+\delta^2)}+O(\delta^{4+\frac{2}{n+1}})\nonumber \\
&&= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y|^{\frac{2n}{n+1}}}{(\delta^2+|y-a|^2)^4}\left(1+O(|c_a||y|+\delta^2+|y|^{\frac{1}{n+1}})\right)+O(\delta^{4+\frac{2}{n+1}}) \nonumber \\
&&= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}\left(1+O(\delta^2+|y|^{\frac{1}{n+1}}+|a|^{\frac{1}{n+1}})\right)+O(\delta^{4+\frac{2}{n+1}}) \label{const1}
\end{eqnarray}
in view of
\begin{eqnarray} \label{1852}
|\sigma'(z)|^2=(n+1)^2 |\alpha_a|^{-2} |z|^{2n}(1+O(|z|))=(n+1)^2 |\alpha_a|^{-\frac{2}{n+1}} |\sigma(z)|^{\frac{2n}{n+1}}(1+O(|\sigma(z)|^{\frac{1}{n+1}})),
\end{eqnarray}
where $\alpha_a$ is given by \eqref{alpha0}. We have that
$$\int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}=
\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4} +O(\delta^{4+\frac{2}{n+1}}) $$
if $|a|=O(\delta)$ and
$$\int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}=
\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}} \int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^4}\left[1 +O\Big(\frac{\delta}{|a|}+\delta^6\Big)\right]$$
if $|a|\gg\delta$, where in the latter case we have used the inequality
$$|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+O(|a|^{\frac{n-1}{n+1}}|y|+|y|^{\frac{2n}{n+1}}).$$
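\noindent The displayed inequality can be justified, for instance, by the following elementary bound, recorded here for completeness:

```latex
% For p = 2n/(n+1) \in [1,2) and s,t \ge 0, the mean value theorem
% applied to \tau \mapsto \tau^p gives
%   |(s+t)^p - s^p| \le p\,t\,(s+t)^{p-1} \le C\,(s^{p-1} t + t^p),
% since (s+t)^{p-1} \le 2^{p-1}(s^{p-1}+t^{p-1}).  Taking s = |a|,
% t = |y| and using the triangle inequality ||y+a|-|a|| \le |y|,
\[
|y+a|^{\frac{2n}{n+1}}
  = |a|^{\frac{2n}{n+1}}
    + O\big(|a|^{\frac{n-1}{n+1}}|y| + |y|^{\frac{2n}{n+1}}\big).
\]
```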
Setting
\begin{eqnarray} \label{Eadelta}
E_{a,\delta}:= \left\{\begin{array}{ll}
\ds\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4} &\hbox{if }|a|=O(\delta)\\ \\
\ds\frac{\pi}{3} \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}} &\hbox{if }|a|\gg\delta, \end{array} \right.
\end{eqnarray}
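\noindent (The constant $\frac{\pi}{3}$ in the second line comes from an elementary radial computation:)

```latex
% Pass to polar coordinates and substitute t = 1 + s^2, so s ds = dt/2:
\[
\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}
  = 2\pi \int_0^{+\infty} \frac{s\,ds}{(1+s^2)^4}
  = \pi \int_1^{+\infty} \frac{dt}{t^4}
  = \frac{\pi}{3}.
\]
```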
by (\ref{const1}) we get that
\begin{eqnarray}
{64 \delta^{4+\frac{2}{n+1}}\over
e^{\frac{4\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\Theta_{\delta,a}}}\int_\Omega e^{2u_0+2W}= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} (1+o(1)) E_{a,\delta}.\label{const1517}
\end{eqnarray}
Since, by combining (\ref{const}) and (\ref{const1517}) and in view of (\ref{balance}), for $B(W)$ we have
that
\begin{equation} \label{BW}
B(W)=32 \frac{(n+1)^2}{ \pi \delta^{\frac{2}{n+1}}}|\alpha_a|^{-\frac{2}{n+1}}(1+o(1))E_{a,\delta},
\end{equation}
by \eqref{BWW} and \eqref{BW} we get that
\begin{eqnarray} \label{eps0}
\frac{16 \pi N \int_\Omega e^{2u_0+2W}}{(\int_\Omega
e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega
e^{2u_0+2W}})^2} =8 \frac{(n+1)^2}{ \pi \delta^{\frac{2}{n+1}}}|\alpha_a|^{-\frac{2}{n+1}} (1+o(1)+O(\eta)) E_{a,\delta},
\end{eqnarray}
where $\eta$ is given by \eqref{rateeps}. As we have already seen in deriving (\ref{imp}), by (\ref{cinema}) we have that
\begin{eqnarray}
4\pi N\frac{ e^{u_0+W}}{\int_\Omega
e^{u_0+W}}=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[1+O(|c_a| |z|^{n+1})+O(|c_a||a|+\delta^2 |\log \delta| ) \right],
\label{eps1}
\end{eqnarray}
and in a similar way one can show that
\begin{eqnarray}
{64(n+1)^3\over \delta^{\frac{2}{n+1}}} |\alpha_a|^{-\frac{2}{n+1}} \frac{e^{2u_0+2W} }{\int_\Omega e^{2u_0+2W} }
E_{a,\delta}= |\sigma'(z)|^4 e^{2U_{\delta,a}}\left[1+ O(|c_a||z|^{n+1})+o(1)\right]
\label{eps2}
\end{eqnarray}
in view of (\ref{const1517}). In conclusion, by (\ref{eps0})-(\ref{eps2}) we have for the
$\epsilon^2-$term in $R$ that
\begin{eqnarray*}
&&\ds \frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega
e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi
N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} \left(\frac{
e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) \\
&&= |\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta}-\epsilon^2
|\sigma'(z)|^2 e^{U_{\delta,a}}\right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right]
\end{eqnarray*}
in view of (\ref{balance}), yielding the validity of \eqref{eps4}. This completes the proof. \qed
\end{proof}
\noindent Let us introduce the following weighted norm:
\begin{equation}\label{wn}
\| h \|_*=\sup_{z\in \Omega}
\frac{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}\; |h(z)|
\end{equation}
for any $h\in L^\infty(\Omega)$, where $0<\gamma<1$ is a small fixed
constant. We have that
\begin{cor}\label{estrr0cor}
There exist positive constants $\delta_0$, $\epsilon_0$ and $C_0$ such
that
\begin{equation}\label{ere}
\|R\|_*\le C_0 \left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2\right)
\end{equation}
for any $\delta \in (0,\delta_0)$ and $\epsilon\in(0,\epsilon_0)$, where
$\eta$ is given by \eqref{rateeps}.
\end{cor}
\begin{proof}
Since
\begin{eqnarray*}
&&\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\\
&&=\frac{e^{2\re[c_a z^{n+1}]}-1}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-2 \re[c_a F_a(a)]\\
&&+O(|c_a|^2|a|^2+\delta^2 |\log \delta|)=2\re[c_a (z^{n+1}-\alpha_a a)]+O(|c_a|^2 |z|^{2n+2}+ |c_a| |a|^2+\delta^2 |\log \delta|)\\
&&=2\re[\alpha_a c_a (\sigma(z)-a)]+O(|c_a| |z|^{n+2}+ |c_a| |a|^2+\delta^2 |\log \delta|),
\end{eqnarray*}
by Theorem \ref{estrr01550} we deduce that
\begin{eqnarray*}
R = |\sigma'(z)|^2 e^{U_{\delta,a}}O\left(|c_a||\sigma(z)-a|+|c_a||z|^{n+2}+ |c_a||a|^2+\delta^2|\log \delta|+\eta +\eta^2 \right) +\epsilon^2 |\sigma'(z)|^4 e^{2U_{\delta,a}}(1+O(\eta))+ O(\delta^2)
\end{eqnarray*}
as $\delta \to 0$, where $\eta$ is given in \eqref{rateeps}. In view of the estimates $|z|=O(|\sigma(z)|^{\frac{1}{n+1}})$ and $|\sigma'(z)|^2=O(|\sigma(z)|^{\frac{2n}{n+1}})$ near $0$, by setting $y=\sigma(z)$ in $\sigma^{-1}(B_\rho(0))$ we get that
\begin{eqnarray*}
\|R\|_*&=&O\left(\sup_{y \in B_\rho(0)} \frac{\delta^{2-\gamma}}{(\delta^2+|y-a|^2)^{1-\frac{\gamma}{2}}}\left[|c_a||y-a|+|c_a||y|^{\frac{n+2}{n+1}} +|c_a||a|^2+\delta^2 |\log \delta|+\eta +\eta^2 \right]\right)\\
&&+O\left( \sup_{y \in B_\rho(0)} \frac{\epsilon^2 \delta^{4-\gamma}|y|^{\frac{2n}{n+1}}}{(\delta^2+|y-a|^2)^{3-\frac{\gamma}{2}}} [1+O(\eta)]\right)+O\left( \sup_{y \in B_\rho(0)} \frac{\delta^{2-\gamma} (\delta^2+|y-a|^2)^{1+\gamma/2}}{(|y|^{\frac{2n}{n+1}}+\delta^{\frac{2n}{n+1}})} \right) +O(\delta^{2-\gamma})\\
&=& O\left(\sup_{y \in B_{2\rho/\delta}(0)} \frac{1}{(1+|y|^2)^{1-\frac{\gamma}{2}}}\left[\delta |c_a||y|+\delta^{\frac{n+2}{n+1}} |c_a| |y|^{\frac{n+2}{n+1}}+|c_a||a|^{\frac{n+2}{n+1}}+\delta^2 |\log \delta|+\eta +\eta^2 \right]\right)\\
&&+O\left( \sup_{y \in B_{2\rho/\delta}(0)} \frac{\epsilon^2 \delta^{-2}(\delta^{\frac{2n}{n+1}}|y|^{\frac{2n}{n+1}}+|a|^{\frac{2n}{n+1}})}{(1+|y|^2)^{3-\frac{\gamma}{2}}}[1+O(\eta)] \right)\\
&&+O\left( \sup_{y \in B_{\rho/\delta}(0)} \frac{\delta^{\frac{2}{n+1}-\gamma} (\delta^{2+\gamma}+|a|^{2+\gamma}+\delta^{2+\gamma} |y|^{2+\gamma})}{(|y|^{\frac{2n}{n+1}}+1)} \right)+O(\delta^{2-\gamma})\\
&=& O\left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2 \right)
\end{eqnarray*}
as claimed. \qed \end{proof}
\section{The reduced equations}\label{reduced}
As we will discuss in detail in the next section, it will be
crucial to study the system $\int_\Omega R \, PZ_0=0$ and $\int_\Omega R\, PZ=0$, where
$PZ_0$ and $PZ$ are the unique solutions with zero average of
$\Delta PZ_{0} =\Delta Z_{0}-\frac{1}{|\Omega|}\int_\Omega \Delta Z_0$
and $\Delta PZ =\Delta Z-\frac{1}{|\Omega|}\int_\Omega \Delta Z$ in
$\Omega$. Here, the functions $Z_0$ and $Z$ are defined as
follows:
$$Z_0(z)=\frac{\delta^2-|\sigma(z)-a|^2}{\delta^2+|\sigma(z)-a|^2}\quad\text{and}\quad
Z(z)= \frac{\delta(\sigma(z)-a)}{\delta^2+|\sigma(z)-a|^2},$$
and are (not doubly-periodic) solutions of $-\Delta \phi =|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} \phi$ in $\Omega$. Through the changes of variables $y=\sigma(z)$ and $y \to \frac{y-a}{\delta}$, notice that
\begin{eqnarray}
\int_\Omega \Delta Z_0&=&-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2e^{U_{\delta,a,\sigma}} Z_0+O(\delta^2) =-8(n+1) \delta^2 \int_{B_\rho(0)}\frac{\delta^2-|y-a|^2}{(\delta^2+|y-a|^2)^3} +O(\delta^2)\nonumber \\
&=&-8(n+1) \int_{B_{\rho/\delta}(0)}\frac{1-|y|^2}{(1+|y|^2)^3} +O(\delta^2)=O(\delta^2) \label{deltaZ0}
\end{eqnarray}
and
\begin{eqnarray}
\int_\Omega \Delta Z&=&-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} Z+O(\delta^3) =-8(n+1) \delta^3 \int_{B_\rho(0)}\frac{y-a}{(\delta^2+|y-a|^2)^3} +O(\delta^3) \nonumber \\
&=&-8(n+1) \int_{B_{\rho/\delta}(0)}\frac{y}{(1+|y|^2)^3} +O(\delta^3)=O(\delta^3) \label{deltaZ}
\end{eqnarray}
in view of
$$\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3}=0\,,\quad \int_{\mathbb{R}^2} \frac{y}{(1+|y|^2)^3}=0.$$
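\noindent Both identities follow by an explicit radial computation: the second integrand is odd, while for the first one we have:

```latex
% Polar coordinates and the substitution t = 1 + s^2 (s ds = dt/2) give
\[
\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3}
  = 2\pi \int_0^{+\infty} \frac{(1-s^2)\,s\,ds}{(1+s^2)^3}
  = \pi \int_1^{+\infty} \frac{2-t}{t^3}\,dt
  = \pi \left[\frac{1}{t}-\frac{1}{t^2}\right]_1^{+\infty} = 0 .
\]
```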
By \eqref{deltaZ0}-\eqref{deltaZ} the following expansions, useful in the sequel, are easily deduced:
\begin{equation}\label{pzij}
PZ_{0}=Z_{0} - {1\over|\Omega|}\int_\Omega Z_{0}+O(\delta^2)\:,\qquad
PZ=Z-{1\over|\Omega|}\int_\Omega Z+O(\delta)
\end{equation}
in $C(\overline{\Omega})$, uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$.
\noindent Notice that up to now there is no relation between $a$ and $\deltalta$. However, as we will show in Remarks \ref{remark1} and \ref{remark2}, the range $|a|>>\deltalta$ is not compatible with solving simultaneously $\int_{\Omega}ega R \, PZ_0=0$ and $\int_{\Omega}ega R\, PZ=0$. Hence, we shall restrict our attention to the case $a=O(\deltalta)$ in next sections, so that, we can assume that $\epsilonta=\epsilonpsilon^2 \deltalta^{-\frac{2}{n+1}}$ in \epsilonqref{rateeps} and $E_{a,\deltalta}=\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\deltalta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4}$ in \epsilonqref{Eadelta}. We have that
\begin{prop} \label{reducedequations}
Assume $|a|\leq C_0 \delta$ for some $C_0>0$. The following
expansions hold as $\delta,\eta\to 0$:
\begin{eqnarray} \label{solve1b}
\int_{\Omega} R \, PZ_0&=&- 16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a +64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \nonumber \\
&&+o(\delta^2+\eta)+O(\delta^2 |c_a|+|a|^{\frac{1}{n+1}}\delta^2 |\log \delta|+\eta^2),
\end{eqnarray}
and
\begin{eqnarray} \label{solve2b}
\int_{\Omega} R \, PZ = 4 \pi (n+1) \delta \overline{\alpha_a c_a}
-64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5}+o(\delta|c_a|+\delta|a|+\eta+\delta^2)+O(\eta^2),
\end{eqnarray}
where $\eta=\epsilon^2 \delta^{-\frac{2}{n+1}}$ and $c_a=c_{a,\sigma_a}$, $\alpha_a$, $D_a$ are given by \eqref{ca}, \eqref{alpha0}, \eqref{Da}, respectively.
\end{prop}
\begin{proof} Through the changes of variables $y=q(z)$ in $\sigma^{-1}(B_\rho(0))$, $y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$ we get that
\begin{eqnarray}
&&\int_{\Omega} \frac{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}=\int_{\sigma^{-1}(B_\rho(0))} \frac{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}+O(\delta^\gamma) \label{1458} \\
&&=O\left(\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{\delta^{\gamma} (|y|^{2n}+\delta^{\frac{2n}{n+1}})}{(\delta^2+|y^{n+1}-a|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma)=O\left(\int_{B_\rho(0)} \frac{\delta^{\gamma} (1+\delta^{\frac{2n}{n+1}} |y|^{-\frac{2n}{n+1}})}{(\delta^2+|y-a|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma) \nonumber \\
&&=O\left(\int_{B_{\rho/\delta}(0)} \frac{1+ |y+\frac{a}{\delta}|^{-\frac{2n}{n+1}}}{(1+|y|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma)=O(1) \nonumber
\end{eqnarray}
in view of
$$\int_{B_{\rho/\delta}(0)} \frac{|y+\frac{a}{\delta}|^{-\frac{2n}{n+1}}}{(1+|y|^2)^{1+\frac{\gamma}{2}}}
\leq \int_{B_1(0)} |y|^{-\frac{2n}{n+1}}+\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^{1+\frac{\gamma}{2}}}<+\infty.$$
Hence, by Corollary \ref{estrr0cor} we get that
\begin{eqnarray} \label{int|R|}
\int_{\Omega} |R|= O\left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2 \right).
\end{eqnarray}
By \eqref{pzij} and \eqref{int|R|} we deduce that
\begin{eqnarray} \label{zerotermbisb}
\int_{\Omega} R \, PZ_0=\int_{\Omega} R (Z_0+1)+o(\delta^2)+O(\eta \delta^2+\eta^2 \delta^2)
\end{eqnarray}
in view of $\int_{\Omega} R=0$. Since by the H\"older inequality
\begin{eqnarray*}
\int_{\Omega} |Z_0+1|&=& \int_{\sigma^{-1}(B_\rho(0))} \frac{2
\delta^2}{\delta^2+|\sigma(z)-a|^2}+O(\delta^2)=
O\bigg(\int_{B_\rho(0)} |y|^{-{2n \over n+1}} \frac{\delta^2}{\delta^2+|y-a|^2}\bigg)+O(\delta^2)\\
&=& O\bigg(\delta^{1\over n+1} \int_{B_\rho(0)}
\frac{1}{|y|^{2n \over n+1} |y-a|^{1 \over
n+1}}\bigg)+O(\delta^2)\\
&=&O\bigg( \delta^{1\over n+1} \bigg[\int_{B_\rho(0)}
\frac{1}{|y|^{2n+1 \over n+1}} \bigg]^{2n \over 2n+1} \bigg[\int_{B_\rho(0)} \frac{1}{|y-a|^{2n+1 \over
n+1}} \bigg]^{1 \over 2n+1} \bigg)+O(\delta^2)=O(\delta^{1\over n+1}),
\end{eqnarray*}
by (\ref{imp}) we have that
\begin{eqnarray} \label{firsttermbis}
&&\hspace{-0.5cm} \int_{\Omega} (Z_0+1)\left[\Delta W+4\pi N\left({e^{u_0+W} \over \int_{\Omega}
e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] \\
&&\hspace{-0.5cm} =\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}(Z_0+1)
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \nonumber \\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2)\nonumber\\
&&\hspace{-0.5cm} = \int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{16(n+1)^2 \delta^4 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^3}
\left[\frac{e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1 \right] \nonumber \\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2). \nonumber
\end{eqnarray}
The expansion \eqref{keyexp2} still holds in this context, where the notation $\dots$ stands for terms
that give no contribution to the integral term in \eqref{firsttermbis}, in view of the analogue of formula \eqref{symmetryint}:
\begin{eqnarray} \label{symmetryintbis}
\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^3}=0
\end{eqnarray}
for all $m\geq 0$ and integers $k \notin (n+1)\mathbb{N}$. Hence, through the changes of variables $y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$, by the symmetries we have that
\begin{eqnarray}
&&\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{16(n+1)^2 \delta^4 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^3}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}= \int_{B_\rho(0)} \frac{16 (n+1) \delta^4}{(\delta^2+|y-a|^2)^3}
\re[1+2 c_a F_a(y)+|c_a|^2 G_a(y)] \nonumber \\
&&=\int_{B_{\rho}(a)} \frac{16 (n+1) \delta^4}{(\delta^2+|y-a|^2)^3}
\left[1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4} |c_a|^2 \Delta \re G_a(a) |y-a|^2 +O(|y-a|^{\frac{2(n+2)}{n+1}}) \right]\nonumber\\
&&+O(\delta^4)=8 \pi (n+1) \left[1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4}|c_a|^2 \Delta \re G_a(a) \delta^2\right] +O(\delta^{\frac{2(n+2)}{n+1}}) \label{1156}
\end{eqnarray}
in view of \eqref{expF}, \eqref{expG} and
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}=\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^3}dy=\frac{\pi}{2},$$
where $F_a$ and $G_a$ are given by \eqref{FG}. By \eqref{1156} we can re-write \eqref{firsttermbis} as
\begin{eqnarray}
&&\hspace{-0.5cm}\int_{\Omega} (Z_0+1)\left[\Delta W+4\pi N\left({e^{u_0+W} \over \int_{\Omega}
e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] \nonumber \\
&&\hspace{-0.5cm}= 8\pi (n+1) \left[ \frac{1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4}|c_a|^2 \Delta \re G_a(a) \delta^2}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}
-1 \right] +O(\delta^2 |c_a|) \nonumber\\
&&\hspace{-0.5cm}+o(\delta^2)= -16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a +O(\delta^2 |c_a|+|a|^{\frac{1}{n+1}}\delta^2 |\log \delta|)+o(\delta^2)
\label{1046pr}
\end{eqnarray}
in view of $\Delta \re G_a(a)=4|\alpha_a|^2+O(|a|^{\frac{1}{n+1}})$. By (\ref{eps4}) we also deduce that
\begin{eqnarray}
&&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_{\Omega}
e^{2u_0+2W}}{(\int_{\Omega} e^{u_0+W}+\sqrt{(\int_{\Omega} e^{u_0+W})^2-16\pi
N\epsilon^2\int_{\Omega} e^{2u_0+2W}})^2} (Z_0+1)
\left({e^{u_0+W}\over \int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over
\int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&&=\,\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}(Z_0+1)
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2 e^{U_{\delta,a}} \right]\left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right]\nonumber \\
&&+O(\delta^4 \eta)=\frac{128 (n+1)^3\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta} \int_{B_\rho(0)}{\delta^4\over(\delta^2+|y-a|^2)^3} \left[1+O(|c_a||y|+\eta)+o(1) \right] \nonumber \\
&&-128(n+1)^3\epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y|^{2n\over
n+1}}{(\delta^2+|y-a|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right] +O(\delta^4 \eta)\nonumber \\
&& =64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-\frac{2}{n+1}} E_{a,\delta}-128(n+1)^3\epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right]\nonumber \\
&& +o(\eta+\delta^2)+O(\eta^2) \nonumber
\end{eqnarray}
in view of \eqref{1852}. Since
$$\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right] =\int_{\mathbb{R}^2}\frac{|y+\frac{a}{\delta}|^{2n\over n+1}}{(1+|y|^2)^5}+o(1)+O(\eta)$$
when $|a|=O(\delta)$, we then have that
\begin{eqnarray}
&&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_{\Omega}
e^{2u_0+2W}}{(\int_{\Omega} e^{u_0+W}+\sqrt{(\int_{\Omega} e^{u_0+W})^2-16\pi
N\epsilon^2\int_{\Omega} e^{2u_0+2W}})^2} (Z_0+1)
\left({e^{u_0+W}\over \int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over
\int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&& =64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5}+o(\eta+\delta^2)+O(\eta^2) \label{2024}
\end{eqnarray}
in view of \eqref{Eadelta}. Inserting \eqref{1046pr} and \eqref{2024} into \eqref{zerotermbisb}, we get the validity of \eqref{solve1b}.
\begin{rem} \label{remark1}
Notice that in the range $|a|\gg\delta$ we find that
$$\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O\bigg(|y|^{1\over n+1}+\eta\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\bigg)+o(1) \right] =\frac{\pi}{4} \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\bigg[1+o(1)+O\Big(\eta\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\Big)\bigg]$$
in view of the expansion $|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+O(|a|^{\frac{n-1}{n+1}}|y|+|y|^{\frac{2n}{n+1}})$, so that the main order of $\int_{\Omega} R\, PZ_0$ in this range is essentially given by
\begin{eqnarray*}
- 16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a - \frac{32 \pi}{3} (n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}.
\end{eqnarray*}
\end{rem}
\noindent By \eqref{pzij} and \eqref{int|R|} we deduce that
\begin{eqnarray} \label{zerotermbisc}
\int_{\Omega} R \, PZ =\int_{\Omega} R Z+o(\delta |c_a|+\delta |a|+\eta+\delta^2)+O(\eta^2 \delta)
\end{eqnarray}
in view of $\int_{\Omega} R=0$. Since as before
\begin{eqnarray*}
\int_{\Omega} |Z|&=& \int_{\sigma^{-1}(B_\rho(0))} \frac{
\delta|\sigma(z)-a|}{\delta^2+|\sigma(z)-a|^2}+O(\delta)=
O\bigg(\int_{B_\rho(0)} |y|^{-{2n \over n+1}} \frac{\delta |y-a|}{\delta^2+|y-a|^2}\bigg)+O(\delta)\\
&=& O\bigg(\delta^{1\over n+1} \int_{B_\rho(0)}
\frac{1}{|y|^{2n \over n+1} |y-a|^{1 \over
n+1}}\bigg)+O(\delta)=O(\delta^{1\over n+1}),
\end{eqnarray*}
by (\ref{imp}) we have that
\begin{eqnarray}
&&\hspace{-0.5cm} \int_{\Omega} Z\,\left[\Delta W+4\pi N\left({ e^{u_0+W} \over \int_{\Omega}
e^{u_0+W}}-{1\over |{\Omega}|}\right)\right] \label{1011} \\
&&\hspace{-0.5cm} =\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right]\nonumber\\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2) \nonumber\\
&&\hspace{-0.5cm}=\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} \frac{e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}\nonumber \\
&&\hspace{-0.5cm} -\int_{B_\rho(0)} \frac{8(n+1) \delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}+O(\delta^2 |c_a|)+o(\delta^2)\nonumber \\
&&\hspace{-0.5cm} =\frac{\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}+O(\delta^2 |c_a|)+o(\delta^2)\nonumber
\end{eqnarray}
in view of
$$\int_{B_\rho(a)} \frac{8(n+1) \delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}=0.$$
Since expansion \eqref{keyexp2} is still valid in view of \eqref{symmetryintbis}, through the changes of variables $y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$, by the symmetries we have that
\begin{eqnarray} \label{1158}
&&\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} e^{2\re[c_a (q^{-1}(y))^{n+1}]}\nonumber \\
&&=\int_{B_\rho(0)} \frac{8(n+1) \delta^3(y-a)}{(\delta^2+|y-a|^2)^3} \re[1+2 c_a F_a(y)+|c_a|^2 G_a(y)] \nonumber \\
&&=\int_{B_{\rho}(a)} \frac{8 (n+1) \delta^3}{(\delta^2+|y-a|^2)^3}
\left[\overline{c_a F_a'(a)} |y-a|^2+\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a(a) |y-a|^2 +O(|c_a|^2 |y-a|^3) \right]+O(\delta^3)\nonumber\\
&&=4 \pi (n+1) \delta \left[\overline{c_a F_a'(a)} +\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a (a) \right] +O(\delta^2 |c_a|^2+\delta^3)
\end{eqnarray}
in view of \eqref{expF}, \eqref{expG} and $\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^3}dy=\frac{\pi}{2}$, where $F_a$ and $G_a$ are given by \eqref{FG}. By \eqref{1158} we can re-write \eqref{1011} as
\begin{eqnarray}
&&\int_{\Omega} Z \left[\Delta W+4\pi N\left({e^{u_0+W} \over \int_{\Omega}
e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] =4 \pi (n+1) \delta \left[\overline{c_a F_a'(a)} +\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a(a) \right] \nonumber\\
&&+o(\delta |c_a|+\delta^2)=4 \pi (n+1) \delta \overline{\alpha_a c_a} +o(\delta |c_a|+\delta^2) \label{firstterm}
\end{eqnarray}
in view of $F_a'(a)=\alpha_a+O(|a|)$ and $\frac{1}{2} (\partial_1+i\partial_2) \re G_a(a)=O(|a|)$. As for the second term of $R$, by (\ref{eps4}) we have that
\begin{eqnarray}
&&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_{\Omega}
e^{2u_0+2W}}{(\int_{\Omega} e^{u_0+W}+\sqrt{(\int_{\Omega} e^{u_0+W})^2-16\pi
N\epsilon^2\int_{\Omega} e^{2u_0+2W}})^2} Z \left({e^{u_0+W}\over
\int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over
\int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&&=\,\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2 e^{U_{\delta,a}} \right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right] \nonumber\\
&&+O(\delta^3\eta) = \frac{64(n+1)^3 \epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta} \int_{B_\rho(0)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy \left[1+O(|c_a||y|+\eta)+o(1) \right] \nonumber \\
&&-64 (n+1)^3 \epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)} \frac{\delta^5 |y|^{2n \over n+1}(y-a)}{(\delta^2+|y-a|^2)^5}\left[1+O(|y|^{\frac{1}{n+1}}+\eta)+o(1) \right]+O(\delta^3\eta) \nonumber \\
&&=\,-64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5}+o(\eta)+O(\eta^2)
\label{secondterm}
\end{eqnarray}
in view of \eqref{1852} and
$$\int_{B_\rho(0)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy=\int_{B_\rho(a)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy+O(\delta^3)=O(\delta^3).$$
Inserting (\ref{firstterm}) and (\ref{secondterm}) into (\ref{zerotermbisc}), we get the validity of (\ref{solve2b}).\qed
\end{proof}
\begin{rem}\label{remark2} Since for $|a|\gg\delta$ and $n>1$
\begin{eqnarray*}
\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^5 |y|^{2n\over n+1}(y-a)}{(\delta^2+|y-a|^2)^5}=\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^5 |y+a|^{2n\over n+1}y}{(\delta^2+|y|^2)^5}+o(1)= \frac{\pi n}{12(n+1)} \Big(\frac{|a|}{\delta}\Big)^{-\frac{2}{n+1}} \frac{a}{\delta} [1+o(1)]
\end{eqnarray*}
in view of
$$\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^5}=\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^4}-\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^5}= \frac{\pi}{12}$$
and the expansion
$$|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+\frac{n}{n+1} |a|^{-\frac{2}{n+1}}(a\overline{y}+\overline{a}y)+O(|a|^{-\frac{2}{n+1}}|y|^2+|y|^{\frac{2n}{n+1}}),$$
notice that the main order of $\int_{\Omega} R\, PZ$ in this range is essentially given by
$$4 \pi (n+1) \delta \overline{\alpha_a c_a}-\frac{16}{3} \pi n (n+1)^2 \epsilon^2 \delta^{-\frac{2}{n+1}} |\alpha_a|^{-\frac{2}{n+1}} \Big(\frac{|a|}{\delta}\Big)^{-\frac{2}{n+1}} \frac{a}{\delta}.$$
Since $\alpha_a$ is uniformly bounded away from zero, the vanishing of $\int_{\Omega} R\, PZ$, which is equivalent to $\epsilon^2 \delta^{-\frac{2}{n+1}} (\frac{|a|}{\delta})^{\frac{2n}{n+1}} \sim \overline{\alpha_a c_a a}$, is generally not compatible in the range $|a|\gg\delta$ with the vanishing of $\int_{\Omega} R\,PZ_0$, in view of Remark \ref{remark1}, which can take place only if $c_0=0$ (in which case $c_a \sim a$). Indeed, the vanishing of $\int_{\Omega} R\,PZ$ and $\int_{\Omega} R\,PZ_0$ in the range $|a|\gg\delta$ implies the contradiction $|a|^2 \sim \delta^2$. This explains why we do not consider the case $|a|\gg\delta$.
\end{rem}
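For the reader's convenience, the value $\frac{\pi}{12}$ used in Remark \ref{remark2} can be verified in polar coordinates via the substitution $t=1+r^2$:
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}=\pi\int_1^\infty \frac{dt}{t^4}=\frac{\pi}{3},\qquad \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^5}=\pi\int_1^\infty \frac{dt}{t^5}=\frac{\pi}{4},$$
so that their difference equals $\frac{\pi}{3}-\frac{\pi}{4}=\frac{\pi}{12}$.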
\section{Proof of the main results}\label{mainresults}
In the previous section we built an approximating function $W=PU_{\delta,a,\sigma_a}$. We now look for solutions
$w$ of the form $w=W+\phi$, where $\phi$ is a small correction
term. In terms of $\phi$, problem \eqref{3} is equivalent to finding
a doubly-periodic solution $\phi$ of
\begin{equation}\label{ephi}
L(\phi)=-[R+N(\phi)]\qquad\text{in ${\Omega}$}
\end{equation}
with $\int_{\Omega} \phi=0$. Recalling the notation $B(w)=16 \pi N
(\int_{\Omega} e^{2u_0+2w})(\int_{\Omega} e^{u_0+w})^{-2}$, the linear operator $L$
is given by
$$L(\phi) = \Delta \phi + \mathcal{K} \phi+\tilde \gamma(\phi),$$
where
$$\mathcal{K}=4\pi N {e^{u_0+W}\over\int_{\Omega} e^{u_0+W}} +\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2} \left({e^{u_0+W}\over\int_{\Omega} e^{u_0+W}}- 2 \frac{e^{2u_0+2W}}{\int_{\Omega} e^{2u_0+2W}}\right) $$ and
\begin{equation*}
\begin{split}
\tilde \gamma(\phi)&=-4\pi N {e^{u_0+W} \int_{\Omega}
e^{u_0+W} \phi \over(\int_{\Omega} e^{u_0+W})^2 }-\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2} {e^{u_0+W} \over (\int_{\Omega} e^{u_0+W})^2} \int_{\Omega} e^{u_0+W} \phi \\
&+\frac{8 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2
B(W)}\right)^2} \frac{
e^{2u_0+2W}}{(\int_{\Omega} e^{2u_0+2W})^2} \int_{\Omega} e^{2u_0+2W} \phi \\
&+4 \pi N \epsilon^2 \frac{DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2
B(W)})^2\sqrt{1-\epsilon^2 B(W)}} \left(\frac{e^{u_0+W}}{\int_{\Omega}
e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_{\Omega} e^{2u_0+2W}}\right)
\end{split}
\end{equation*}
with
$$DB(W)[\phi]= 2 B(W) \left(
{\int_{\Omega} e^{2u_0+2W} \phi \over \int_{\Omega} e^{2u_0+2W}}- {\int_{\Omega}
e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right).$$ The nonlinear term
$N(\phi)$, which is quadratic in $\phi$, is given by
\begin{eqnarray} \label{nlt}
&&N(\phi)=4\pi N\left[\frac{e^{u_0+W+\phi}}{\int_{\Omega}
e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_{\Omega} e^{u_0+W}}-{e^{u_0+W}\over\int_{\Omega}
e^{u_0+W}}\left(\phi-\frac{\int_{\Omega}
e^{u_0+W}\phi}{\int_{\Omega} e^{u_0+W}}\right)\right]\nonumber\\
&&+\left[\frac{4 \pi N \epsilon^2 B(W+\phi)}{(1+\sqrt{1-\epsilon^2 B(W+\phi)})^2}-\frac{4 \pi N \epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2 B(W)})^2}-\frac{4 \pi N \epsilon^2 DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2 B(W)})^2\sqrt{1-\epsilon^2 B(W)}} \right]\times \nonumber\\
&&\times \left(\frac{e^{u_0+W+\phi}}{\int_{\Omega}
e^{u_0+W+\phi}}-\frac{
e^{2(u_0+W+\phi)}}{\int_{\Omega} e^{2(u_0+W+\phi)}}\right)\nonumber \\
&&+\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2
B(W)}\right)^2}\left[\frac{e^{u_0+W+\phi}}{\int_{\Omega} e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_{\Omega} e^{u_0+W}}-{e^{u_0+W} \over\int_{\Omega}
e^{u_0+W}}\left(\phi-\frac{\int_{\Omega}
e^{u_0+W}\phi}{\int_{\Omega} e^{u_0+W}}\right) \right] \\
&&-\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2
B(W)}\right)^2}\left[\frac{e^{2(u_0+W+\phi)}}{\int_{\Omega}
e^{2(u_0+W+\phi)}}-\frac{e^{2(u_0+W)}}{\int_{\Omega} e^{2(u_0+W)}}-2
\frac{
e^{2(u_0+W)}}{\int_{\Omega} e^{2(u_0+W)}}\left( \phi-\frac{\int_{\Omega} e^{2(u_0+W)} \phi}{\int_{\Omega} e^{2(u_0+W)}} \right)\right] \nonumber\\
&&+\frac{4 \pi N \epsilon^2 DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2
B(W)})^2\sqrt{1-\epsilon^2 B(W)}}
\left(\frac{e^{u_0+W+\phi}}{\int_{\Omega}
e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_{\Omega} e^{u_0+W}}-\frac{
e^{2(u_0+W+\phi)}}{\int_{\Omega} e^{2(u_0+W+\phi)}}+\frac{
e^{2(u_0+W)}}{\int_{\Omega} e^{2(u_0+W)}}\right). \nonumber
\end{eqnarray}
Notice that we can re-write $\tilde \gamma (\phi)$ as
\begin{equation*}
\begin{split}
\tilde \gamma(\phi)&=- \mathcal{K} {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega}
e^{u_0+W}} +\frac{8\pi N \epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2
B(W)})^2 \sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W} }\right)
\left[{ e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}\right.\\
&\left.+(\sqrt{1-\epsilon^2 B(W)}-1){ e^{2(u_0+W)} \over \int_{\Omega} e^{2(u_0+W)}}\right]\\
&=\mathcal{K}\left[- {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega} e^{u_0+W}}
+\frac{\epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2
B(W)})\sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right)
\right],
\end{split}
\end{equation*}
and $L$ as
\begin{equation}\label{ol}
L(\phi) = \Delta \phi + \mathcal{K} \left[ \phi+ \gamma(\phi)\right],
\end{equation}
where
$$\gamma(\phi)=- {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega} e^{u_0+W}}
+\frac{\epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2
B(W)})\sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right).$$
Let us observe that
$$\int_{\Omega} R=\int_{\Omega} L(\phi)=\int_{\Omega} N(\phi)=0.$$
\noindent Since the operator $L$ is not invertible, the equation $L(\phi)=-R-N(\phi)$ is not solvable in general.
The linear theory we will develop in Appendix B states that
$L$ has a kernel which is almost generated by $PZ_0$, $PZ$ and
$\overline{PZ}$, leading to:
\begin{prop} \label{prop4.1}
Let $M_0>0$. There exists $\eta_0>0$ small such that for any $0<\delta\leq \eta_0$,
$|\log \delta|\, \epsilon^2 \leq \eta_0 \delta^{2\over n+1}$, $|a|\leq M_0 \delta$ and $h\in
L^\infty({\Omega})$ with $\int_{\Omega} h=0$ there is a unique solution
$\phi$, $d_0\in{\mathbb{R}}$ and $d \in{\mathbb C}$ to
\begin{equation}\label{plcobis}
\left\{\begin{array}{ll}
L(\phi) =h + d_0 \Delta PZ_{0}+\re[d \Delta PZ] &\text{in }{\Omega}\\
\int_{\Omega} \phi=\int_{\Omega} \phi \Delta PZ_0 = \int_{\Omega} \phi \Delta PZ=0.&
\end{array} \right.
\end{equation}
Moreover, there is a constant $C>0$ such that
$$\|\phi \|_\infty \le C\left(\log \frac 1\delta \right)\|h\|_*,\qquad
|d_{0}|+|d| \le C\|h\|_*.$$
\end{prop}
\noindent As a consequence, in Appendix C we will show:
\begin{prop}\label{nlp}
Let $M_0>0$. There exists $\eta_0>0$ small such that for any
$0<\delta\leq\eta_0$, $|\log \delta|^2 \epsilon^2\leq \eta_0 \delta^{2\over n+1}$
and $|a|\leq M_0 \delta$ there is a unique solution
$\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$ and $d=d(\delta,a)\in{\mathbb C}$ to
\begin{equation}\label{linear}
\left\{\begin{array}{ll}
L(\phi) =-[R+N(\phi)] + d_0 \Delta PZ_{0}+\re[d \Delta PZ] &\text{in }{\Omega}\\
\int_{\Omega} \phi=\int_{\Omega} \phi \Delta PZ_0= \int_{\Omega} \phi \Delta PZ=0.&
\end{array} \right.
\end{equation}
Moreover, the map $(\delta,a)\mapsto \phi(\delta,a)$ is $C^1$ with
\begin{equation}\label{estphi}
\|\phi\|_\infty\le C |\log \delta|\, \|R\|_*.
\end{equation}
\end{prop}
\noindent The function $W+\phi$ will be a true solution of equation (\ref{3}) once we adjust $\delta$ and $a$ so that
$d_0(\delta,a)=d(\delta,a)=0$. The crucial point is the following:
\begin{lem} \label{1039}
Let $\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$ and
$d=d(\delta,a)\in{\mathbb C}$ be the solution of \eqref{linear} given by Proposition \ref{nlp}. There exists $\eta_0>0$ such that if
$0<\delta \leq \eta_0$, $|a| \leq \eta_0$ and
\begin{equation} \label{solve}
\int_{\Omega} (L(\phi)+N(\phi)+R)\, PZ_0=0,\qquad \int_{\Omega}
(L(\phi)+N(\phi)+R)\, PZ=0
\end{equation}
hold, then $W+\phi$ is a solution
of \eqref{3}, i.e. $d_0(\delta,a)=d(\delta,a)=0$.
\end{lem}
\begin{proof} Since by (\ref{pzij}) and $\|Z_0\|_\infty+\|Z\|_\infty\leq 2$
there hold
\begin{eqnarray*}
\int_{\Omega} \Delta PZ_0\, PZ_0&=& \int_{\Omega}\Delta Z_0\, PZ_0=-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z_0(Z_0+1)+O(\delta^2)\\
&=&- 16 (n+1) \delta^4 \int_{B_\rho(0)}
\frac{\delta^2-|y-a|^2}{(\delta^2+|y-a|^2)^4} +O(\delta^2) =- \frac{8 \pi}{3} (n+1) +O(\delta^2)
\end{eqnarray*}
and
\begin{eqnarray*}
\int_{\Omega} \Delta PZ\, PZ_0&=&\int_{\Omega}\Delta Z\, PZ_0=-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z(Z_0+1)+O(\delta^2)\\
&=& -\int_{B_\rho(0)}{16(n+1)\delta^5 (y-a)
\over(\delta^2+|y-a|^2)^4} +O(\delta^2)=-
\int_{B_\rho(0)}{16(n+1)\delta^5 y \over(\delta^2+|y|^2)^4}
+O(\delta^2)=O(\delta^2)
\end{eqnarray*}
in view of \eqref{deltaZ0}-\eqref{deltaZ} and
$$\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4}dy=2 \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}-\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}=\frac{\pi}{6},$$
by (\ref{linear}) we rewrite the first equation in (\ref{solve}) as
\begin{eqnarray*}
0=d_0 \int_{\Omega} \Delta PZ_0\, PZ_0+\int_{\Omega}\re[d \Delta PZ\, PZ_0]=-\frac{8}{3}\pi
(n+1) d_0+O(\delta^2 |d_0|+\delta^2 |d|).
\end{eqnarray*}
Similarly, the second equation in (\ref{solve}) gives
\begin{eqnarray*}
0&=& d_0 \int_{\Omega} \Delta PZ_0 \, PZ+\int_{\Omega} {1\over 2}\left[d \Delta
PZ+\bar d \Delta\overline{PZ}\right]PZ=
-\int_{\sigma^{-1}(B_\rho(0))} {1\over 2} |\sigma'(z)|^2 e^{U_{\delta,a}} \left[d Z +\bar d\, \overline{Z} \right] Z\\
&&+ O(\delta^2 |d_0|+\delta |d|)= -4 (n+1) \bar
d\int_{\mathbb{R}^2} \frac{|y|^2 }{(1+|y|^2)^4}+ O(\delta^2
|d_0|+\delta |d|)
\end{eqnarray*}
in view of $\int_{\Omega} \Delta PZ_0 \, PZ=\int_{\Omega} \Delta PZ \, PZ_0=O(\delta^2)$, \eqref{deltaZ} and \eqref{pzij}. Hence, (\ref{solve}) can simply be re-written as $d_0+O(\delta^2
|d_0|+\delta^2 |d|)=0$ and $d+O(\delta^2 |d_0|+\delta |d|)=0$.
Summing the two relations, we obtain $|d_0|+|d|=O(\delta\,(|d_0|+|d|))$, which implies $d_0=d=0$.\qed
\end{proof}
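\noindent For completeness, the radial integrals entering the proof above can be evaluated via the substitution $t=1+r^2$:
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}=\pi\int_1^\infty\frac{dt}{t^3}=\frac{\pi}{2},\qquad \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}=\pi\int_1^\infty\frac{dt}{t^4}=\frac{\pi}{3},$$
so that $\int_{\mathbb{R}^2}\frac{1-|y|^2}{(1+|y|^2)^4}\,dy=2\cdot\frac{\pi}{3}-\frac{\pi}{2}=\frac{\pi}{6}$ and $\int_{\mathbb{R}^2}\frac{|y|^2}{(1+|y|^2)^4}\,dy=\frac{\pi}{2}-\frac{\pi}{3}=\frac{\pi}{6}$.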
\begin{rem} \label{remark2bis} Since $\phi$ is sufficiently small, the system \eqref{solve} will be a perturbation of the reduced equations $\int_{\Omega} R \, PZ_0=0$, $\int_{\Omega} R \, PZ=0$. The integral coefficient in \eqref{solve1b} is negative for all $\frac{a}{\delta}$, as we will see in Appendix D. Since $\alpha_a \to \alpha_0 =\frac{\mathcal{H}(0)}{n+1}\not=0$ and $c_a \to c_0$ as $a \to 0$, we can always exclude the case $c_0 \not=0$. Indeed, in such a case the equation $\int_{\Omega} R \, PZ_0=0$ forces $\epsilon^2\delta^{-{2\over n+1}} \sim \delta^2 |\log \delta|$ as $\delta \to 0$ by means of \eqref{solve1b} (we are implicitly assuming $\epsilon^2\delta^{-{2\over n+1}} \to 0$, which is a natural range for solving the reduced equations through \eqref{solve1b}-\eqref{solve2b}). This is not compatible with $\int_{\Omega} R \, PZ=0$, which allows at most $\delta=O(\epsilon^2\delta^{-{2\over n+1}})$ by means of \eqref{solve2b}.
\end{rem}
\noindent The last ingredient is an expansion of the system (\ref{solve}) with the aid of Proposition \ref{reducedequations}:
\begin{prop} \label{1219}
Assume $c_0=0$ and $|a|\leq M_0\delta$ for some $M_0>0$. The
following expansions hold as $\delta\to 0$ and $\epsilon\to0$:
\begin{eqnarray}
\int_{\Omega} (L(\phi)+N(\phi)+R)\, PZ_0&=& -8\pi\delta^2 D_0 +64(n+1)^{\frac{3n+5}{n+1}} |\mathcal{H}(0)|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-{2 \over n+1}}
\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \nonumber \\
&&+ o(\delta^2+\epsilon^2\delta^{-{1\over n+1}})+O(\epsilon^4 \delta^{-{2\over n+1}}|\log \delta|^2+\epsilon^8 \delta^{-{4\over n+1}}|\log \delta|^2) \label{solve1}
\end{eqnarray}
and
\begin{eqnarray}
\int_{\Omega} (R+L(\phi)+N(\phi))\, PZ &=& 4 \pi \delta (\bar \Upsilon a+ \bar \Gamma \bar a) -64(n+1)^{\frac{3n+5}{n+1}} |\mathcal{H}(0)|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-{2\over n+1}}
\int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5} \nonumber \\
&&+o(\delta^2+\epsilon^2 \delta^{-\frac{2}{n+1}})+O(\epsilon^4 \delta^{-{2\over n+1}}|\log \delta|^2+\epsilon^8 \delta^{-{4\over n+1}}|\log \delta|^2), \label{solve2}
\end{eqnarray}
where $D_0$ and $\Gamma$, $\Upsilon$ are defined in \eqref{ggg} and
Lemma \ref{derivca}, respectively.
\end{prop}
\begin{proof}
First, note that from the assumptions and \eqref{ere} we find
that $\|R\|_*=O(\delta^{2-\gamma}+\eta +\eta^2)$, where $\eta=\epsilon^2 \delta^{-\frac{2}{n+1}}$. Hence,
since $|\gamma(\phi)|=O((1+\eta)\|\phi\|_\infty)$ in view of \eqref{BW}, by (\ref{estphi}), (\ref{diesis}), (\ref{diesisdiesis})
and (\ref{star}) we have that
\begin{eqnarray} \label{zerotermbis}
\int_{\Omega} (R+L(\phi)+N(\phi))\, PZ_0&=& \int_{\Omega} R\, PZ_0+O\left( (1+\eta) \Big\|\tilde
L\Big(PZ_0+\frac{1}{|{\Omega}|}\int_{\Omega}
Z_0\Big)\Big\|_*\|\phi\|_\infty+\|\phi\|_\infty^2\right)\\
&=& \int_{\Omega} R \,P Z_0+
o(\delta^2+\eta)+O(\eta^2+\eta^4) \nonumber
\end{eqnarray}
and
\begin{eqnarray} \label{zeroterm}
\int_{\Omega} (R+L(\phi)+N(\phi))\, PZ&=&\int_{\Omega} R\,PZ+O\left( (1+\eta) \Big\|\tilde
L\Big(PZ+\frac{1}{|{\Omega}|}\int_{\Omega}
Z\Big)\Big\|_*\|\phi\|_\infty+\|\phi\|_\infty^2\right)\\
&=& \int_{\Omega} R \,PZ+ o(\delta^2+\eta)+O(\eta^2+ \eta^4)\nonumber
\end{eqnarray}
in view of $PZ_0=O(1)$ and $PZ=O(1)$, where $\tilde L(\phi)=\Delta
\phi+\mathcal{K}\phi$. Since by Lemma \ref{derivca} $\mathcal{H}(0)
c_a=\Gamma a+\Upsilon \bar a+o(|a|)$ as $a \to 0$ in view of
$c_0=0$, the desired expansions \eqref{solve1}-\eqref{solve2}
follow by a combination of \eqref{solve1b}-\eqref{solve2b} and
\eqref{zerotermbis}-\eqref{zeroterm}. We have used that $\alpha_a
\to \alpha_0=\frac{\mathcal{H}(0)}{n+1}$ as $a \to 0$ in view of
\eqref{0942}, where $\alpha_a$ is given by (\ref{alpha0}), and $D_a
\to D_0$ as $a \to 0$, where $D_a$ is given by \eqref{Da}.\qed
\end{proof}
\noindent Thanks to \eqref{solve1}-\eqref{solve2}, the aim is to find $(\delta(\epsilon),a(\epsilon))$ so that \eqref{solve} does hold. To simplify the notation, we denote
$$\varphi_0(\delta,a,\epsilon)=\int_\Omega (L(\phi)+N(\phi)+R)
PZ_0 \qquad \varphi(\delta,a,\epsilon)=\overline{ \int_\Omega
(L(\phi)+N(\phi)+R) PZ},$$ and \eqref{solve} reduces to finding a solution of
\begin{equation}\label{solve3}
\varphi_0(\delta(\epsilon),a(\epsilon),\epsilon)=\varphi(\delta(\epsilon),a(\epsilon),\epsilon)=0
\end{equation}
for $\epsilon$ small. We are now ready to prove our first main result, which clearly implies the validity of Theorem \ref{mainbb} with $m=1$.
\begin{thm} \label{main} Let $\mathcal{H}_0=\frac{\mathcal{H}}{z^{n+2}}$, where $\mathcal{H}$ is given in \eqref{definitionH}, be a meromorphic function in $\Omega$ with $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi(n+1)G(z,0)}$ (which exists in view of \eqref{balance} and is unique up to rotations), and $\sigma_0(z)=-(\int^z \mathcal{H}_0 (w) dw)^{-1}$. Assume that
\begin{equation}\label{pc}
\frac{d^{n+1} \mathcal{H}}{dz^{n+1}}(0)=0
\end{equation}
and for some small $\rho>0$
\begin{equation} \label{D0}
D_0:=\frac{1}{\pi}\left[\int_{\Omega \setminus \sigma_0^{-1} (B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)} -
\int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{n+1}{|y|^4}\right]<0.
\end{equation}
If the ``non-degeneracy condition''
\begin{equation} \label{nondegenracy}
|\Gamma| \not= \Big|\Upsilon+{ n(2n+3) \over n+1} D_0\Big|
\end{equation}
does hold, where $\Gamma$ and $\Upsilon$ are given in Lemma \ref{derivca}, then for $\epsilon>0$ small there exist $a(\epsilon)$,
$\delta(\epsilon)>0$ small so that $w_\epsilon=PU_{\delta(\epsilon),a(\epsilon), \sigma_{a(\epsilon)}}+\phi(\delta(\epsilon),a(\epsilon))$ does solve \eqref{3} with
\begin{eqnarray*} &&4\pi N \frac{e^{u_0+w_\epsilon}}{\int_\Omega e^{u_0+w_\epsilon}}+
\frac{64 \pi^2N^2 \epsilon^2 \int_\Omega
e^{2u_0+2w_\epsilon}}{(\int_\Omega e^{u_0+w_\epsilon}+\sqrt{(\int_\Omega
e^{u_0+w_\epsilon})^2-16\pi N\epsilon^2\int_\Omega
e^{2u_0+2w_\epsilon}})^2}\left(\frac{e^{u_0+w_\epsilon}}{\int_\Omega
e^{u_0+w_\epsilon}} -\frac{e^{2u_0+2w_\epsilon}}{\int_\Omega
e^{2u_0+2w_\epsilon}}\right)\\
&&\hspace{2cm} \rightharpoonup 8\pi(n+1) \delta_0
\end{eqnarray*}
in the sense of measures as $\epsilon \to 0$.
\end{thm}
\begin{rem} \label{0923} For simplicity, we consider the case $p=0$ in Theorem \ref{main}; the statement remains true for $p \not=0$ by simply replacing $\mathcal{H}$, $\mathcal{H}_0$ and the corresponding quantities with $\mathcal{H}^p$, $\mathcal{H}_0^p$ and the corresponding quantities at $p$, where the latter have been defined in Remark \ref{1149}.
\end{rem}
\begin{proof} Since the equation $\varphi_0(\delta,a,\epsilon)=0$ naturally requires $\delta^2 \sim \epsilon^2 \delta^{-\frac{2}{n+1}}$ in view of \eqref{solve1}, we make the following change of variables: $\delta=[\frac{(n+1)\epsilon^{n+1} }{|\mathcal{H}(0)|}]^{\frac{1}{n+2}} \mu$ and $\zeta=\frac{a}{\delta}$. The system \eqref{solve3} is equivalent to finding zeroes of
$$\Gamma_\epsilon(\mu,\zeta):=\left[ \frac{(n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|} \right] ^{-\frac{2}{n+2}} \left(- \frac{1}{8} \varphi_0, \frac{1}{4\pi \mu^2 } \varphi\right)
\left(\bigg[\frac{ (n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|}\bigg]^{\frac{1}{n+2}} \mu , \bigg[\frac{ (n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|}\bigg]^{\frac{1}{n+2}} \mu \zeta,\epsilon\right),$$
which has the expansion $\Gamma_\epsilon(\mu,\zeta)=\Gamma_0(\mu, \zeta)+o(1)$ as $\epsilon \to 0^+$, uniformly for $\mu$ in compact subsets of
$(0,+\infty)$, in view of (\ref{solve1})-(\ref{solve2}), where the map $\Gamma_0: \mathbb{R} \times \mathbb{C} \to
\mathbb{R} \times \mathbb{C}$ is defined as
$$\Gamma_0(\mu,\zeta)= \left(\pi D_0 \mu^2-\frac{8 (n+1)^3 }{\mu^{{2\over n+1}}} \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}, \Gamma \zeta+\Upsilon \bar \zeta-{16 (n+1)^3 \over \pi \mu^{{2(n+2)\over n+1}}} \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} \bar y \over(1+|y|^2)^5} \right).$$
We need to exhibit ``stable'' zeroes of $\Gamma_0$ in $(0,+\infty)\times \mathbb{C}$, which persist under $L^\infty-$small perturbations and thus yield zeroes of $\Gamma_\epsilon$ as required. The easiest case is given by the point $(\mu_0,0)$, which solves $\Gamma_0=0$ for $\mu_0=({8 (n+1)^3 I_0 \over \pi D_0})^{n+1\over 2(n+2)}>0$ in view of the assumption \eqref{D0} and (see \eqref{1228})
$$I_0:=\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}<0.$$
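As an aside (not part of the proofs), the sign of $I_0$ is easy to confirm numerically: polar coordinates and the substitution $\rho=r^2$ give $I_0=\pi\int_0^\infty(\rho-1)\rho^{a}(1+\rho)^{-5}\,d\rho$ with $a=\frac{n}{n+1}$, and Beta-function identities reduce this to the closed form $-\frac{2\pi}{n+1}\,\Gamma(a+1)\Gamma(3-a)/\Gamma(5)$. The following sketch (our own verification, under these reductions) checks a simple quadrature against that closed form:

```python
import math

def I0_quad(n, m=100000):
    """Riemann-sum evaluation of I_0 = pi * int_0^inf (rho-1) rho^{n/(n+1)} (1+rho)^{-5} drho,
    the radial reduction of the planar integral (polar coordinates, rho = r^2).
    The substitution rho = t/(1-t) maps (0, inf) onto (0, 1)."""
    a = n / (n + 1)
    s = 0.0
    for k in range(1, m):
        t = k / m
        rho = t / (1 - t)
        # jacobian of rho = t/(1-t) is (1-t)^{-2}
        s += (rho - 1) * rho**a / (1 + rho) ** 5 / (1 - t) ** 2
    return math.pi * s / m

def I0_exact(n):
    """Closed form via Beta functions: I_0 = -2*pi/(n+1) * Gamma(a+1)*Gamma(3-a)/Gamma(5)."""
    a = n / (n + 1)
    return -2 * math.pi / (n + 1) * math.gamma(a + 1) * math.gamma(3 - a) / math.gamma(5)

for n in (1, 2, 3, 5):
    assert I0_exact(n) < 0                          # I_0 < 0 for every n >= 1
    assert abs(I0_quad(n) - I0_exact(n)) < 1e-4     # quadrature matches the closed form
```

In particular $I_0<0$ holds for every $n\geq 1$, since the Beta-function computation carries the explicit negative factor $-\frac{2}{n+1}$.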
Regarding $\Gamma_0$ as a map from $\mathbb{R}^3$
into $\mathbb{R}^3$ and setting $\Gamma=\Gamma_1+i\Gamma_2$, $\Upsilon=\Upsilon_1+i\Upsilon_2$, we have that
$$D \Gamma_0(\mu_0,0)=\left(\begin{array}{ccc} \frac{2(n+2)}{n+1}\pi D_0 \mu_0 & 0 & 0 \\ 0 & \Gamma_1+\Upsilon_1 +{ n(2n+3)\over n+1} D_0 & \Upsilon_2-\Gamma_2 \\ 0 & \Gamma_2+\Upsilon_2 & \Gamma_1-\Upsilon_1 -{ n(2n+3)\over n+1} D_0 \end{array}\right)$$
in view of \eqref{1228} and
$$ \int_{\mathbb{R}^2}\frac{|y|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}dy=\pi \int_0^\infty \frac{\rho^{\frac{n}{n+1}}}{(1+\rho)^5}d\rho=\pi I_5^{\frac{n}{n+1}}.$$
Since
$$\hbox{det }D\Gamma_0(\mu_0,0)=\frac{2(n+2)}{n+1}\pi D_0 \mu_0 \left(|\Gamma |^2-\left|\Upsilon+{ n(2n+3) \over n+1} D_0\right|^2\right) \not= 0$$
in view of assumption (\ref{nondegenracy}), the point $(\mu_0,0)$ is
an isolated zero of $\Gamma_0$ with non-trivial local index. Since
$D\Gamma_0(\mu_0,0)$ is an invertible matrix, there exists $\nu>0$ small so that
$|D\Gamma_0(\mu_0,0)(\mu-\mu_0,\zeta)|\geq \nu
|(\mu-\mu_0,\zeta)|$. By a Taylor expansion of $\Gamma_0$ we can find $r_0>0$ small so that
$$|\Gamma_\epsilon(\mu,\zeta)|=|\Gamma_0(\mu,\zeta)|+o(1) \geq \nu |(\mu-\mu_0,\zeta)|+O\left((\mu-\mu_0)^2+|\zeta|^2\right)+o(1) \geq {\nu \over 2}|(\mu-\mu_0,\zeta)|$$
for all $(\mu,\zeta) \in \partial B_{r}(\mu_0,0)$ and all $r
\leq r_0$, for $\epsilon$ sufficiently small depending on
$r$. Then, the map $\Gamma_\epsilon$ has in $
B_{r_0}(\mu_0,0)$ a well-defined degree for all $\epsilon$ small,
which then coincides with the local index of $\Gamma_0$ at
$(\mu_0,0)$. In this way, the map $\Gamma_\epsilon$ has a zero of
the form $(\mu_\epsilon,\zeta_\epsilon)$ with $\mu_\epsilon \to \mu_0$
and $|\zeta_\epsilon|\to 0$ as $\epsilon \to 0$. Therefore, we have
solved \eqref{solve3} for $\delta(\epsilon)=[\frac{(n+1)\epsilon^{n+1} }{|\mathcal{H}(0)|}]^{\frac{1}{n+2}} \mu_\epsilon$
and $a(\epsilon)=\delta(\epsilon)\zeta_\epsilon$, and the corresponding $w_\epsilon$ does solve (\ref{3}) and satisfies the required concentration property as
stated in Theorem \ref{main}.\qed
\end{proof}
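As a computational aside, the block structure of $D\Gamma_0(\mu_0,0)$ used in the proof above makes its determinant factor as $\frac{2(n+2)}{n+1}\pi D_0\mu_0\,(|\Gamma|^2-|\Upsilon+\frac{n(2n+3)}{n+1}D_0|^2)$; this identity can be sanity-checked numerically, with $t$ standing in for the $(1,1)$ entry and randomly sampled remaining entries:

```python
import random

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(0)
n = 2
c = n * (2 * n + 3) / (n + 1)   # coefficient of D_0 inside the matrix
for _ in range(100):
    G1, G2, U1, U2, D0, t = (random.uniform(-2, 2) for _ in range(6))
    # t plays the role of the (1,1) entry 2(n+2)/(n+1) * pi * D_0 * mu_0
    M = [[t, 0.0, 0.0],
         [0.0, G1 + U1 + c * D0, U2 - G2],
         [0.0, G2 + U2, G1 - U1 - c * D0]]
    # claimed factorization: t * (|Gamma|^2 - |Upsilon + c D_0|^2), D_0 real
    claimed = t * ((G1**2 + G2**2) - ((U1 + c * D0) ** 2 + U2**2))
    assert abs(det3(M) - claimed) < 1e-9
```

The nondegeneracy of the determinant is thus equivalent to the strict inequality between $|\Gamma|$ and $|\Upsilon+\frac{n(2n+3)}{n+1}D_0|$, which is exactly the role of the assumption labelled `nondegenracy` above.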
\begin{rem}
With some extra work, it is rather standard to see that \eqref{solve1} does hold in a $C^1-$sense. For $\zeta$ in a bounded set, by the IFT we can find $\epsilon>0$ small so that the first equation in $\Gamma_\epsilon(\mu,\zeta)=0$ can be solved by $\mu(\epsilon,\zeta)$, depending continuously on $\zeta$, so that
$$\mu(\epsilon,\zeta) \to \mu(\zeta):= \left(\frac{8 (n+1)^3}{\pi D_0} \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}\right)^{\frac{n+1}{2(n+2)}}$$
as $\epsilon \to 0$. In Appendix D it is proved that $ \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}<0$ for all $\zeta \in \mathbb{C}$, yielding $\mu(\zeta)>0$ when $D_0<0$. Plugging $\mu(\epsilon,\zeta)$ into the second equation in $\Gamma_\epsilon(\mu,\zeta)=0$, we are reduced to finding a ``stable'' zero of
$$\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \left(\bar \Upsilon \zeta +\bar \Gamma \bar \zeta\right)-2 D_0 \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} y \over(1+|y|^2)^5}=0.$$
Notice that $\bar \Upsilon \zeta+\bar \Gamma \bar \zeta$ acts in real notation as multiplication by the matrix
$$A=\left(\begin{array}{cc} \re (\Gamma+ \Upsilon)& \im (\Upsilon-\Gamma) \\ -\im (\Gamma+ \Upsilon) & \re (\Upsilon-\Gamma) \end{array}\right).$$
Since by Appendix D we have that
$$\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}=f(|\zeta|),\qquad \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} y \over(1+|y|^2)^5}=g(|\zeta|)\zeta,$$
we can re-write the above equation as $A\zeta=\frac{2D_0 g(|\zeta|)}{f(|\zeta|)}\zeta$. Letting $(\lambda_1,e_1)$ be an eigen-pair of $A$ with $|e_1|=1$, we can find a solution $\zeta_0=|\zeta_0|e_1$ as soon as $|\zeta_0|\not= 0$ does solve $\frac{2D_0 g(|\zeta_0|)}{f(|\zeta_0|)}=\lambda_1$. Since by Appendix D we know that $f<0<g$, we can find solutions $(\mu_\epsilon,\zeta_\epsilon)$ of $\Gamma_\epsilon(\mu,\zeta)=0$ with $\zeta_\epsilon$ bifurcating from $\zeta_0 \not= 0$ as soon as one of the eigenvalues of $A$ is positive and belongs to $\frac{2D_0 g}{f}(0,+\infty)$. In particular, by \eqref{1228}-\eqref{1902} and \eqref{1903}-\eqref{1904} we have that
$$\frac{g(0)}{f(0)}=-\frac{(2n+3)(3n+1)}{4(n+1)},\qquad \frac{g(|\zeta|)}{f(|\zeta|)} \to -\frac{51}{356} \hbox{ as }|\zeta|\to \infty,$$
and the condition above is fulfilled if one of the eigenvalues of $A$ lies in $(\frac{51}{178}|D_0|, \frac{(2n+3)(3n+1)}{2(n+1)}|D_0|)$.
\end{rem}
\section{Examples and comments}\label{examples}
In this section, we will discuss the validity of \eqref{pc}-\eqref{nondegenracy} by providing some examples. Recall that in Theorem \ref{main} we were implicitly assuming that $\{p_1,\dots,p_N\} \subset \Omega$ and denoting for simplicity the concentration point $p$ as $0$. The assumption $\{p_1,\dots,p_N\} \subset \Omega$ simplifies the global construction in $\tilde \Omega$ of $\mathcal{H}$, but \eqref{pc}-\eqref{nondegenracy} just require the local existence of such an $\mathcal{H}$ at $0$, as well as of $\sigma_0$ and $H^*$. In this respect, the only relevant assumption is that the concentration point lies in $\Omega$, and so we will provide examples with $0 \in \{\tilde p_1,\dots,\tilde p_N\} \subset \bar \Omega$. To be more precise, let us explain the general strategy we will adopt below. Since we are in a doubly-periodic setting, the configuration of the vortex points has to be periodic in $\bar \Omega$: for all $j=1,\dots,N$ the points $(\tilde p_j +\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z})\cap \bar \Omega$ belong to $\{\tilde p_1,\dots,\tilde p_N\}$ and all have the same multiplicity. Then, we can find $J \subset \{1,\dots,N\}$ so that the points $\{\tilde p_j:\, j \in J\}$ are all non-zero, distinct modulo $\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}$ and $\left(\{\tilde p_j:\, j \in J\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right) \cap \bar \Omega=\{\tilde p_1,\dots, \tilde p_N\}\setminus \{0\}$. Take now a translation vector $\tau \in \Omega$ so that $\{\tilde p_1+\tau,\dots,\tilde p_N+\tau\}\cap \partial \Omega=\emptyset$, or equivalently $\left(\{\tilde p_1,\dots,\tilde p_N\}+\tau+ \omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)\cap \partial \Omega=\emptyset$.
Then, it follows that $\left(\tilde p_j+\tau+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)\cap \Omega$ consists of a single point $p_j$, for all $j=1,\dots,N$. The idea is to apply Theorem \ref{main}, as formulated in Remark \ref{0923}, to the translated vortex configuration $\{ \tau\} \cup \{p_j:j \in J\}\subset \Omega$ with $\tau$ as concentration point. The validity of \eqref{pc}-\eqref{nondegenracy} in the translated situation will follow from appropriate assumptions on $\{\tilde p_1,\dots,\tilde p_N\}$.
\noindent Before stating our first result, let us introduce the notion of an even vortex configuration: $-\tilde p_j\in \{\tilde p_1,\dots,\tilde p_N\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}$ with the same multiplicity as $\tilde p_j$, for all $j=1,\dots,N$. In the periodic case, notice that $\{\tilde p_j: j \in J\}$ is still an even configuration. The validity of \eqref{pc} is discussed in the following:
\begin{prop} \label{propp1}Assume $n$ is even and the periodic vortex configuration is even with $0 \in \{\tilde p_1,\dots,\tilde p_N\}$. Let $\mathcal{H}^\tau$ be the function corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Remark \ref{1149}. Then, there holds
$$\frac{d^k \mathcal{H}^\tau}{dz^k}(\tau)=0$$
for every odd number $k$.
\end{prop}
\begin{proof} Since $-\Omega=\Omega$ and the periodic vortex configuration $\{\tilde p_1,\dots,\tilde p_N\}$ is even, we have that $G(z)$, $H(z)$ and $e^{-4\pi \sum_{j \in J} n_j G(z,\tilde p_j)}$ are even functions in view of $G(z,p)=G(z-p,0)$. It follows that $e^{4\pi(n+2)H(z-\tau)-4\pi \sum_{j\in J} n_j G(z,\tilde p_j+\tau)}=e^{4\pi(n+2)H(z-\tau)-4\pi \sum_{j\in J} n_j G(z,p_j)}$ takes the same value at $\pm z+\tau$ for all $z \in \Omega$. The function $\mathcal{H}^\tau$ satisfies $|\mathcal{H}^\tau|(z+\tau)=|\mathcal{H}^\tau|(-z+\tau)$ for all $z \in \Omega$, and then $\mathcal{H}^\tau(z+\tau)=\mathcal{H}^\tau(-z+\tau)$ for all $z$ since $\mathcal{H}^\tau$ is a holomorphic function. Differentiating $k$ times at $\tau$ yields $\frac{d^k \mathcal{H}^\tau}{dz^k}(\tau)=0$ when $k$ is odd.\qed
\end{proof}
\noindent The discussion of \eqref{D0} is more interesting and will make use of the Weierstrass elliptic function $\wp$ to represent $D_0$ in the case of an even periodic vortex configuration. Furthermore, in the case where $\Omega$ is a rectangle, the points $p_j$ are half-periods and all the multiplicities are even numbers, we will show, by some ideas in \cite{CLW}, that assumption \eqref{D0} holds if and only if $\frac{n_3}{2}$ is an odd number, where $n_3$ is the multiplicity of the half-period $\frac{\omega_1+\omega_2}{2}$. Due to the presence of
high order derivatives ($2(n+1)$th order) in \eqref{nondegenracy}, we will verify the validity of the ``non-degeneracy'' condition in the simplest case $n=n_3=2$ and $\Omega$ a square torus. As we will see, the validity of \eqref{nondegenracy} is just a computational matter, which could be carried out in full generality for each case of interest.
\noindent We have the following representation formula:
\begin{prop} \label{1014}
Assume that the periodic vortex configuration is even with $0 \in \{\tilde p_1,\dots,\tilde p_N\}$, and that $n_j$ is even when $\tilde p_j \in \{{\omega_1\over 2}, {\omega_2\over 2},{\omega_1+\omega_2\over 2}\}$. Let $D_0^\tau$ be the coefficient corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Theorem \ref{main}. Then, for $\tau$ small, $D_0^\tau$ is given by \eqref{1846} and does not depend on $\tau$.
\end{prop}
\begin{proof} The Weierstrass elliptic function
$$\wp(z)=\frac{1}{z^2}+\sum_{(n,m)\not=(0,0)} \left( \frac{1}{(z+n\omega_1+m\omega_2)^2}-\frac{1}{(n\omega_1+m\omega_2)^2}\right)$$
is a doubly-periodic meromorphic function with a single pole in $\Omega$ at $0$ of multiplicity $2$. Moreover, the only branching points of $\wp$ are simple and given by the three half-periods ${\omega_1\over 2}$, ${\omega_2\over 2}$ and $\frac{\omega_3}{2}={\omega_1+\omega_2\over 2}$, i.e. $\wp'(\frac{\omega_j}{2})=0$ and $\wp''(\frac{\omega_j}{2})\not=0$ for $j=1,2,3$.
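As a numerical aside (not part of the argument, and with the hypothetical square-lattice normalization $\omega_1=1$, $\omega_2=i$), the lattice sum defining $\wp$ can be cross-checked against its standard Laurent expansion $\wp(z)=z^{-2}+3G_4z^2+5G_6z^4+O(z^6)$, where $G_l$ are the Eisenstein series used below; note that $G_6$ vanishes on the square lattice by the symmetry $w\mapsto iw$:

```python
# hypothetical square lattice with periods omega_1 = 1, omega_2 = i, truncated at |n|, |m| <= K
K = 20
lattice = [complex(n, m) for n in range(-K, K + 1)
           for m in range(-K, K + 1) if (n, m) != (0, 0)]

def wp(z):
    """Truncated Weierstrass sum for wp(z); the -1/w^2 regularization makes it absolutely convergent."""
    return 1 / z**2 + sum(1 / (z + w) ** 2 - 1 / w**2 for w in lattice)

G4 = sum(1 / w**4 for w in lattice)   # truncated Eisenstein series G_4
G6 = sum(1 / w**6 for w in lattice)   # truncated Eisenstein series G_6

assert abs(G6) < 1e-9                 # G_6 = 0 on the square lattice (w -> i w maps w^6 to -w^6)

z = 0.05 + 0.02j
laurent = 1 / z**2 + 3 * G4 * z**2 + 5 * G6 * z**4
assert abs(wp(z) - laurent) < 1e-4    # agreement up to the O(z^6) tail
```

With a symmetric truncation the odd tail terms cancel in pairs $w\leftrightarrow -w$, so the truncated sum and the truncated expansion agree to $O(z^6)$ uniformly in $K$.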
For $p\in \bar \Omega \setminus \{ 0\}$, note that $2\pi[2G(z,0)-G(z,p)-G(z,-p)]$ is a doubly-periodic harmonic
function in $\Omega$ with the singular behavior $-2\log|z|$ at $z=0$. Moreover, it behaves like $\log|z-p|$ at $z=p$ and $\log|z+p|$ at $z=-p$ when $p \not=\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}$, and like $2\log|z-p|$ if $p\in \{\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}\}$. Thus, we have that
$$2\pi[2G(z,0)-G(z,p)-G(z,-p)]=\log|\wp(z)-\wp(p)|+\text{const.}$$
whether or not $p$ is a half-period, in view of $\wp(p)=\wp(-p)$, $\wp'(p)=-\wp'(-p)\not=0$ if $p \not=\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}$ and $\wp'(p)=0$, $\wp''(p)\not=0$ if $p\in \{\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}\}$. Since the periodic vortex configuration is even, take $I$ as the minimal subset of $J$ so that $\left(\{\tilde p_k,-\tilde p_k: \, k \in I\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right) \cap \{\tilde p_j:\,j \in J\}=\{\tilde p_j:\,j \in J\}$ and
$$\hat n_k=\left\{ \begin{array}{ll} \frac{n_k}{2} &\hbox{if }\tilde p_k \hbox{ is a half-period}\\
n_k& \hbox{otherwise}. \end{array}\right.$$
Letting $N=n+\sum_{j \in J} n_j$ and $u_0(z)=-4\pi nG(z,0)-4\pi \sum_{j \in J}n_j G(z,\tilde p_j)$, assumption \eqref{balance} implies that
$$u_0+8\pi(n+1)G(z,0)=4\pi\sum_{k \in I} \hat n_k [2G(z,0)- G(z,\tilde p_k)-G(z,-\tilde p_k)],$$
yielding to
\begin{equation*}
e^{u_0+8\pi(n+1)G(z,0)}= \hbox{const.}\: \big| \prod_{k \in I} (\wp(z)-\wp(\tilde p_k))^{\hat n_k} \big|^2.
\end{equation*}
The additional assumption that $n_j$ is even when $\tilde p_j$ is a half-period is crucial to have $(\wp(z)-\wp(\tilde p_j))^{\hat n_j}$ as a single-valued function. The function \begin{equation} \label{explicitH}
\mathcal{H}_0(z)= \lambda_0 \prod_{k \in I} (\wp(z)-\wp(\tilde p_k))^{\hat n_k},\quad \lambda_0=e^{2\pi(n+2)H(0)-2\pi \sum_{j \in J} n_j G(0,\tilde p_j)}
\end{equation}
is an elliptic function with a single pole at $0$ of zero residue, which satisfies
\begin{equation} \label{1328}
|\mathcal{H}_0|^2=e^{u_0+8\pi(n+1)G(z,0)}.
\end{equation}
Then
\begin{equation} \label{1010}
\sigma_0(z)=-\left(\int^z \mathcal{H}_0(w) dw \right)^{-1}
=-\lambda_0^{-1}\left(\int^z \prod_{k \in I} (\wp(w)-\wp(\tilde p_k))^{\hat n_k} dw \right)^{-1}
\end{equation}
is a well-defined meromorphic function in $2\Omega$ which satisfies
\begin{equation} \label{1345}
\Big| \Big( \frac{1}{\sigma_0} \Big)'(z) \Big|^2=|\mathcal{H}_0|^2(z)
= e^{u_0+8\pi(n+1)G(z,0)}.
\end{equation}
Switching now to the translated vortex configuration $\{\tau\}\cup \{p_j:\,j \in J\}$, let us first notice that the total multiplicity is still $N$, and introduce $u_0^\tau=u_0(z-\tau)=-4\pi nG(z,\tau)-4\pi \sum_{j \in J}n_j G(z,p_j)$. We have that $\mathcal{H}_0^\tau(z)=\mathcal{H}_0(z-\tau)$ is a meromorphic function in $\Omega$ with
$$|\mathcal{H}_0^\tau|^2=e^{u_0^\tau+8\pi(n+1)G(z,\tau)}$$
in view of (\ref{1328}). Since such a function $\mathcal{H}_0^\tau$ is unique up to rotations, we can assume that $\mathcal{H}_0^\tau$ coincides with the function $\mathcal{H}_0$ corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Theorem \ref{main}. Setting $\mathcal{H}(z)=z^{n+2}\mathcal{H}_0(z)$, we also have that
\begin{equation} \label{1344}
\mathcal{H}^\tau(z)=\mathcal{H} (z-\tau)
\end{equation}
for all $z \in \Omega$. Letting
$$\sigma_0^\tau(z)=-\left(\int^z \mathcal{H}_0^\tau(w) dw \right)^{-1}$$
with the correct choice of the integration constant in $\int^z$, we easily deduce that
\begin{equation} \label{0935}
\sigma_0^\tau(z)=\sigma_0(z-\tau)
\end{equation}
for all $z \in \Omega$ in view of $(\frac{1}{\sigma_0^\tau})'(z)=(\frac{1}{\sigma_0})'(z-\tau)$. Since $(\sigma_0^\tau)^{-1}(B_\rho(0))-\tau=(\sigma_0)^{-1}(B_\rho(0))$ in view of \eqref{0935}, according to \eqref{D0} let us re-write $D_0^\tau$ as
\begin{eqnarray*}
\pi D_0^\tau&=&\int_{\Omega \setminus (\sigma_0^\tau)^{-1} (B_\rho(0))} e^{u_0^\tau+8\pi(n+1)G(z,\tau)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4}\\
&=&
\int_{(\Omega-\tau) \setminus (\sigma_0)^{-1}(B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4}\\
&=&\int_{\Omega \setminus (\sigma_0)^{-1}(B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4}
\end{eqnarray*}
by the double-periodicity of $e^{u_0+8\pi(n+1)G(z,0)}$, once we assume for $\tau$ small that $(\sigma_0)^{-1}(B_\rho(0)) \subset \Omega \cap (\Omega-\tau)$. By \eqref{1345} and the change of variable $z \to \frac{1}{\sigma_0}(z)$ we get that
\begin{eqnarray}
\pi D_0^\tau&=&\pi D_0= \int_{\Omega \setminus (\sigma_0)^{-1}(B_\rho(0)) } \Big|\left(\frac{1}{\sigma_0}\right)'\Big|^2
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4} \nonumber \\
&=&\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right] - (n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right). \label{1846}
\end{eqnarray}
By the Cauchy argument principle, the number of pre-images in $\Omega \setminus \sigma_0^{-1}(B_\rho(0))$ through the map $\frac{1}{\sigma_0}$ is constant for all values in each connected component of $\mathbb{C} \setminus \left(\frac{1}{\sigma_0}(\partial \Omega) \cup \partial B_{\frac{1}{\rho}}(0) \right)$, and the area of each of these components has to be counted in \eqref{1846} according to the multiplicity of pre-images. \qed
\end{proof}
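The argument-principle counting used at the end of the proof can be illustrated on a toy example (a hypothetical map unrelated to $\sigma_0$): the contour integral $\frac{1}{2\pi i}\oint f'/(f-w)\,dz$ counts the pre-images of $w$, and the count is constant on each component of the complement of the image of the contour:

```python
import cmath

def count_preimages(f, df, w, R=2.0, m=2000):
    """Argument principle: (1/(2*pi*i)) * contour integral of f'(z)/(f(z)-w)
    over |z| = R counts the solutions of f(z) = w inside |z| < R
    (midpoint rule, spectrally accurate for smooth periodic integrands)."""
    dt = 2 * cmath.pi / m
    total = 0j
    for k in range(m):
        z = R * cmath.exp(1j * dt * (k + 0.5))
        total += df(z) / (f(z) - w) * 1j * z * dt
    return total / (2j * cmath.pi)

f = lambda z: z**3 - z            # toy map standing in for 1/sigma_0
df = lambda z: 3 * z**2 - 1
assert abs(count_preimages(f, df, 0.1) - 3) < 1e-6    # all three pre-images of 0.1 lie in |z| < 2
assert abs(count_preimages(f, df, 100.0)) < 1e-6      # a far-away value has no pre-images there
```

This is exactly the mechanism by which each component of $\mathbb{C} \setminus \left(\frac{1}{\sigma_0}(\partial \Omega) \cup \partial B_{\frac{1}{\rho}}(0)\right)$ carries a fixed multiplicity in \eqref{1846}.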
\noindent Thanks to \eqref{1846}, we can now discuss the validity of \eqref{D0}.
\begin{prop} \label{propp2}
Let ${\Omega}$ be a rectangle, and assume that the vortex configuration is the periodic one generated by $\{0,{\omega_1\over
2},{\omega_2 \over 2},{\omega_1+\omega_2 \over 2}\}$ with even multiplicities $n,n_1,n_2,n_3 \geq 0$. Suppose that
\begin{equation}\label{balanceex}
\frac{n_1}{2}+\frac{n_2}{2}+\frac{n_3}{2}=\frac{n}{2}+1.
\end{equation}
Given $D_0^\tau$ as in Proposition \ref{1014}, then $D_0^\tau<0$ $(>0)$ when $\frac{n_3}{2}$ is odd (even).
\end{prop}
\begin{proof}
The balance condition \eqref{balance} is satisfied in view of \eqref{balanceex}. Let $\tilde p_1={\omega_1\over 2}$, $\tilde p_2={\omega_2\over 2}$ and $\tilde p_3={\omega_1+\omega_2\over 2}$ be the three half-periods. When $\Omega$ is a rectangle, the function $\wp$ takes real values on $\partial \Omega$ and $\wp''(\tilde p_j)>0$ for $j=1,2$, $\wp''(\tilde p_3)<0$. As a consequence, we have that
\begin{equation} \label{2026}
\wp(\tilde p_1)-\wp(z),\: \wp(z)-\wp(\tilde p_2),\: \wp(\pm \tilde p_1+it)-\wp(\tilde p_3) ,\: \wp(\tilde p_3)-\wp(\pm \tilde p_2+t) \ge 0
\end{equation}
for all $z\in{\partial}{\Omega}$ and $t\in{\mathbb{R}}$. Write $\sigma_0(z)$ in \eqref{1010} as
$$ \sigma_0(z)=(-1)^{\frac{n+n_2}{2}}\lambda_0^{-1} \left(\int^z (\wp(\tilde p_1)-\wp(w))^{\frac{n_1}{2}}(\wp(w)-\wp(\tilde p_2))^{\frac{n_2}{2}}(\wp(\tilde p_3)-\wp(w))^{\frac{n_3}{2}} dw \right)^{-1}$$
in view of \eqref{balanceex}. Since
$$\frac{d}{dt}\left[ \frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\pm \tilde p_2+t)} \right]= \lambda_0 (\wp(\tilde p_1)-\wp(\pm \tilde p_2+t))^{\frac{n_1}{2}}(\wp(\pm \tilde p_2+t)-\wp(\tilde p_2))^{\frac{n_2}{2}}(\wp(\tilde p_3)-\wp(\pm \tilde p_2+t))^{\frac{n_3}{2}} \geq 0$$
in view of \eqref{2026}, the function $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}$ maps the horizontal sides of $\partial \Omega$ into horizontal segments with the same orientation. In the same way, the vertical sides of $\partial \Omega$ are mapped into vertical segments with same/opposite orientation depending on whether $\frac{n_3}{2}$ is an even/odd number. So, $T:=\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0} (\partial \Omega)$ is still a rectangle with same/opposite orientation and $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\tilde p_3)}$ is the right upper/lower corner of $T$ depending on whether $\frac{n_3}{2}$ is an even/odd number. For $\rho$ small, we then have that $\mathbb{C} \setminus \left(\frac{1}{\sigma_0}(\partial \Omega) \cup \partial B_{\frac{1}{\rho}}(0) \right)$ has three connected components: the interior $\Omega'$ of $(-1)^{\frac{n+n_2}{2}} T$, $B_{\frac{1}{\rho}}(0) \setminus \overline{\Omega'}$ and $\mathbb{C} \setminus \overline{B_{\frac{1}{\rho}}(0)}$. By Lemma \ref{gomme} we have that values in $B_{\frac{1}{\rho}}(0) \setminus \overline{\Omega'}$, $\mathbb{C} \setminus \overline{B_{\frac{1}{\rho}}(0)}$ have exactly $n+1$, $0$ pre-images in $\Omega \setminus \sigma_0^{-1}(B_\rho(0))$ through the map $\frac{1}{\sigma_0}$, respectively. By \eqref{1846} we have that $\pi D_0^\tau=[k - (n+1)] \hbox{Area} (\Omega')$, where $k$ is the
number of pre-images corresponding to values in $\Omega'$.
\noindent Since $\wp(z)-\wp(\tilde p_3)={\wp''(\tilde p_3)\over 2}(z-\tilde p_3)^2+O(|z-\tilde p_3|^3)$ as $z \to \tilde p_3$, we obtain that
$$\left[\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}\right]'(z)= \mu (z-\tilde p_3)^{n_3}+O(|z-\tilde p_3|^{n_3+1})$$
and
$$\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0(z)}-\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0(\tilde p_3)}=\mu {(z-\tilde p_3)^{n_3+1}\over
n_3+1}+O(|z-\tilde p_3|^{n_3+2})$$
as $z \to \tilde p_3$, where $\mu:=\lambda_0 \left(-{\wp''(\tilde p_3)\over
2}\right)^{\frac{n_3}{2}} [\wp(\tilde p_1)-\wp(\tilde p_3)]^{\frac{n_1}{2}}[\wp(\tilde p_3)-\wp(\tilde p_2)]^{\frac{n_2}{2}}>0$. When $\frac{n_3}{2}$ is an odd number, $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\tilde p_3)}$ is the right lower corner of $T$ and the function $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}$ maps $\{z=\tilde p_3+\rho e^{i\theta}\mid
\pi\le\theta\le {3\pi\over 2}, 0\le\rho<\rho_0\}$ onto a
region whose part inside/outside $T$ is covered ${n_3-2\over 4}$/${n_3-2\over 4}+1$ times, respectively, in view of
$$(n_3+1)\pi\le(n_3+1)\theta\le(n_3+1){3\pi\over
2}=(n_3+1)\pi+2\pi {n_3-2\over 4}+\pi+{\pi\over 2}.$$
Hence, near $\tilde p_3$ the map $\frac{1}{\sigma_0}$ covers ${n_3-2\over 4}$/${n_3-2\over 4}+1$ times the interior/exterior part of $\Omega'$ near $\frac{1}{\sigma_0(\tilde p_3)}$. Since $\frac{1}{\sigma_0}$ covers $n+1$ times every value in
$B_{\frac{1}{\rho}}(0)\setminus \overline{\Omega'}$, there should be
$n-{n_3-2\over 4}$ distinct points $x\in \Omega\setminus \sigma_0^{-1}(B_\rho(0))$, away from $\tilde p_1,\tilde p_2, \tilde p_3$, so that $\sigma_0(x)=\sigma_0(\tilde p_3)$. Since
$\sigma_0'(x) \not= 0$ if $x\not= \tilde p_1,\tilde p_2,\tilde p_3$, it follows that around any such $x$
$\frac{1}{\sigma_0}$ is a local homeomorphism, and then $\frac{1}{\sigma_0}$ covers exactly $n$/$n+1$ times the interior/exterior part of $\Omega'$ near $\frac{1}{\sigma_0(\tilde p_3)}$. Hence, it
follows that $k=n$ and $\pi D_0^\tau=-\hbox{Area} (\Omega')<0$. When $\frac{n_3}{2}$ is even, in a similar way we get that $k=n+2$ and $\pi D_0^\tau=\hbox{Area} (\Omega')>0$.\qed
\end{proof}
\noindent Now, to discuss \eqref{nondegenracy} we further restrict our attention to the case $n=n_3=2$ to get
\begin{prop} \label{propp3}
Let ${\Omega}$ be a square of side $a$, $a>0$, and assume that the vortex configuration is the periodic one generated by $\{0,{a\over
2},{ia \over 2},{a+ia \over 2}\}$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa).
Then, for $\tau \in \Omega$ assumption \eqref{nondegenracy} does hold for the vortex configuration $\{\tau\}\cup \{p_j:\, j \in J\}\subset \Omega$.
\end{prop}
\begin{proof}
We restrict our attention to the cases $(n_1,n_2)=(2,0),\, (0,2)$, for they are the only possibilities to have even multiplicities satisfying \eqref{balanceex} for $2,n_1,n_2,2$. Letting $\tilde p_1={a \over 2}$, $\tilde p_2={ia \over 2}$ and $\tilde p_3={a+ia \over 2}$ be the three half-periods, the ``non-degeneracy condition'' reads as
\begin{equation}\label{ceae2}
\bigg| 3 (\mathcal{H}^\tau)''(\tau )f_3'(\tau)+\mathcal{H}^\tau(\tau) f_3'''(\tau) \bigg|\ne
\left|{6\pi\over a^2} \overline{b_{3}} (\mathcal{H}^\tau)''(\tau)-{28\over 3} D_0^\tau \right|
\end{equation}
in view of $(\mathcal{H}^\tau)'(\tau)=(\mathcal{H}^\tau)'''(\tau)=0$ by Proposition \ref{propp1},
where
$$f_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} \left[2\log \frac{w-q_0^\tau(z)}{(q_0^\tau)^{-1}(w)-z}+4\pi
H^*(z-(q_0^\tau)^{-1}(w))\right](0)\:,\quad b_l=\frac{1}{l!}\frac{d^l (q_0^\tau)^{-1}}{dw^l}(0).$$
Since $\sigma_0^\tau(z)=\sigma_0(z-\tau)$ by \eqref{0935}, we deduce that $q_0^\tau(z)=q_0(z-\tau)$ and $(q_0^\tau)^{-1}=\tau +q_0^{-1}$, where $q_0=z [\frac{\sigma_0(z)}
{z^{n+1}}]^{\frac{1}{n+1}}$ is defined out of $\sigma_0$ as in Appendix A. Since $\mathcal{H}^\tau(z)=\mathcal{H}(z-\tau)$ in view of (\ref{1344}), by \eqref{1846} the ``non-degeneracy condition'' \eqref{ceae2} gets re-written in the original variables as:
\begin{equation}\label{ceae21357}
\bigg| 3 \mathcal{H}''(0)f_3'(0)+\lambda_0 f_3'''(0) \bigg|\ne
\left|{6\pi\over a^2} \overline{b_{3}} \mathcal{H}''(0)-{28\over 3} D_0 \right|
\end{equation}
in view of $\mathcal{H}(0)=\lambda_0$ (see \eqref{explicitH}), where
$$f_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} \left[2\log \frac{w-q_0(z)}{q_0^{-1}(w)-z}+4\pi
H^*(z-q_0^{-1}(w))\right](0)\:,\quad b_l=\frac{1}{l!}\frac{d^l q_0^{-1}}{dw^l}(0).$$
Since ${d^k \mathcal{H} \over dz^k}(0)=0$ for all odd $k\in{\mathbb{N}}$, we have that
\begin{eqnarray*}
\frac{z^3}{\sigma_0(z)}=\frac{\lambda_0}{3}+\frac{\mathcal{H}''(0)}{2} z^2-\frac{\mathcal{H}^{(4)}(0)}{24}z^4 -\frac{\mathcal{H}^{(6)}(0)}{2160 }z^6+
O(z^8),
\end{eqnarray*}
and then
$$\sigma_0(z)=\frac{3}{\lambda_0}z^3 -\frac{9\mathcal{H}''(0)}{2\lambda_0^2 }z^5+O(z^7),\quad
q_0(z)=\frac{3^{\frac{1}{3}}}{\lambda_0^{\frac{1}{3}}}z-\frac{3^{\frac{1}{3}}\mathcal{H}''(0)}{2 \lambda_0^{\frac{4}{3}}}z^3+O(z^5),\quad
q_0^{-1}(w)=\frac{\lambda_0^{\frac{1}{3}}}{3^{\frac{1}{3}}}w+\frac{\mathcal{H}''(0)}{6}w^3+O(w^5)$$
as $z,w \to 0$. Direct computation shows that $b_3=\frac{\mathcal{H}''(0)}{6}$
and
\begin{eqnarray*}
f_3(z)&=&-\frac{2}{3\sigma_0(z)}+\frac{2\lambda_0 }{9 z^3}+\frac{2b_3}{z}-\frac{2\pi \lambda_0}{9}(H^*)'''(z)-4\pi b_3 (H^*)'(z)\\
&=&\frac{\mathcal{H}^{(4)}(0)}{36}z +\frac{\mathcal{H}^{(6)}(0)}{3240}z^3-\frac{2\pi \lambda_0}{9}(H^*)'''(z)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)'(z)+O(z^5)
\end{eqnarray*}
as $z \to 0$. Since then
$$f_3'(0)=\frac{\mathcal{H}^{(4)}(0)}{36}-\frac{2\pi \lambdabdambda_0}{9}(H^*)^{(4)}(0)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)''(0),\quad f_3'''(0)=\frac{\mathcal{H}^{(6)}(0)}{540} -\frac{2\pi \lambdabdambda_0}{9}(H^*)^{(6)}(0)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)^{(4)}(0),$$
condition \eqref{ceae21357} is equivalent to
\begin{eqnarray*}
&&\bigg| \frac{\mathcal{H}''(0)\mathcal{H}^{(4)}(0)}{12}+\frac{\lambda_0 \mathcal{H}^{(6)}(0)}{540}
-2\pi (\mathcal{H}''(0))^2 (H^*)''(0)-\frac{4 \pi \lambda_0}{3} \mathcal{H}''(0) (H^*)^{(4)}(0)-\frac{2\pi \lambda_0^2}{9}(H^*)^{(6)}(0)\bigg|\\
&&\ne
\left|{\pi\over a^2} |\mathcal{H}''(0)|^2-{28\over 3} D_0\right|.
\end{eqnarray*}
By the explicit expression \eqref{explicitH} of $\mathcal{H}_0$ we have that
$$\mathcal{H}(z)= \lambda_0 z^4 (\wp(z)-\wp(\tilde p_1)) (\wp(z)-\wp(\tilde p_3)).$$
Replacing $\mathcal{H}$ with $\frac{\mathcal{H}}{\lambda_0}$, we can assume $\lambda_0=1$ and simply study the stronger condition
\begin{eqnarray}\label{yyy}
\hspace{-0.2cm} \bigg| \frac{\mathcal{H}''(0)\mathcal{H}^{(4)}(0)}{4}+\frac{\mathcal{H}^{(6)}(0)}{180}
-6\pi (\mathcal{H}''(0))^2 (H^*)''(0)- 4 \pi \mathcal{H}''(0) (H^*)^{(4)}(0)-\frac{2\pi}{3}(H^*)^{(6)}(0)\bigg|< {3 \pi\over a^2} |\mathcal{H}''(0)|^2
\end{eqnarray}
in view of Proposition \ref{1014} and \eqref{1846}. Letting $G_l=\displaystyle \sum_{(n,m) \not= (0,0)}{1\over (n\omega_1+m\omega_2)^l}$, $l\geq 3$, denote the Eisenstein series, the Laurent expansion of $\wp$ near $0$ re-writes as
$$\wp(z)={1\over
z^2}+\sum_{l=1}^\infty(2l+1)G_{2l+2}z^{2l},$$
and then
\begin{eqnarray*}
\mathcal{H}(z)=1-(\wp(\tilde p_1)+\wp(\tilde p_3))z^2+\left(\wp(\tilde p_1)\wp(\tilde p_3)+6 G_4 \right) z^4
+\left(10 G_6 -3G_4 \wp(\tilde p_1)-3G_4 \wp(\tilde p_3)\right) z^6+O(z^8)
\end{eqnarray*}
as $z \to 0$. Letting $e_j=\wp(\tilde p_j)$ for $j=1,2,3$, recall that
\begin{equation} \label{propej}
e_2<e_3\le 0<e_1,\quad e_1+e_2+e_3=0,\quad 15 G_4=-(e_1e_2+e_1e_3+e_2e_3),\quad 35 G_6=e_1 e_2 e_3,
\end{equation}
with $e_3=0$ if and only if $\Omega$ is a square (see \cite{AbSte}). By the expansion of $\mathcal{H}$ and \eqref{propej}, we deduce that
$$\mathcal{H}''(0)=2 e_2,\:\mathcal{H}^{(4)}(0)=24(e_1 e_3 +6 G_4),\:\mathcal{H}^{(6)}(0)= 720(10 G_6 +3G_4 e_2),$$
and condition \eqref{yyy} re-writes as
\begin{eqnarray} \label{ceae3}
\bigg| 460 G_6 +84 G_4e_2-24\pi e_2^2 (H^*)''(0)-8\pi e_2 (H^*)^{(4)}(0) -\frac{2\pi}{3} (H^*)^{(6)}(0)\bigg|<
{12\pi\over a^2} e_2^2
\end{eqnarray}
in view of \eqref{propej}.
\noindent From an explicit formula for the Green's function (see \cite{ChO}) we have that
\begin{equation*}
\begin{split}
H(z)-{|z|^2\over 4|\Omega|}
=\re\left(-{z^2\over 4 a^2}+{iz\over 2a}+{1\over 12}\right)-\frac{1}{2\pi}\log\left|{1-e\left(\frac{z}{a}\right)\over z}
\times\prod_{k=1}^\infty\left(1-e\left(\frac{kai+z}{a}\right)\right)\left(1-e\left(\frac{kai-z}{a}\right)\right)\right|,
\end{split}
\end{equation*}
where $e(z)=e^{2\pi iz}$, yielding
\begin{equation*}
H^*(z)= -{z^2\over 4 a^2}+{iz\over 2a}+{1\over 12}-\frac{1}{2\pi}
\log \left[\left({1-e\left(\frac{z}{a}\right)\over z}\right)
\times\prod_{k=1}^\infty\left(1-e\left(\frac{kai+z}{a}\right)\right)\left(1-e\left(\frac{kai-z}{a}\right)\right) \right].
\end{equation*}
Direct, but tedious, computations show that
\begin{eqnarray*}
&&(H^*)''(0)=-{1 \over 2 a^2}+{\pi\over 6a^2}-{4\pi \over a^2}\sum_{k=1}^\infty \lambda_k(\lambda_k+1),\quad (H^*)^{(4)}(0)={\pi^3\over 15a^4}+{16\pi^3\over a^4}\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1),\\
&& (H^*)^{(6)}(0)={8\pi^5\over 63a^6}-{64\pi^5\over a^6}\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(120\lambda_k^4+240\lambda_k^3+150\lambda_k^2+30\lambda_k+1),
\end{eqnarray*}
where $\lambda_k:={1\over e^{2\pi k}-1}$. On a square torus the Green function $G(z,0)$ has an additional symmetry, the invariance under $\frac{\pi}{2}$-rotations. Therefore $H^*(iz)=H^*(z)$ for all $z\in\Omega$, and then $(H^*)''(0)=(H^*)^{(6)}(0)=0$. Since $e_3=G_6=0$, condition \eqref{ceae3} becomes
\begin{eqnarray} \label{ceae4}
\bigg| \frac{28}{5}e_1^2 -8\pi (H^*)^{(4)}(0) \bigg|< {12\pi\over a^2} e_1
\end{eqnarray}
in view of \eqref{propej} and $e_1=-e_2>0$. From the study of the Weierstrass function $\wp$ it is known (see \cite{Ap}) that
\begin{equation*}
\sum_{(n,m)\ne (0,0)} {1\over (n+m\tau)^4}={\pi^4\over 45}+{16\pi^4\over 3}\sum_{m,k=1}^\infty k^3e^{2\pi i km\tau}
\end{equation*}
for $\tau\in{\mathbb C}$ with $\im \tau >0$. The choice $\tau=i$ yields
$$15 a^4 G_4=a^4 e_1^2={\pi^4\over 3}+80 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km}$$
in view of \eqref{propej}, which turns \eqref{ceae4} into
\begin{eqnarray} \label{ceae5}
\hspace{-0.3cm}\bigg| {\pi^4 \over 3}+112 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km} -32\pi^4 \sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1) \bigg| <
3\pi \sqrt{{\pi^4\over 3}+80 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km}}.
\end{eqnarray}
Since numerically we can approximately compute
$$32 \pi^4\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1)\approx 5.9194,
\qquad 80 \pi^4 \sum_{m,k=1}^\infty k^3e^{-2\pi km} \approx 14.7985,$$
we get the validity of \eqref{ceae5}, or equivalently of \eqref{nondegenracy}, for the vortex configuration $\{\tau\}\cup \{p_j:\, j \in J\}\subset \Omega$. \qed
\end{proof}
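\noindent Both series above converge extremely fast (the terms decay like $e^{-2\pi k}$), so a handful of terms suffices. As an illustrative numerical sanity check (not part of the proof), the following short Python sketch reproduces the two quoted values and the strict inequality \eqref{ceae5}:

```python
import math

# lambda_k = 1/(e^{2 pi k} - 1): terms decay like e^{-2 pi k}, so 20 terms are plenty
lam = [1.0 / (math.exp(2 * math.pi * k) - 1.0) for k in range(1, 21)]
pi4 = math.pi ** 4

# S1 = 32 pi^4 sum_k lam_k (lam_k + 1) (6 lam_k^2 + 6 lam_k + 1), quoted as ~ 5.9194
S1 = 32 * pi4 * sum(l * (l + 1) * (6 * l * l + 6 * l + 1) for l in lam)

# sum_{m,k>=1} k^3 e^{-2 pi k m} = sum_k k^3 lam_k (geometric series in m);
# S2 = 80 pi^4 times that sum, quoted as ~ 14.7985
S2 = 80 * pi4 * sum(k ** 3 * lam[k - 1] for k in range(1, 21))

lhs = abs(pi4 / 3 + (112.0 / 80.0) * S2 - S1)  # left-hand side of (ceae5)
rhs = 3 * math.pi * math.sqrt(pi4 / 3 + S2)    # right-hand side of (ceae5)
print(S1, S2, lhs < rhs)
```

The strict inequality holds with a wide margin, which is consistent with the conclusion of the proof.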
\noindent As a combination of Propositions \ref{propp1}, \ref{propp2} and \ref{propp3} we finally get that
\begin{thm} \label{thmexample}
Let $\Omega$ be a square of side $a$, $a>0$, and assume that the vortex configuration is the periodic one generated by $\{0,{a\over 2},{ia \over 2},{a+ia \over 2}\}$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa).
Then, for $\tau$ small the assumptions of Theorem \ref{main} do hold for the slightly translated vortex configuration $\{-\tau(1+i), -\tau(1+i)+\frac{a}{2}, -\tau(1+i)+\frac{ia}{2}, -\tau(1+i)+\frac{a+ia}{2} \}$. In particular, for $\epsilon>0$ small we can find an $N$-condensate $(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $|\phi_\epsilon| \to 0$ in $C(\bar \Omega)$ and
\begin{equation} \label{magconc}
(F_{12})_\epsilon \rightharpoonup 12\pi \delta_0
\end{equation}
weakly in the sense of measures, as $\epsilon \to 0$, where $\{0,{a\over 2},{ia \over 2},{a+ia \over 2}\}$ are the zeroes of $\phi_\epsilon$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa).
\end{thm}
\noindent As a final remark, observe that for $n=0$ Theorem \ref{main} essentially recovers the result in
\cite{LinYan1} concerning single-point concentration in any torus
$\Omega$ (see also \cite{EsFi}). Notice that $n=0$ corresponds to the case where the concentration point $0$ is not really a singular point, so that a simpler approach is possible, as in the above-mentioned papers. By \eqref{balance} the total multiplicity $N$
is $2$, produced by two vortex-points $p_1,p_2\in \Omega \setminus \{0 \}$. Assumption
\eqref{pc} is equivalent to $(\log \mathcal{H})'(0)=0$. By
the Cauchy--Riemann equations, the latter condition re-writes as
$$ \nabla [2 \re \log \mathcal{H}](0)=\nabla \log |\mathcal{H}|^2(0)=\nabla [8\pi H+u_0](0)=0.$$
Since $\nabla H(0)=0$ in view of $H(z)=H(-z)$, we have that \eqref{pc} simply reads as: $0$ is a critical point of $u_0$. As for \eqref{D0}, notice that $D_0$ does not depend on $\rho>0$ small, for
\begin{eqnarray*}
\int_{\sigma_0^{-1} (B_\rho(0)) \setminus \sigma_0^{-1} (B_r(0))}
e^{u_0+8\pi G(z,0)} -
\int_{B_\rho(0) \setminus B_r(0)}
\frac{dy}{|y|^4}=
\mathrm{Area}\left( B_{\frac{1}{r}}(0) \setminus B_{\frac{1}{\rho}}(0) \right)-\pi \Big(\frac{1}{r^2}-\frac{1}{\rho^2}\Big)=0
\end{eqnarray*}
for all $0<r\leq \rho$, in view of \eqref{eq sigma0} with $c_0=0$. Therefore, $D_0$ re-writes as
\begin{eqnarray*}
D_0 = \frac{1}{\pi}\left[\int_{\Omega \setminus \sigma_0^{-1} (B_\rho(0))}
e^{u_0+8\pi G(z,0)} -
\int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{dy}{|y|^4}\right]=\frac{1}{\pi}\lim_{r\to0} \bigg[\int_{\Omega \setminus \sigma_0^{-1}(B_r(0))}
{e^{8\pi H(z,0)+u_0} \over |z|^4} -
\int_{\mathbb{R}^2 \setminus B_{r}(0)} \frac{1}{|y|^4}\bigg].
\end{eqnarray*}
Since $\sigma_0(z)=\frac{z}{\lambda_0}+\frac{\mathcal{H}''(0)}{2\lambda_0^2}z^3+O(|z|^5)$ and $\sigma_0^{-1}(z)=\lambda_0z+O(|z|^3)$ with $\lambda_0=e^{4\pi H(0)-\frac{u_0(0)}{2}}$, notice that $B_{\lambda_0r-Cr^3}(0)\subset \sigma_0^{-1}(B_r(0)) \subset B_{\lambda_0 r+Cr^3}(0)$ for all $r>0$ small, for some constant $C>0$. Thus, there holds
\begin{eqnarray*}
&&\bigg|\int_{\Omega \setminus \sigma_0^{-1}(B_r(0))}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]}-\int_{\Omega \setminus B_{\lambda_0 r}(0)}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]}\bigg|\\
&&=O\left( \int_{ B_{\lambda_0 r+Cr^3}(0) \setminus B_{\lambda_0 r-Cr^3}(0)} \frac{1}{|z|^2}\right) =o(1)
\end{eqnarray*}
as $r \to 0$ in view of $\nabla [8\pi H+u_0](0)=0$, yielding the same expression for $D_0$ as in \cite{EsFi,LinYan1}:
$$D_0=\frac{\lambda_0^2}{\pi}\lim_{r\to0} \bigg[\int_{\Omega \setminus B_r(0)}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]} -
\int_{\mathbb{R}^2 \setminus B_{r}(0)} \frac{1}{|y|^4}\bigg].$$
The ``non-degeneracy condition" \eqref{nondegenracy} reads as
$$\left| \frac{\mathcal{H}''(0)}{\mathcal{H}(0)}-4\pi (H^*)''(0)\right|=\left| (\log \mathcal{H})''(0)-4\pi (H^*)''(0)\right|\ne {2\pi\over |\Omega|},$$
in view of $\sigma_0=q_0$, $b_1=\lambda_0$, $f_1(z)=-4\pi \lambda_0 (H^*)'(z)+\frac{2 \lambda_0}{z}-\frac{2}{\sigma_0(z)}$ and $\mathcal{H}'(0)=0$. Setting
$\mathcal{H}_1(z)=e^{-4\pi H^*(z)} \mathcal{H}(z)$, we have that $|\mathcal{H}_1(z)|^2=e^{u_0+\frac{2\pi}{|\Omega|}|z|^2}$
and
\begin{eqnarray*}(\log \mathcal{H})''(0)-4\pi (H^*)''(0)&=&(\log \mathcal{H}_1)''(0)=2 (\re \log \mathcal{H}_1)''(0)= (\log |\mathcal{H}_1|^2 )''(0)=\Big(u_0+\frac{2 \pi}{|\Omega|} |z|^2\Big)''(0)\\
&=& \frac{1}{4}[(u_0)_{xx}(0)-(u_0)_{yy}(0)-2i (u_0)_{xy}(0)]
\end{eqnarray*}
in view of \eqref{definitionH}-\eqref{keyrelationH}, and the above condition turns into
\begin{eqnarray*} 0 &\not=&\frac{1}{16}\left| (u_0)_{xx}(0)-(u_0)_{yy}(0)-2i (u_0)_{xy}(0) \right|^2 -{4\pi^2\over |\Omega|^2}=
\frac{1}{16} \left((u_0)_{xx}(0)-(u_0)_{yy}(0)\right)^2+\frac{1}{4}(u_0)^2_{xy}(0) -{4\pi^2\over|\Omega|^2}\\
&=&\frac{1}{16}(\Delta u_0)^2 (0)-\frac{1}{4}\det D^2 u_0(0)-{4\pi^2\over |\Omega|^2}=-\frac{1}{4}\det D^2 u_0(0).
\end{eqnarray*}
In conclusion, when $n=0$ the assumptions in Theorem \ref{main} are equivalent to requiring that $0$ be a non-degenerate critical point of $u_0(z)=-4\pi G(z,p_1)-4\pi G(z,p_2)$ with $D_0<0$.
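\noindent The last chain of identities rests on the elementary algebra $\frac{1}{16}(a-b)^2+\frac{1}{4}c^2=\frac{1}{16}(a+b)^2-\frac{1}{4}(ab-c^2)$ applied to $a=(u_0)_{xx}(0)$, $b=(u_0)_{yy}(0)$, $c=(u_0)_{xy}(0)$, together with $\Delta u_0(0)={8\pi\over |\Omega|}$ away from the vortex points. A quick numerical check of the algebraic identity (purely illustrative) is:

```python
import random

random.seed(0)
for _ in range(1000):
    # a, b, c play the roles of (u_0)_xx(0), (u_0)_yy(0), (u_0)_xy(0)
    a, b, c = (random.uniform(-10.0, 10.0) for _ in range(3))
    lhs = (a - b) ** 2 / 16 + c ** 2 / 4            # |(a - b) - 2ic|^2 / 16
    rhs = (a + b) ** 2 / 16 - (a * b - c * c) / 4   # (trace)^2/16 - det/4
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```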
\section{A more general result}\label{general}
In this section we deal with the case $m\geq 2$ in Theorem \ref{mainbb}. For clarity, let us denote the concentration points by $\xi_l$, $l=1,\dots,m$, the remaining points in the vortex set by $p_j$, and by $n_l,n_j$ the corresponding multiplicities.
\noindent From Section $2$ recall that $H(z)= G(z,0)+\frac{1}{2\pi} \log|z|$ is a smooth function in $2\Omega$ with $\Delta H=\frac{1}{|\Omega|}$, and $H^*$ is a holomorphic function in $2 \Omega$ with $\re H^*=H-\frac{|z|^2}{4|\Omega|}$. Up to a translation, we are assuming that $p_j \in \Omega$ for all $j=1,\dots,N$, and taking $\tilde \Omega$ close to $\Omega$ so that $\tilde \Omega -p_j \subset 2 \Omega$ for all $j=1,\dots,N$. Arguing as for \eqref{definitionH}, the function
\begin{eqnarray*}
&& \mathcal{H}(z)= \prod_j (z-p_j)^{n_j} \exp\left( 4\pi \sum_{l=1}^m (n_l+1) H^*(z-\xi_l) -2\pi\sum_{j=1}^N H^*(z-p_j)\right.\\
&&\left. +\frac{\pi}{|\Omega|} \sum_{l=1}^m (n_l+1)(\xi_l-2z) \overline{\xi_l} -\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j} \right)
\end{eqnarray*}
is holomorphic in $\tilde \Omega$ and satisfies
$$|\mathcal{H}(z)|^2=\left(\prod_{l=1}^m |z-\xi_l|^{-2n_l}\right) \exp\left(u_0+8\pi \sum_{l=1}^m (n_l+1) H(z-\xi_l)\right)$$
in view of \eqref{hhh}. For $l=1,\dots,m$ the function
$$\mathcal{H}^l(z)= \mathcal{H}(z) \prod_{l' \not= l} (z-\xi_{l'})^{-(n_{l'}+2)}$$
is holomorphic near $\xi_l$ and satisfies
\begin{eqnarray}
|\mathcal{H}^l(z)|^2=\exp\left(4\pi (n_l+2)H(z-\xi_l)+4\pi \sum_{l'\not=l} (n_{l'}+2) G(z, \xi_{l'})-4\pi \sum_j n_j G(z,p_j) \right). \label{1849}
\end{eqnarray}
\noindent To be clearer, let us spend a few words comparing the cases $m=1$ and $m\geq 2$. When $m=1$ notice that $\mathcal{H}$ satisfies $|\mathcal{H}|^2=e^{u_0+8\pi(n+1)H(z)-2n \log |z|}$ in view of \eqref{keyrelationH}. The function $e^{u_0+8\pi(n+1)H(z)-2n \log |z|}$ is a sort of effective potential for \eqref{3} at $0$, where $e^{u_0-2n \log |z|}$ is the non-vanishing part of $e^{u_0}$ and $e^{8\pi(n+1) H(z)}$ is the self-interaction of the concentration point $0$ driven by $PU_{\delta,0,\sigma_0}$ through \eqref{1138}. When $m\geq 2$, \eqref{1849} re-writes as
$$|\mathcal{H}^l(z)|^2=\exp \left(u_0+8\pi(n_l+1)H(z-\xi_l)+8\pi \sum_{l'\not=l} (n_{l'}+1) G(z, \xi_{l'})-2n_l \log |z-\xi_l|\right)$$
for $l=1,\dots,m$, yielding an effective potential for \eqref{3} at $\xi_l$ which exhibits an additional interaction term $e^{8\pi \sum_{l'\not=l} (n_{l'}+1) G(z, \xi_{l'})}$ generated by the effect of the concentration points $\xi_{l'}$, $l'\not=l$, through \eqref{1139}.
\noindent Setting $\mathcal{H}_{0}=\frac{\mathcal{H}}{(z-\xi_1)^{n_1+2}\dots (z-\xi_m)^{n_m+2}}$, we now define $\sigma_0$ as
\begin{equation} \label{1143}
\sigma_0(z)=-\left( \int^z \mathcal{H}_{0}(w) \exp\left[-\sum_{l=1}^m c_0^l (w-\xi_l)^{n_l+1} \prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}\right] dw \right)^{-1},
\end{equation}
where the choices
$$c_0^l=\frac{1}{\mathcal{H}_0(\xi_l) (n_l+1)!} \frac{d^{n_l+1} \mathcal{H}^l }{dz^{n_l+1}}(\xi_l),\quad l=1,\dots,m,$$
guarantee that all the residues of the integrand in the definition of $\sigma_0$ vanish. The presence of the term $\prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}$ is crucial in order to compute the $c_0^l$'s explicitly, since
$$c_0^l (w-\xi_l)^{n_l+1} \prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}=O((w-\xi_{l'})^{n_{l'}+2})$$
has a high-order effect near any other $\xi_{l'}$, $l' \not= l$. By construction $\sigma_0 \in \mathcal{M}(\overline{\Omega})$ vanishes only at the $\xi_l$'s with multiplicity $n_l+1$, with
$$\lim_{z \to \xi_l} \frac{(z-\xi_l)^{n_l+1}}{\sigma_0(z)}=\frac{\mathcal{H}^l(\xi_l)}{n_l+1},$$
and satisfies
$$|\sigma_0'(z)|^2= |\sigma_0(z)|^4 \exp\left(u_0+8\pi\sum_{l=1}^m (n_l+1)G(z,\xi_l) -2 \sum_{l=1}^m \re \bigg[c_0^l (z-\xi_l)^{n_l+1} \prod_{l'\not=l}(z-\xi_{l'})^{n_{l'}+2}\bigg] \right).$$
Under the assumptions of Theorem \ref{mainbb}, notice that $c_0^l=0$ for all $l=1,\dots,m$ and
$$\left|\Big({1\over\sigma_0}\Big)'(z)\right|^2=|\mathcal{H}_0(z)|^2=e^{u_0+8\pi \sum_{l=1}^m (n_l+1) G(z,\xi_l)}.$$
\noindent Since each $\xi_l$ gives a contribution to the dimension of the kernel of the linearized operator \eqref{ol}, the parameters $\delta$ and $a$ are no longer enough to recover all the degeneracies induced by the ansatz $PU_{\delta,a,\sigma}$, for $\sigma \in \mathcal{M}(\overline{\Omega})$ a function which vanishes only at the points $\xi_l$, $l=1,\dots,m$, with multiplicity $n_l+1$. In our construction, the correct number of parameters to use is $2m+1$, given by $m$ small complex numbers $a_1,\dots,a_m$ and $\delta>0$ small, where the latter gives rise to the concentration parameter $\delta_l$ at $\xi_l$, $l=1,\dots,m$, by means of \eqref{repla2}. The requirement that all the $\delta_l$'s tend to zero at the same rate is necessary, as we will discuss later.
\noindent We need to construct an ansatz that looks like $PU_{\delta_l,a_l,\sigma_{a,l}}$ near each $\xi_l$, for a suitable $\sigma_{a,l}$ which makes the approximation near $\xi_l$ good enough. In order to localize our previous construction, let us define $PU_{\delta_l,a_l,\sigma}$ as the solution of
$$\left\{ \begin{array}{ll} -\Delta PU_{\delta_l,a_l,\sigma} =
\chi(|z-\xi_l|) |\sigma'(z)|^2 e^{U_{\delta_l,a_l,\sigma}} -\frac{1}{|\Omega|} \int_\Omega \chi(|z-\xi_l|) |\sigma'(z)|^2 e^{U_{\delta_l,a_l,\sigma}}& \hbox{in }\Omega\\
\int_\Omega PU_{\delta_l,a_l,\sigma}=0,&
\end{array} \right.$$
where $\chi$ is a smooth radial cut-off function so that $\chi=1$ in $[-\eta,\eta]$, $\chi=0$ in $(-\infty,-2\eta]\cup [2\eta,+\infty)$, and $0<\eta<\frac{1}{2} \min\{|\xi_l-\xi_{l'}|, \mathrm{dist}(\xi_l,\partial \Omega): l,l'=1,\dots,m,\, l\not= l' \}$. The approximating function is then built as $W=\displaystyle \sum_{l=1}^m PU_l$, where $U_{\delta_l,a_l,\sigma_{a,l}}$ and $PU_{\delta_l,a_l,\sigma_{a,l}}$ will simply be denoted by $U_l$ and $PU_l$.
\noindent Let us now explain how to find the functions $\sigma_{a,l}$, $l=1,\dots,m$. Setting
$$\mathcal{B}_r^l=\bigg\{ \sigma \hbox{ holomorphic in }B_{2\eta}(\xi_l):\:\Big\| \frac{\sigma}{\sigma_0}-1\Big\|_{\infty,B_{2\eta}(\xi_l)} \leq r \bigg\}$$
for $l=1,\dots,m$, Lemma \ref{gomme} still holds in this context for all $\sigma \in \mathcal{B}_r^l$, by simply replacing $0$, $n$ with $\xi_l$, $n_l$ and $\tilde \Omega$ with $B_{2\eta}(\xi_l)$.
Then, for all $\sigma=(\sigma_1,\dots,\sigma_m) \in \mathcal{B}_r:=\mathcal{B}_r^1 \times \dots \times \mathcal{B}_r^m$ and $a=(a_1,\dots,a_m) \in \mathbb{C}^m$ with $\|a\|_\infty <\rho$ there exist points $a_i^l$, $l=1,\dots,m$ and $i=0,\dots,n_l$, so that $\{z \in B_{2\eta}(\xi_l): \, \sigma_l(z)=a_l \}=\{\xi_l+a_0^l,\dots,\xi_l+a_{n_l}^l\}$ for all $l=1,\dots,m$. Arguing as for \eqref{Hasigma}, for $l=1,\dots,m$ the function
\begin{eqnarray*}
&& \hspace{-0.3cm} \mathcal{H}_{a,\sigma}^l(z)= \prod_j (z-p_j)^{n_j} \prod_{l' \not= l} (z-\xi_{l'})^{n_{l'}} \prod_{l' \not= l} \prod_{i=0}^{n_{l'}} (z-\xi_{l'}-a_i^{l'})^{-2} \exp\left( 4\pi \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} H^*(z-\xi_{l'}-a_i^{l'}) \right.\\
&&\hspace{-0.3cm} \left. -2\pi\sum_{j=1}^N H^*(z-p_j)+\frac{\pi}{|\Omega|} \sum_{l'=1}^m (n_{l'}+1) (\xi_{l'}-2z) \overline{ \xi_{l'}}
-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2-\frac{2\pi}{|\Omega|}\sum_{l'=1}^m (z-\xi_{l'}) \overline{\sum_{i=0}^{n_{l'}} a_i^{l'}}
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j} \right)
\end{eqnarray*}
is holomorphic near $\xi_l$ and satisfies
\begin{equation} \label{keyrelationgen}
|\mathcal{H}_{a,\sigma}^l(z)|^2= |z-\xi_l|^{-2n_l}
\exp\Big[u_0+8\pi \sum_{i=0}^{n_l} H(z-\xi_l-a_i^l)+8\pi
\sum_{l'\not= l} \sum_{i=0}^{n_{l'}}
G(z,\xi_{l'}+a_i^{l'})-\frac{2\pi}{|\Omega|} \sum_{l'=1}^m
\sum_{i=0}^{n_{l'}} |a_i^{l'}|^2\Big]
\end{equation}
in view of \eqref{hhh}. Setting
$$g_{a_l,\sigma_l}^l(z)=\frac{\sigma_l(z)-a_l}{\prod_{i=0}^{n_l}(z-\xi_l-a_i^l)},\quad z \in B_{2\eta}(\xi_l),$$
and
\begin{equation} \label{cagen}
c_{a,\sigma}^l=\frac{\prod_{l'
\not=l}(\xi_l-\xi_{l'})^{-(n_{l'}+2)}}{(n_l+1)!}\frac{d^{n_l+1}}{dz^{n_l+1}}\left[
\Big(\frac{g^l_{a_l,\sigma_l}(z)
g^l_{0,\sigma_l}(\xi_l)}{g^l_{a_l,\sigma_l}(\xi_l)
g^l_{0,\sigma_l}(z)}\Big)^2
\frac{\mathcal{H}_{a,\sigma}^l(z)}{\mathcal{H}_{a,\sigma}^l(\xi_l)
} \right](\xi_l),
\end{equation}
the aim is to find a solution $\sigma_a =(\sigma_{a,1},\dots, \sigma_{a,m})\in \mathcal{B}_r$ of the system ($l=1,\dots,m$):
\begin{equation} \label{sigmaagen}
\sigma_l(z)= -\left( \int^z
\Big(\frac{g^l_{a_l,\sigma_l}(w)}{g^l_{0,\sigma_l}(w)}\Big)^2
\frac{\mathcal{H}^l_{a,\sigma}(w)}{(w-\xi_l)^{n_l+2}}
\exp\left[-\sum_{l'=1}^m c_{a,\sigma}^{l'}
(w-\xi_{l'})^{n_{l'}+1} \prod_{l''
\not=l'}(w-\xi_{l''})^{n_{l''}+2}\right] dw\right)^{-1},
\end{equation}
where the definition of $c_{a,\sigma}^l$ makes the residue at $\xi_l$ of the integrand in \eqref{sigmaagen} vanish. The function $\sigma_{a,l}$ will vanish only at $\xi_l$ with multiplicity $n_l+1$ and satisfy
\begin{eqnarray} \label{eq sigmaagen}
|\sigma_{a,l}'(z)|^2&=& |\sigma_{a,l}(z)-a_l|^4 \exp\left(u_0+8\pi \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} G(z,\xi_{l'}+a_i^{l'})-\frac{2\pi}{|\Omega|} \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} |a_i^{l'}|^2\right.\\
&&\left. -2\sum_{l'=1}^m \re \Big[c^{l'}_{a,\sigma_{a}}
(z-\xi_{l'})^{n_{l'}+1}
\prod_{l''\not=l'}(z-\xi_{l''})^{n_{l''}+2}\Big] \right) \nonumber
\end{eqnarray}
in view of \eqref{keyrelationgen}.
\noindent Since $\mathcal{H}_{0,\sigma}^l=\mathcal{H}^l$ and $c^l_{0,\sigma}=c_0^l$ for all $l=1,\dots,m$, when $a=0$ the system \eqref{sigmaagen} reduces to $m$ copies of \eqref{1143}, one in each $B_{2\eta}(\xi_l)$, $l=1,\dots,m$, and it is natural to find $\sigma_a$ branching off $(\sigma_0,\dots,\sigma_0)$ for $a$ small via the Implicit Function Theorem. Let us emphasize that each $\sigma_{a,l}$, $l=1,\dots,m$, is close to $\sigma_0\Big|_{B_{2\eta}(\xi_l)}$, a crucial property in order to have $D_0$ defined in terms of a unique $\sigma_0$ (see \eqref{ggg}). Letting $q_{0,l}$ be the function so that $\sigma_0=q_{0,l}^{n_l+1}$ near $\xi_l$, arguing as in Lemma \ref{derivca} we have that
\begin{lem}\label{derivcagen}
Up to taking $\rho$ smaller, there exists a $C^1$-map $a \in
B_\rho(0) \to \sigma_a \in \mathcal{B}_r$ so that $\sigma_a$ solves the system \eqref{cagen}-\eqref{sigmaagen}.
Moreover, the map $a \in B_\rho(0) \to c_a^l:=c^l_{a,\sigma_a}$ is $C^1$
with
\begin{eqnarray}
&& \Gamma^{ll}:=\mathcal{H}(\xi_l) \partial_{a_l} c_a^l
\Big|_{a=0}=\frac{1}{n_l !}\frac{d^{n_l+1}}{dz^{n_l+1}}\bigg[
\mathcal{H}^l(z)f_{n_l+1}^l(z)\bigg] (\xi_l) \label{primo}\\
&&\Upsilon^{ll}:=\mathcal{H}(\xi_l) \partial_{\bar a_l} c_a^l \Big|_{a=0}=-{2\pi(n_l+1)\over |\Omega|
n_l!}\overline{b_{n_l+1}^l}\,\frac{d^{n_l} \mathcal{H}^l }{dz^{n_l}}(\xi_l) \label{secondo}
\end{eqnarray}
and for $j \not= l$
\begin{eqnarray}
&&\Gamma^{lj}:=\mathcal{H}(\xi_l) \partial_{a_j} c_a^l
\Big|_{a=0}=\frac{n_j+1}{(n_l+1)!}\frac{d^{n_l+1}}{dz^{n_l+1}}\bigg[
\mathcal{H}^l(z) \tilde f_{n_j+1}^j(z) \bigg] (\xi_l) \label{terzo}\\
&&\Upsilon^{lj}:=\mathcal{H}(\xi_l) \partial_{\bar a_j} c_a^l \Big|_{a=0}=-{2\pi(n_j+1)\over |\Omega| n_l!}\overline{b_{n_j+1}^j}\,\frac{d^{n_l} \mathcal{H}^l}{dz^{n_l}}(\xi_l) \label{quarto},
\end{eqnarray}
where
$$f_{n+1}^l(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}}\left[2\log \frac{w-q_{0,l}(z)}{q_{0,l}^{-1}(w)-z}+4\pi
H^*(z-q_{0,l}^{-1}(w))\right] (0)\:,\qquad b_{n+1}^l=\frac{1}{(n+1)!}\frac{d^{n+1} q_{0,l}^{-1}}{dw^{n+1}}(0)$$
and for $j \not=l$
$$\tilde f_{n+1}^j(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}} \bigg[-2\log(z-q_{0,j}^{-1}(w))+ 4\pi H^*(z-q_{0,j}^{-1}(w))\bigg](0).$$
\end{lem}
\noindent Letting $n=\min\{n_l :\: l=1,\dots,m\}$, up to re-ordering, assume that $n=n_1=\dots=n_{m'}<n_l$ for all $l=m'+1,\dots,m$, where $1\leq m'\leq m$. The matrix $A$ in Theorem \ref{mainbb} is the $2m \times 2m$-matrix of the form
\begin{equation} \label{matrixA}
A=\left( \begin{array}{ccc} A_{1,2}^{1,2}& \dots & A_{1,2}^{2m-1,2m}\\
\vdots& \vdots &\vdots\\
A_{2m-1,2m}^{1,2}& \dots& A_{2m-1,2m}^{2m-1,2m} \end{array} \right),
\end{equation}
where the $2\times 2$-blocks are given by
$$A_{2l-1,2l}^{2l'-1,2l'}=\left(\begin{array}{cc} \re [\Gamma^{ll'}+\Upsilon^{ll'}+\frac{n(2n+3)}{n+1} D_0 \frac{|\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}}{\sum_{j=1}^{m'}|\mathcal{H}^j(\xi_j)|^{-\frac{2}{n+1}}} \delta_{ll'}]& \im [\Upsilon^{ll'}-\Gamma^{ll'}]\\
\im [\Gamma^{ll'}+\Upsilon^{ll'}]& \re [\Gamma^{ll'}-\Upsilon^{ll'}-\frac{n(2n+3)}{n+1} D_0 \frac{|\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}}{ \sum_{j=1}^{m'}|\mathcal{H}^j(\xi_j)|^{-\frac{2}{n+1}}} \delta_{ll'}] \end{array}\right)$$
when $l=1,\dots,m'$ and by
$$A_{2l-1,2l}^{2l'-1,2l'}=\left(\begin{array}{cc} \re [\Gamma^{ll'}+\Upsilon^{ll'}]& \im [\Upsilon^{ll'}-\Gamma^{ll'}]\\
\im [\Gamma^{ll'}+\Upsilon^{ll'}]& \re [\Gamma^{ll'}-\Upsilon^{ll'}] \end{array}\right)$$
when $l=m'+1,\dots,m$, with $\Gamma^{ll'}$ and $\Upsilon^{ll'}$ given by \eqref{primo}, \eqref{terzo} and \eqref{secondo}, \eqref{quarto}, respectively, and $\delta_{ll'}$ the Kronecker symbol.
\noindent Arguing as in Lemma \ref{expPU}, for $l=1,\dots,m$ we have that
\begin{eqnarray*}
PU_{\delta_l,a_l,\sigma_l}&=&\chi(|z-\xi_l|) \left[U_{\delta_l,a_l,\sigma_l}-\log (8 \delta^2_l)+4 \log |g_{a_l,\sigma_l}^l| \right]\\
&&+8\pi \sum_{i=0}^{n_l} \left[ \frac{1}{2\pi} (\chi(|z-\xi_l|)-1) \log |z-\xi_l-a_i^l|+H(z-\xi_l-a_i^l)\right]+\Theta_{\delta_l,a_l,\sigma_l}+2\delta^2_l f_{a_l,\sigma_l}+O(\delta^4_l)
\end{eqnarray*}
and
\begin{eqnarray} \label{1139}
PU_{\delta_l,a_l,\sigma_l}=8\pi \sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)+\Theta_{\delta_l,a_l,\sigma_l}+2\delta^2_l \left( f_{a_l,\sigma_l}-{\chi(|z-\xi_l|)\over|\sigma_l(z)-a_l|^2}\right)+O(\delta^4_l)
\end{eqnarray}
do hold in $C(\overline{\Omega})$ and $C_{\text{loc}}(\overline{\Omega} \setminus\{\xi_l\})$, respectively, uniformly for $|a|< \rho$ and $\sigma_l \in \mathcal{B}_r^l$, where
$$\Theta_{\delta_l,a_l,\sigma_l}=-\frac{1}{|\Omega|}\int_\Omega \chi(|z-\xi_l|) \log {|\sigma_l(z)-a_l|^4\over
(\delta_l^2+|\sigma_l(z)-a_l|^2)^2}$$ and $f_{a_l,\sigma_l}$ is a smooth function in $z$ (with a uniform control in $a_l$ and $\sigma_l$ on it and its derivatives in $z$).
Choosing $\sigma_l=\sigma_{a,l}$ and summing up over $l=1,\dots,m$, by \eqref{eq sigmaagen} for our approximating function there hold
\begin{eqnarray}\label{ieagr}
W&=& U_{\delta_l,a_l,\sigma_l}-\log (8 \delta^2_l)+\log |\sigma_l'|^2
-u_0+\frac{2\pi}{|\Omega|} \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} |a_i^{l'}|^2+\Theta^l(a,\delta) \\
&&+2 \re \Big[c^l_{a,\sigma_l} (z-\xi_l)^{n_l+1}
\prod_{l'\not=l}(z-\xi_{l'})^{n_{l'}+2}\Big]
+O\Big(|z-\xi_l|^{n_l+2}\sum_{l'\ne l}|c^{l'}_{a,\sigma_{l'}}|\Big)+\sum_{l'=1}^m O(\delta^2_{l'}
|z-\xi_l|+\delta^4_{l'})\nonumber
\end{eqnarray}
and
$$W= 8\pi \sum_{l=1}^m \sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)+O\bigg(
\sum_{l'=1}^m \delta^2_{l'} \log|\delta_{l'}|\bigg)$$
uniformly in $B_\eta(\xi_l)$ and in $\Omega \setminus \cup_{l=1}^m B_\eta(\xi_l)$, respectively, where
$$\Theta^l(a,\delta):=\sum_{l'=1}^m [\Theta_{\delta_{l'},a_{l'},\sigma_{l'}}+\delta^2_{l'} f_{a_{l'},\sigma_{l'}}(\xi_l)].$$
As a consequence, we have that
$$\int_\Omega e^{u_0+W}= \sum_{l'=1}^m \left[\int_{B_\rho(0)} \frac{n_{l'}+1}{(\delta_{l'}^2+|y-a_{l'}|^2)^2} +o\Big(\frac{1}{\delta_{l'}^2}\Big)\right]= \pi \sum_{l'=1}^m \frac{n_{l'}+1}{\delta_{l'}^2} [1+o(1)],$$
and then near $\xi_l$ there holds
$$4\pi N \frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}=4\pi N \frac{|\sigma_l'|^2 e^{U_{\delta_l,a_l,\sigma_l}+O(|z-\xi_l|^{n_l+1})+o(1)}}{8\pi \sum_{l'=1}^m (n_{l'}+1) \delta_l^2 \delta_{l'}^{-2}(1+o(1))}.$$
In order to construct an $N$-condensate $(\mathcal{A}_\epsilon,\phi_\epsilon)$ which satisfies
\eqref{magconc} as $\epsilon \to 0$, we look for a solution $w_\epsilon$ of \eqref{3} in the form $w_\epsilon=\displaystyle \sum_{l=1}^m PU_{\delta_l,a_l,\sigma_l}+\phi$, where $\phi$ is a small remainder term and $\delta_l=\delta_l(\epsilon)$, $a_l=a_l(\epsilon)$ are suitable small parameters, so that
\begin{eqnarray*} &&4\pi N \frac{e^{u_0+w_\epsilon}}{\int_\Omega e^{u_0+w_\epsilon}}+
\frac{64 \pi^2N^2 \epsilon^2 \int_\Omega
e^{2u_0+2w_\epsilon}}{(\int_\Omega e^{u_0+w_\epsilon}+\sqrt{(\int_\Omega
e^{u_0+w_\epsilon})^2-16\pi N\epsilon^2\int_\Omega
e^{2u_0+2w_\epsilon}})^2}\left(\frac{e^{u_0+w_\epsilon}}{\int_\Omega
e^{u_0+w_\epsilon}} -\frac{e^{2u_0+2w_\epsilon}}{\int_\Omega
e^{2u_0+2w_\epsilon}}\right) \\
&&\hspace{3cm} \rightharpoonup 8\pi \sum_{l=1}^m (n_l+1) \delta_{\xi_l}
\end{eqnarray*}
in the sense of measures as $\epsilon \to 0$. Since $|\sigma_l'|^2 e^{U_{\delta_l,a_l,\sigma_l}} \rightharpoonup 8\pi (n_l+1) \delta_{\xi_l}$ as $\delta_l,a_l \to 0$, to have the correct concentration property we need that
$$8\pi \sum_{l'=1}^m (n_{l'}+1) \delta_l^2 \delta_{l'}^{-2} \to 4\pi N$$
for all $l=1,\dots,m$, and then $\frac{\delta_l}{\delta_{l'}} \to 1$ for all $l,l'=1,\dots,m$ in view of \eqref{hhh}. It is then natural to introduce just one parameter $\delta$ and to choose the $\delta_l$'s as
\begin{equation}\label{repla2}
\delta_l=\delta, \qquad l=1,\dots,m.
\end{equation}
\noindent We restrict our attention to the case $c_0^l=0$ for all $l=1,\dots,m$, which is necessary in our context and is simply a re-formulation of the assumption that $\mathcal{H}_0$ has zero residues at $\xi_1,\dots,\xi_m$. As in Theorem \ref{main}, we will work in the parameter range
$$a_l=o(\delta),\qquad \delta \sim \epsilon^{n+1\over n+2}$$
as $\epsilon\to 0^+$. Since then
$$K^{-1} \le \frac{\delta^2+|z-\xi_l|^{2n_l+2}}{\delta^2+|\sigma_l(z)
-a_l|^2} \le K, \qquad K^{-1}|z-\xi_l|^{2n_l}\leq |\sigma_l'(z)|^2 \leq K |z-\xi_l|^{2n_l}$$
in $B_{2\eta}(\xi_l)$ for all $\sigma_l \in \mathcal{B}_r^l$ and $l=1,\dots,m$, where $K>1$, the norm \eqref{wn} can now simply be defined as
$$\| h \|_*=\sup_{z\in \Omega}\left[ \sum_{l=1}^m\frac{\delta^{\gamma}
\left(|z-\xi_l|^{2n_l}+\delta^{\frac{2n_l}{n_l+1}}\right)}{(\delta^2+|z-\xi_l|^{2n_l+2})^{1+\frac{\gamma}{2}}}\right]^{-1}\;
|h(z)|$$
for any $h\in L^\infty(\Omega)$, where $0<\gamma<1$ is a small
fixed constant. In order to simplify notations, we set $U_l=U_{\delta_l,a_l,\sigma_l}$, $c_a^l=c_{a,\sigma_l}^l$, $\Theta_l=\Theta_{\delta_l,a_l,\sigma_l}$ and $f_l=f_{a_l,\sigma_l}$. We have that
\begin{lem}\label{estrrm}
There exists a constant $C>0$ independent of $\delta$ such that
\begin{equation}\label{erem}
\|R\|_*\le C\delta^{2-\gamma}.
\end{equation}
\end{lem}
\begin{proof}
We sketch the proof of \eqref{erem}, following ideas used in the proof of Theorem \ref{estrr01550}. Through the change of variable $y=\sigma_l(z)$ in $\sigma_l^{-1}(B_\rho(0))$, by Lemma \ref{derivcagen}, \eqref{ieagr}, \eqref{repla2} and $c_0^l=0$ for all $l=1,\dots,m$ we find that
\begin{eqnarray*}
&&{8\delta^2\over e^{{2\pi\over|\Omega|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+\Theta^l(a,\delta)}}\int_{\sigma^{-1}_l(B_{\rho}(0))} e^{u_0+W}
=\int_{\sigma^{-1}_l(B_{\rho}(0))} |\sigma_l'|^{2}
e^{U_{l}+O(|z-\xi_l|^{n_l+1} \sum_{l'=1}^m |c_{a}^{l'}|+\delta^2|z-\xi_l|+\delta^4)}\\
&&=8\pi(n_l+1)
- \int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{8(n_l+1) \delta^2}{|y|^4}+ O\Big(\|a\|^2+\delta\|a\|+\delta^{2n_l+3\over n_l+1}\Big),
\end{eqnarray*}
where $\|a\|^2=\displaystyle \sum_{l=1}^m|a_l|^2$. Setting ${\Omega}_\rho=\cup_{l=1}^m \sigmama_l^{-1}(B_\rho(0))$ we get that
\begin{eqnarray*}
&&
{8\delta^2\over e^{{2\pi\over|\Omega|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+\sum_{l'=1}^m\Theta_{l'}}}\,\int_\Omega e^{u_0+W} = \sum_{l=1}^m e^{\delta^2\sum_{l'=1}^mf_{l'}(\xi_l)}\bigg[8\pi(n_l+1)-\int_{\mathbb{R}^2 \setminus B_{\rho}(0)}
\frac{8(n_l+1) \delta^2}{|y|^4}\\
&&+O\Big(\|a\|^2+\delta\|a\|+\delta^{2n_l+3\over n_l+1}\Big)\bigg]+8\delta^2 \int_{\Omega \setminus\Omega_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l} G(z,\xi_l+a_{i}^l)}+O(\delta^4|\log \delta| +\delta^2\|a\|^{\frac{2}{\max_l n_l+1}})\\
&&=\sum_{l=1}^m\bigg[8\pi(n_l+1)+8\pi(n_l+1)\delta^2\sum_{l'=1}^m f_{l'}(\xi_l)-8(n_l+1)\delta^2\int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{1}{|y|^4}\bigg]\\
&&+8\delta^2 \int_{\Omega \setminus \Omega_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)}+o(\delta^2)=\;4\pi N\left[1+{2\over N}\delta^2D_a+{2\over
N}\delta^2\sum_{l,l'=1}^m(n_l+1)f_{l'}(\xi_l)+o(\delta^2)\right]
\end{eqnarray*}
in view of \eqref{hhh}, where $D_a$ is given by
$$\pi D_a=\int_{\Omega \setminus \Omega_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l}G(z,\xi_l+a_i^l)} -
\sum_{l=1}^m (n_l+1)\int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{1}{|y|^4}.$$
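The exterior integral subtracted in the definition of $D_a$ is explicit in polar coordinates, $\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{dy}{|y|^4}=2\pi\int_\rho^{+\infty}\frac{dr}{r^3}=\frac{\pi}{\rho^2}$. The following sympy snippet is a quick symbolic sanity check of this value (illustrative only, not part of the argument):

```python
import sympy as sp

r, rho = sp.symbols('r rho', positive=True)
# In polar coordinates dy = r dr dtheta, so the integral of |y|^{-4}
# over the exterior of B_rho(0) reduces to a one-dimensional integral.
exterior = 2 * sp.pi * sp.integrate(r**(-4) * r, (r, rho, sp.oo))
assert sp.simplify(exterior - sp.pi / rho**2) == 0
```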
Hence, for $|z-\xi_l| \leq \eta$ we have that
\begin{eqnarray}\label{impm}
&&\Delta W+4\pi N\left( \frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{1}{|\Omega|}\right)=|\sigma_l'|^2 e^{U_l} \bigg[2\hbox{Re }\Big[c_a^l(z-\xi_l)^{n_l+1}\prod_{l' \ne l}(z-\xi_{l'})^{n_{l'}+2}\Big]\\
&&+\delta^2\sum_{l'=1}^m f_{l'}(\xi_l)-{2D_a\over N}\delta^2-{2\delta^2\over N}\sum_{j,l'=1}^m(n_j+1)
f_{l'}(\xi_j)+O(\|a\| |z-\xi_l|^{n_l+2}+\delta^2|z-\xi_l|)+o(\delta^2)\bigg]
+O(\delta^2)\nonumber
\end{eqnarray}
as $\delta \to 0$, in view of \eqref{hhh} and $\int_\Omega \chi_l
|\sigma_l'|^{2} e^{U_{l}}=8\pi(n_l+1)+O(\delta^2)$ for all $l=1,\dots,m$. For $z \in
\Omega \setminus \cup_{l=1}^mB_\eta(\xi_l)$, we have that
\begin{eqnarray}
\Delta W+4\pi N\left( \frac{e^{u_0+W} }{\int_\Omega e^{u_0+W}
}-\frac{1}{|\Omega|}\right)=O(\delta^2). \label{impextm}
\end{eqnarray}
On the other hand, arguing as in \eqref{const1}, we have that
\begin{eqnarray*}
{64\delta^{4}\over
e^{{4\pi\over|\Omega|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+2\sum_{l'=1}^m\Theta_{l'}}}\int_\Omega
e^{2u_0+2W}
=64\sum_{l=1}^{m'}{(n+1)^3\over |\alpha_{a,l}|^{{2\over
n+1}}\delta^{{2\over n+1}} }\int_{\mathbb{R}^2}
\frac{|y+a_l \delta^{-1} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4}
+O(\delta^{-\frac{1}{n+1}}),
\end{eqnarray*}
where $\ds\alpha_{a,l}=\lim_{z\to \xi_l}{(z-\xi_l)^{n_l+1}\over \sigma_l(z)}$.
Recall that $n=\min\{n_l: l=1,\dots,m\}=n_1=\dots=n_{m'}<n_l$ for all $l=m'+1,\dots,m$. Setting
$$\ds\tilde
D_{a,\delta}=\sum_{l=1}^{m'}{(n+1)^3\over|\alpha_{a,l}|^{{2\over
n+1}}\delta^{{2\over n+1}}}\int_{\mathbb{R}^2}
\frac{|y+a_l\delta^{-1} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4}\,dy,$$
we have that
$$\frac{4\pi N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}=64\epsilon^2\tilde D_{a,\delta}
+o(\epsilon^2\delta^{-\frac{2}{n+1}}),$$
and there hold
\begin{equation}\label{eps4m}
\frac{4\pi
N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}\left(\frac{e^{u_0+W}}{\int_\Omega
e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega
e^{2u_0+2W}}\right)=|\sigma_l'|^{2}e^{U_l}\bigg[{16\epsilon^2\over
\pi N}\tilde
D_{a,\delta}-\epsilon^2|\sigma_l'|^{2}e^{U_l}+o(\epsilon^2\delta^{-2\over
n+1})\bigg] \end{equation}
in $B_\eta(\xi_l)$, $l=1,\dots,m$, and
\begin{equation}\label{eps5m}
\frac{4\pi
N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}\left(\frac{e^{u_0+W}}{\int_\Omega
e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega
e^{2u_0+2W}}\right)=O(\epsilon^2 \delta^{\frac{2n}{n+1}})
\end{equation}
in $\Omega \setminus \cup_{l=1}^m B_\eta(\xi_l)$. Therefore, we conclude that
$\|R\|_*=O(\delta^{2-\gamma}+\|a\|^2+\epsilon^2\delta^{-{2\over n+1}})$ and
\eqref{erem} follows. \qed
\end{proof}
\noindent As mentioned in section $4$, when we look for a solution of \eqref{3} in the
form $w=W+\phi$, we are led to study \eqref{ephi}. In order to state the invertibility of the linear
operator $L$ in a suitable functional setting, for $l=1,\dots,m$ let us introduce
the functions:
$$Z_{0l}(z)=\frac{\delta^2-|\sigma_l(z)-a_l|^2}{\delta^2+|\sigma_l(z)-a_l|^2},\quad Z_l(z)=
\frac{\delta(\sigma_l(z)-a_l)}{\delta^2+|\sigma_l(z)-a_l|^2}\qquad z\in B_{2\eta}(\xi_l).$$
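In the model case $\sigma_l(z)=z$, $a_l=0$ (so that the weight $|\sigma_l'|^2 e^{U_l}$ reduces to the standard Liouville weight $\frac{8\delta^2}{(\delta^2+|z|^2)^2}$), these are the classical kernel elements of the linearized Liouville operator, generated by dilations ($Z_{0l}$) and translations ($Z_l$). A sympy verification of this flat-case identity (illustrative only, not part of the argument):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
d = sp.symbols('delta', positive=True)
r2 = u**2 + v**2
pot = 8 * d**2 / (d**2 + r2)**2            # Liouville weight e^{U_{delta,0}}
lap = lambda f: sp.diff(f, u, 2) + sp.diff(f, v, 2)

Z0 = (d**2 - r2) / (d**2 + r2)             # dilation direction
Z1 = d * u / (d**2 + r2)                   # translation directions
Z2 = d * v / (d**2 + r2)
for Z in (Z0, Z1, Z2):
    # each kernel element satisfies Delta Z + e^U Z = 0
    assert sp.simplify(lap(Z) + pot * Z) == 0
```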
Also, let $PZ_{0l}$ and $PZ_l$ be the unique solutions with zero average of
$$\Delta PZ_{0l} =\chi_l \Delta
Z_{0l}-\frac{1}{|\Omega|}\int_\Omega \chi_l \Delta Z_{0l},\qquad \Delta
PZ_l =\chi_l \Delta Z_l-\frac{1}{|\Omega|}\int_\Omega \chi_l \Delta Z_l$$
where $\chi_l(z):=\chi(|z-\xi_l|)$, and set $PZ_0=\displaystyle \sum_{l=1}^m
PZ_{0l}$. As in Propositions \ref{prop4.1}-\ref{nlp}, it is possible to prove:
\begin{prop} Let $M_0>0$. There exists $\eta_0>0$ small such that for any $0<\delta\leq\eta_0$,
$|\log\delta|^2\epsilon^2\leq \eta_0 \delta^{2\over n+1}$ and $\|a\|\leq M_0 \delta$ there is a unique solution
$\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$
and $d_l=d_l(\delta,a)\in{\mathbb C}$, $l=1,\dots,m$, to
$$\left\{\begin{array}{ll}
L(\phi) =-[R+N(\phi)] + d_0 \Delta PZ_{0}+\displaystyle \sum_{l=1}^m\re[d_l \Delta PZ_l] &\text{in }\Omega\\
\int_\Omega\phi=\int_\Omega \phi \Delta PZ_l= 0&l=0,\dots,m.
\end{array} \right. $$
Moreover, the map $(\delta,a)\mapsto \phi(\delta,a)$ is $C^1$ with
\begin{equation}\label{estphim}
\|\phi\|_\infty\le C \delta^{2-\sigma}|\log \delta|.
\end{equation}
\end{prop}
\noindent The function $W+\phi$ is a solution of (\ref{3}) if we adjust $\delta$ and $a$ so as to have
$d_l(\delta,a)=0$ for all $l=0,1,\dots,m$. Similarly to Lemma \ref{1039}, we have that
\begin{lem}
There exists $\eta_0>0$ such that if $0<\delta\leq \eta_0$, $\|a\|\leq \eta_0 \delta$ and
\begin{equation} \label{solvem}
\int_\Omega (L(\phi)+N(\phi)+R) PZ_l=0
\end{equation}
does hold for all $l=0,\dots,m$, then $W+\phi$ is a solution of \eqref{3}, i.e.
$d_l(\delta,a)=0$ for all $l=0,\dots,m$.
\end{lem}
\noindent
Since there hold the expansions
\begin{equation*}
PZ_{0}=\sum_{l=1}^m\bigg[\chi_l(Z_{0l}+1)-{1\over|\Omega|}\int_\Omega
\chi_l(Z_{0l}+1)\bigg]+O(\delta^2)
\:,\quad PZ_l=\chi_l Z_l-{1\over|\Omega|}\int_\Omega \chi_l Z_l+O(\delta)\:\:l=1,\dots,m \end{equation*}
in $C(\bar \Omega)$, arguing as in Proposition \ref{1219}, by \eqref{hhh} and \eqref{impm}-\eqref{estphim} we can deduce the following expansion for \eqref{solvem}:
\begin{lem}
Assume $c_0^l=0$ for all $l=1,\dots,m$ and $\|a\|\leq \eta_0 \delta$. The following
expansions do hold as $\epsilon \to 0$:
\begin{eqnarray*}
\int_\Omega (L(\phi)+N(\phi)+R) PZ_0&=& -8\pi D_0 \delta^2
+64(n+1)^{\frac{3n+5}{n+1}} \epsilon^2 \delta^{-{2 \over n+1}}
\sum_{l=1}^{m'} |\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}
\int_{\mathbb{R}^2}
\frac{(|y|^2-1)|y+\frac{a_l}{\delta}|^{\frac{2n}{n+1}}}{(1+|y|^2)^5} dy\\
&&+ o(\delta^2+\epsilon^2\delta^{-{1\over n+1}})+O(\epsilon^4
\delta^{-\frac{2}{n+1}}|\log \delta|^2+\epsilon^8 \delta^{-\frac{4}{n+1}}|\log \delta|^2 )\end{eqnarray*}
and
\begin{eqnarray*}
\int_\Omega (R+L(\phi)+N(\phi)) PZ_l &=& 4 \pi \delta
\sum_{l'=1}^m (\overline{\Upsilon^{ll'}} a_{l'}+ \overline{\Gamma^{ll'}} \bar
a_{l'})-64 (n+1)^{\frac{3n+5}{n+1}} \epsilon^2 \delta^{-{2\over n+1}} |\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}
\chi_M(l) \int_{\mathbb{R}^2}
\frac{|y+\frac{a_l}{\delta}|^{\frac{2n}{n+1}}y}{(1+|y|^2)^5} dy \\
&&+o(\delta^2+\epsilon^2 \delta^{-\frac{2}{n+1}})+O(\epsilon^4
\delta^{-\frac{2}{n+1}}|\log \delta|^2+\epsilon^8 \delta^{-\frac{4}{n+1}}|\log \delta|^2 ),
\end{eqnarray*}
where $D_0$ is defined in \eqref{ggg} and $\chi_M$ is the
characteristic function of the set $M=\{1,\dots,m'\}$.
\end{lem}
\noindent Finally, arguing as in the proof of Theorem \ref{main}, we can establish Theorem \ref{mainbb} thanks to $D_0<0$ and the invertibility of the matrix $A$.
\noindent Let us now discuss some examples with $m\geq 2$. As already explained at the beginning of section \ref{examples}, we can consider the case $\xi_1,\dots,\xi_m \in \Omega$ and $p_j \in \bar \Omega$ for all $j$. In general, it is very difficult to establish the sign of $D_0$ as required in \eqref{ggg}. The key idea is to start from a configuration of the vortex points $\{p_1,\dots,p_N\}$ which is obtained in a periodic way from a simpler configuration having just one concentration point. In this case, \eqref{ggg} easily follows, but Theorem \ref{mainbb} is not really needed: one can use Theorem \ref{main} to obtain a solution with such a simpler configuration and then repeat it periodically. We then slightly move some of the vortex points in order to:
\begin{itemize}
\item keep zero residue of the corresponding $\mathcal{H}_0$ at each concentration point;
\item break down the periodicity of the configuration.
\end{itemize}
In this way, assumption \eqref{ggg} is still valid but Theorem \ref{main} is no longer applicable in the trivial way we explained above; we now really need to resort to Theorem \ref{mainbb}. To exhibit some concrete examples, let us focus for simplicity on the case $m=2$; the general situation can be dealt with in the same way. Let $\Omega$ be a rectangle generated by $\omega_1=a$ and $\omega_2=ib$, $a,b>0$, and let $p_1,p_2,p_3$ be the three half-periods. Assume that the vortex set is $\{-\frac{p_1}{2}, \frac{p_1}{2},0,p_1,p_2,p_3\}$, and the concentration points are $\xi_1=-\frac{p_1}{2}$, $\xi_2=\frac{p_1}{2}$ with multiplicity $n$. Supposing that $0$, $p_1$ have even multiplicity $n_1$ and $p_2,p_3$ have even multiplicity $n_2$ with $n_1+n_2=n+2$, we have that such a configuration is not only $\omega_1=2p_1$ periodic but also $p_1$ periodic: it can be thought of as a double repetition (in a $p_1$-periodic way) of the vortex configuration $\{-\frac{p_1}{2}, 0,p_2\}$ in $\Omega_-:=[-\frac{a}{2},0]\times [-\frac{b}{2},\frac{b}{2}]$ with corresponding multiplicities $n$, $n_1$ and $n_2$. If $n$ is even, it is easy to see that $\frac{d^{n+1} \mathcal{H}^i}{d z^{n+1}}(\xi_i)=0$ for $i=1,2$, since the given vortex configuration is even with respect to $\xi_1$ and $\xi_2$. Notice that this is still true if we replace $0$ and $p_1$ by $-it$ and $p_1+it$, respectively, for $t \in \mathbb{R}$, provided they keep the same multiplicity $n_1$. Arguing as in \eqref{1846}, notice that $D_0$ can be written as
$$\pi D_0=\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_- \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right]+\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_+ \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right] - 2(n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right),$$
where $\Omega_+:=[0,\frac{a}{2}]\times [-\frac{b}{2},\frac{b}{2}]$. Since
$$u_0+8\pi(n+1)G(z,\xi_1)+8\pi(n+1)G(z,\xi_2)=-4\pi n_1 \tilde G(z,0)-4\pi n_2 \tilde G(z,p_2)+4\pi(n+2) \tilde G(z,\xi_1)$$
in $\Omega_-$, where $\tilde G(z,p)$ is the Green function of the torus $\Omega_-$ with pole at $p$, the function $\mathcal{H}_0$ can be expressed as in \eqref{explicitH} in terms of the Weierstrass function of $\Omega_+$ and the points $-\frac{p_1}{2}$, $0$ and $p_2$. Arguing exactly as in section \ref{examples}, we have that
$$\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_- \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right]-(n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right)<0$$
provided the multiplicity $n_2$ at the corner of $\Omega_-$ is such that $\frac{n_2}{2}$ is odd. Arguing similarly in $\Omega_+$, we get that $D_0<0$ as soon as $\frac{n_2}{2}$ is an odd number. The example then follows by replacing $0$, $p_1$ with $-it$, $p_1+it$ for $t$ small, since the corresponding $D_{0,t} \to D_0$ as $t \to 0$.
\begin{appendices}
\section{\hspace{-0.5cm}: The construction of $\sigma_a$}
Letting $\sigma_0$ be the solution of \eqref{eq sigma0} of the form \eqref{sigma0}, where $c_0$ is given by \eqref{c0}, we have that $Q_0(z)=\frac{\sigma_0(z)}{z^{n+1}}$ is a holomorphic function near $z=0$ with $Q_0(0)=\frac{n+1}{\mathcal{H}(0)}$ (see \eqref{0942}). Since $Q_0(0)\not=0$, the $(n+1)-$root $Q_0^{\frac{1}{n+1}}$ of $Q_0$ is a well-defined holomorphic function locally at $z=0$, and it makes sense to define $q_0(z)=z Q_0^{\frac{1}{n+1}}(z)$ near $z=0$.
\noindent For $\sigma \in \mathcal{B}_r$, where $\mathcal{B}_r$ is given in \eqref{setB}, in a similar way we have that $Q(z)=\frac{\sigma(z)}{z^{n+1}}$ is a holomorphic function near $z=0$ with $|\frac{Q(z)}{Q_0(z)}-1| \leq r$ for all $z$. Since in particular $|Q(z)-\frac{n+1}{\mathcal{H}(0)}|\leq r|Q_0(z)|+|Q_0(z)-\frac{n+1}{\mathcal{H}(0)}|$, we can find $r$ and $\eta>0$ small so that $q(z)=z Q^{\frac{1}{n+1}}(z)$ is a well-defined holomorphic function in $B_{3\eta}(0)$ for all $\sigma \in \mathcal{B}_r$, with $\sigma(z)=q^{n+1}(z)$ for all $z \in B_{3\eta}(0)$. Since $q'(0)=Q^{\frac{1}{n+1}}(0)$ satisfies $|q'(0)|\geq [\frac{(1-r)(n+1)}{|\mathcal{H}(0)|}]^{\frac{1}{n+1}}>0$, $q$ is locally bi-holomorphic at $0$. In order to have uniform invertibility of $q$ for all $\sigma \in \mathcal{B}_r$, let us evaluate the following quantity:
\begin{eqnarray*}
|1-\frac{q'(z)}{q'(0)}|&\leq& \frac{\sup_{B_{\eta}(0)}|q''|}{|q'(0)|} |z|\leq
\frac{2}{\eta^2}[\frac{(1-r)(n+1)}{|\mathcal{H}(0)|}]^{-\frac{1}{n+1}} \left(\sup_{B_{2\eta}(0)}|q|\right) |z| \\
&\leq& \frac{2}{\eta^2} \left(\frac{|\mathcal{H}(0)|}{n+1}\right)^{\frac{1}{n+1}} (\frac{1+r}{1-r})^{\frac{1}{n+1}} \left(\sup_{B_{2\eta}(0)}|q_0|\right) |z|
\end{eqnarray*}
for all $z \in B_\eta(0)$, in view of Cauchy's inequality and $|\frac{\sigma(z)}{\sigma_0(z)}-1|= |\frac{q^{n+1}(z)}{q^{n+1}_0(z)}-1| \leq r$ for all $z \in B_{3\eta}(0)$. Therefore, we can find $\rho_1$ small so that $|1-\frac{q'(z)}{q'(0)}|\leq \frac{1}{2}$ for all $z \in B_{\rho_1^{\frac{1}{n+1}}}(0)$ and $2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}\leq 2\rho_1^{\frac{1}{n+1}}[\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1-r)^{-\frac{1}{n+1}}\leq 2 \eta$, uniformly for $\sigma\in \mathcal{B}_r$. Hence, the inverse map $q^{-1}$ of $q$ is defined from $B_{\rho_1^{\frac{1}{n+1}}}(0)$ into $B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$: for all $y \in B_{\rho_1^{\frac{1}{n+1}}}(0)$ there exists a unique $z \in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$ so that $q(z)=y$, given by $z=q^{-1}(y)$. Since $\sigma=q^{n+1}$ in $B_{3\eta}(0)$, we have that
$$\hbox{Card } \{z\in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0): \sigma(z)=y\}=n+1 \qquad \forall\: y \in B_{\rho_1}(0)\setminus \{0\},$$
for all $\sigma\in \mathcal{B}_r$. Since
$$|\sigma(z)|\geq (1-r) \inf_{\tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)} |\sigma_0(z)|
\geq (1-r) \inf_{\tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} [\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1+r)^{-\frac{1}{n+1}}}(0)} |\sigma_0(z)|>0 $$
for all $z \in \tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$, we can find $\rho$ ($\leq \rho_1$) small so that
$$\hbox{Card } \{z\in \tilde \Omega: \sigma(z)=y\}=\hbox{Card } \{z\in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0): \sigma(z)=y\}=n+1 \qquad \forall\: y \in B_{\rho}(0)\setminus \{0\},$$
for all $\sigma\in \mathcal{B}_r$. Since
$$\sigma^{-1}(B_\rho(0)) \subset B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}} (0) \subset B_{2\rho_1^{\frac{1}{n+1}}[\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1-r)^{-\frac{1}{n+1}} }(0) \subset B_{2\eta}(0),$$
for all $z\in \partial \sigma^{-1}(B_\rho(0))=\sigma^{-1}(\partial B_\rho(0))$ and $\sigma \in \mathcal{B}_r$ we have that
$$\frac{|z|^{n+1}}{\rho}=\frac{|z|^{n+1}}{|\sigma(z)|}=\frac{1}{|Q(z)|}\geq \frac{1}{(1+r)} \inf_{ B_{2\eta}(0)} |Q_0(z)|^{-1} >0,$$
since $q_0$ is well-defined in $B_{3\eta}(0)$. We can summarize the above discussion as follows:
\begin{lem} \label{gomme}
There exist $r,\:\rho >0$ such that $q(z)=z Q(z)^{\frac{1}{n+1}}$ is a locally bi-holomorphic map with $\sigma=q^{n+1}$ and inverse $q^{-1}$ defined on $B_{\rho^{\frac{1}{n+1}}}(0)$, for all $\sigma\in \mathcal{B}_r$. In particular, there exists a neighborhood $V$ of $0$ so that, for all $\sigma \in \mathcal{B}_r$, there hold $V \subset \sigma^{-1}(B_\rho(0))$ and $\sigma:\sigma^{-1}(B_\rho(0)) \to B_\rho(0)$ is an $(n+1)$-to-$1$ map in the following sense:
$$\hbox{Card } \{z\in \tilde \Omega:\sigma(z)=y\}=n+1 \qquad \forall\: y \in B_\rho(0)\setminus \{0\}.$$
\end{lem}
\noindent For $|a|<\rho$ and $\sigma\in \mathcal{B}_r$, by Lemma \ref{gomme} we have that
$$\sigma^{-1}(a)=\{z \in \tilde \Omega: \: \sigma(z)=a\}=\{a_0,\dots,a_n \},$$
where $a_k=q^{-1}(\hat a_k)$ and $\hat a_k$, $k=0,\dots,n$, are the $(n+1)-$roots of $a$, and then
$g_{a,\sigma}(z):=\displaystyle \frac{\sigma(z)-a}{\prod_{k=0}^{n}(z-a_k)} \in \mathcal{M}(\overline{\Omega})$ is a non-vanishing function. We are now in position to prove the following.
\begin{lem}\label{derivca}
Up to taking $\rho$ smaller, there exists a $C^1-$map $a \in
B_\rho(0) \to \sigma_a \in \mathcal{B}_r$ so that $\sigma_a$ solves \eqref{sigmaa}-\eqref{ca}.
Moreover, the map $a \in B_\rho(0) \to c_a=c_{a,\sigma_a}$ is $C^1$
with
\begin{eqnarray*}
&& \Gamma:=\mathcal{H}(0) \partial_a c_a
\Big|_{a=0}=\frac{1}{n!}\frac{d^{n+1}}{dz^{n+1}}\bigg[
\mathcal{H}(z)f_{n+1}(z)\bigg] (0)\\
&&\Upsilon:=\mathcal{H}(0) \partial_{\bar a} c_a \Big|_{a=0}=-{2\pi(n+1)\over |\Omega|
n!}\overline{b_{n+1}}\,\frac{d^n \mathcal{H} }{dz^n}(0),
\end{eqnarray*}
where
$$f_{n+1}(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}}\left[2\log \frac{w-q_0(z)}{q_0^{-1}(w)-z}+4\pi
H^*(z-q_0^{-1}(w))\right] (0)\:,\qquad b_{n+1}=\frac{1}{(n+1)!}\frac{d^{n+1} q_0^{-1}}{dw^{n+1}}(0).$$
\end{lem}
\begin{proof}
Given $c_{a,\sigma}$ as in \eqref{ca}, equation \eqref{sigmaa}
is equivalent to finding zeroes
of the map $\Lambda: (a,\sigma) \in B_\rho(0) \times \mathcal{B}_r \to
\mathcal{M}(\overline{\Omega})$ given as
$$\Lambda(a,\sigma)=
\sigma(z)+\left[ \int^z \frac{g^2_{a, \sigma}(w)}{g^2_{0, \sigma}(w)} \frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}} e^{-c_{a,\sigma}w^{n+1}} dw \right]^{-1}.$$
Observe that the zeroes $a_k=a_k(a,\sigma)=q^{-1}(\hat a_k)$ are continuously
differentiable in $\sigma$. Differentiating the relation
$\sigma(a_k)=a$ at $\sigma_0$ along a direction $R \in
\mathcal{M}'(\overline{\Omega})$, we have that $\sigma_0'(a_k(a,\sigma_0)) \partial_\sigma a_k(a,\sigma_0) [R]+R(a_k(a,\sigma_0))=0$. Since $\sigma_0'(a_k) \sim a_k^n$ and $R(a_k)\sim a_k^{n+1}$ in view of $\|R\|<\infty$, we get that $\partial_\sigma a_k(0,\sigma_0)[R]=0$ for all $R \in \mathcal{M}'(\overline{\Omega})$. For $z \not= 0$ the function $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $\sigma$ with
$$\partial_\sigma \left(\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)} \right) [R]=a \frac{z^{n+1}}{\prod_{k=0}^{n}(z-a_k)} \frac{R(z)}{\sigma^2(z)}
+ \frac{\sigma(z)-a }{\prod_{k=0}^{n}(z-a_k)}\frac{z^{n+1}}{\sigma(z)} \sum_{j=0}^n \frac{1}{z-a_j} \partial_\sigma a_j(a,\sigma) [R]$$ for every $R \in
\mathcal{M}'(\overline{\Omega})$. In particular, we get that
$\partial_\sigma \left(\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)} \right)\Big|_{a=0} [R]=0$ for every $z \not= 0$ and $R \in
\mathcal{M}'(\overline{\Omega})$. Since we can write $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ as
\begin{equation} \label{1310}
\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}=\frac{z^{n+1}}{\sigma(z)} \prod_{k=0}^{n} \frac{q(z)-q(a_k)}{z-a_k}=
\frac{z^{n+1}}{\sigma(z)} \prod_{k=0}^{n} \int_0^1 q'(a_k+t(z-a_k))dt
\end{equation}
for $z$ small in view of $\sigma=q^{n+1}$, we get that $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $\sigma$ and
the linear operator $\partial_\sigma \left( \frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}\right)$ is continuous at
$z=0$. In particular, we get that $\partial_\sigma\left(\frac{g_{a,\sigma_0}(z)}{g_{0,\sigma_0}(z)}\right)\Big|_{a=0}[R]=0$ for
every $z$ and $R \in \mathcal{M}'(\overline{\Omega})$. By \eqref{Hasigma} we have that $\mathcal{H}_{a,\sigma}$ is continuously differentiable in $\sigma$ with $\partial_\sigma \mathcal{H}_{0,\sigma}[R]=0$ for every $R \in
\mathcal{M}'(\overline{\Omega})$. We have that $c_{a,\sigma}$ is also continuously differentiable in $\sigma$ with $\partial_\sigma c_{0,\sigma_0}[R]=0$ for every $R \in
\mathcal{M}'(\overline{\Omega})$, and so is $\Lambda (a,\sigma)$, with $\partial_\sigma \Lambda(0,\sigma_0)=\hbox{Id}$.
\noindent Since $a_k \sim |a|^{\frac{1}{n+1}}$, the smooth dependence on $a$ is much more delicate, and holds just for symmetric expressions of the $a_k$'s thanks to the symmetries of $\hat a_k=q(a_k)$. To fully exploit the symmetries, it is crucial that the expression \eqref{Hasigma} of $\mathcal{H}_{a,\sigma}$ is in terms of a holomorphic function $H^*$. Indeed, we have that
\begin{eqnarray*}
2\sum_{k=0}^n H^*(z-a_k)-\frac{z}{|\Omega|} \overline{\sum_{k=0}^n a_k}
&=&2 \sum_{l=0}^{\infty} g_l(z) \sum_{k=0}^n \hat a_k^l
-{z \over
|\Omega|}\overline{\sum_{l=1}^\infty b_l \sum_{k=0}^n \hat a_k^l}\\
&=& 2 (n+1)\sum_{l=0}^{\infty} g_{(n+1)l}(z) a^l -{n+1 \over
|\Omega|}z\overline{\sum_{l=1}^\infty b_{(n+1)l} a^l}
\end{eqnarray*}
in view of $\displaystyle \sum_{k=0}^n \hat a_k^l=0$ for all $l \notin (n+1)\mathbb{N}$, where $g_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} [H^*(z-q^{-1}(w))](0)$ and $b_l=\frac{1}{l!}\frac{d^l q^{-1}}{dw^l}(0)$ (recall that $b_0=q^{-1}(0)=0$). Since for $z$ small there holds
\begin{eqnarray*}
\sum_{k=0}^n \log \frac{q(z)-q( a_k)}{z-a_k}=\sum_{l=0}^\infty h_l(z) \sum_{k=0}^n \hat a_k^l=(n+1) \sum_{l=0}^\infty h_{(n+1)l}(z) a^l
\end{eqnarray*}
in view of $a_k=q^{-1}(\hat a_k)$, where $h_l(z)=\frac{1}{l!} \frac{d^l}{dw^l} \left[\log \frac{w-q(z)}{q^{-1}(w)-z}\right](0),$
we have that $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $a,\, \bar a$ for all $z$ in view of \eqref{1310} (for $z$ far from $0$ it is obvious). Hence, by \eqref{Hasigma} $\frac{g_{a,\sigma}^2}{g_{0,\sigma}^2} \mathcal{H}_{a,\sigma}$, $c_{a,\sigma}$ and $\Lambda(a,\sigma)$ are continuously differentiable also in $a,\, \bar a$, and then $\Lambda$ is a $C^1-$map with $\Lambda(0,\sigma_0)=0$, $\partial_\sigma
\Lambda(0,\sigma_0)=\hbox{Id}$. Up to taking $\rho$ smaller, by the Implicit
Function Theorem we find a $C^1$-map $a \in B_\rho(0) \to \sigma_a$ so
that $\Lambda(a,\sigma_a)=0$, and the function $a \to c_a=c_{a,\sigma_a}$ is $C^1$. By
$$\partial_a [\frac{g_{a,\sigma}^2(z) g_{0,\sigma}^2(0)}{g_{a,\sigma}^2(0) g_{0,\sigma}^2(z) } \frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)=
\frac{g_{0,\sigma}^2(0)}{g_{0,\sigma}^2(z) }\partial_a [e^{2\log g_{a,\sigma}(z)-2\log g_{a,\sigma}(0)}\frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)=
(n+1)\frac{\mathcal{H}(z)}{\mathcal{H}(0)}[f_{n+1}(z)-f_{n+1}(0)]$$
and
$$\partial_{\bar a} [\frac{g_{a,\sigma}^2(z) g_{0,\sigma}^2(0)}{g_{a,\sigma}^2(0) g_{0,\sigma}^2(z) } \frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)=
-\frac{2\pi (n+1)}{|\Omega|} \frac{\mathcal{H}(z)}{\mathcal{H}(0)} \overline{b_{n+1}}z$$
we deduce the desired expressions for $\Gamma$ and $\Upsilon$ in view of $\partial_\sigma c_{0,\sigma_0}=0$ and \eqref{pc}. \qed \end{proof}
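The cancellation $\sum_{k=0}^n \hat a_k^l=0$ for $l \notin (n+1)\mathbb{N}$, used twice in the proof above, is just a roots-of-unity sum: writing $\hat a_k=a^{\frac{1}{n+1}}\omega^k$ with $\omega=e^{\frac{2\pi i}{n+1}}$, one has $\sum_k \hat a_k^l=a^{\frac{l}{n+1}}\sum_k \omega^{kl}$, which vanishes unless $(n+1)\mid l$. A numerical illustration (the value of $a$ below is arbitrary):

```python
import cmath

def root_power_sum(n, l, a=0.3 + 0.4j):
    # hat a_k = a^{1/(n+1)} * omega^k, k = 0,...,n, are the (n+1)-roots of a
    omega = cmath.exp(2j * cmath.pi / (n + 1))
    root = a ** (1.0 / (n + 1))
    return sum((root * omega**k) ** l for k in range(n + 1))

for n in range(1, 5):
    for l in range(1, 10):
        s = abs(root_power_sum(n, l))
        # the sum vanishes exactly when (n+1) does not divide l
        assert (s < 1e-9) == (l % (n + 1) != 0)
```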
\section{\hspace{-0.5cm}: The linear theory}
\noindent In this section, we will prove the invertibility of the
linear operator $L$ given by (\ref{ol}) under suitable
orthogonality conditions. The operator $L$ can be described asymptotically by the following
linear operator in ${\mathbb{R}}^2$:
$$L_0(\phi)=\Delta\phi+{8(n+1)^2|y|^{2n}\over (1+|y^{n+1}-\zeta_0|^2)^2}\phi,$$
where $\zeta_0=\lim \frac{a}{\delta}$. When $\zeta_0=0$, as in the case $n=0$ \cite{BaPa}, by using a Fourier decomposition
of $\phi$ it can be shown in a rather direct way that the bounded
solutions of $L_0(\phi)=0$ in ${\mathbb{R}}^2$ are precisely the linear combinations of
\begin{equation*}
Y_{0}(y) = \,{1-|y|^{2n+2}\over 1+|y|^{2n+2}}
\qquad\text{and}\qquad Y_l(y) = { (y^{n+1})_l \over
1+|y|^{2n+2}} ,\:l=1,2.
\end{equation*}
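As a cross-check (an illustrative computation, not part of the argument), the radial element $Y_0$ can be verified with sympy to solve $L_0(Y_0)=0$ for $\zeta_0=0$, using the radial form $\Delta=\partial_r^2+r^{-1}\partial_r$ of the Laplacian; $Y_0$ arises from dilation invariance, since $\delta\partial_\delta U_{\delta,0}\big|_{\delta=1}=-2Y_0$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
for n in range(4):  # check a few multiplicities n = 0, 1, 2, 3
    Y0 = (1 - r**(2*n + 2)) / (1 + r**(2*n + 2))
    pot = 8 * (n + 1)**2 * r**(2*n) / (1 + r**(2*n + 2))**2
    # radial Laplacian of Y0 plus the potential term of L_0
    L0Y0 = sp.diff(Y0, r, 2) + sp.diff(Y0, r) / r + pot * Y0
    assert sp.simplify(L0Y0) == 0
```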
Note that $L_0$ is the linearized operator at the radial solution
$U=U_{1,0}$ of $-\Delta U=|z|^{2n} e^U$.
\noindent For the linearized operator at $U_{1,\zeta_0}$ with $\zeta_0 \not=0$, the Fourier decomposition
is useless since $U_{1,\zeta_0}$ is not radial w.r.t. any point if $n \geq 1$. However, the same property is still true, as recently
proved in \cite{DEM5}, and the argument below could be carried out in full generality in the range $a=O(\delta)$. Since in Theorem \ref{main} we are concerned with the case $a=o(\delta)$, for simplicity we will discuss the linear theory just in this case.
\noindent Recall that
\begin{equation*}
Z_{0}(z) = {\delta^2-|\sigma(z)-a|^2\over
\delta^2+|\sigma(z)-a|^2}, \qquad\quad Z_l(z) =
{\delta[\sigma(z)-a]_l \over
\delta^2+|\sigma(z)-a|^2},\qquad l=1,2,
\end{equation*}
and $PZ_l$, $l=0,1,2$, denotes the projection of $Z_l$ onto the doubly-periodic functions with
zero average:
\begin{equation*}
\left\{ \begin{array}{ll} \Delta PZ_l =\Delta
Z_l-\frac{1}{|{\Omega}|}\int_{\Omega} \Delta Z_l & \text{in
$\Omega$}\\
\int_{\Omega} PZ_l=0.&
\end{array}\right.
\end{equation*}
Given $h\in L^\infty({\Omega})$ with $\int_{\Omega} h=0$, consider the
problem of finding a function $\phi$ in ${\Omega}$ with zero average
and numbers $d_l$, $l=0,1,2$, such that
\begin{equation}\label{plco}
\left\{ \begin{array}{ll}
L(\phi) =h + \displaystyle \sum_{l=0}^{2}d_l \Delta PZ_l &\text{ in ${\Omega}$}\\
\int_{\Omega} \Delta PZ_l \phi = 0 &\forall\: l=0,1,2.
\end{array} \right.
\end{equation}
Since $Z=Z_1+iZ_2$, observe that (\ref{plco}) is equivalent to
solving (\ref{plcobis}) with $d=d_1-id_2$. Let us stress that
the orthogonality conditions in \eqref{plco} are taken with
respect to the elements of the approximate kernel due to
translations and to an extra element which involves dilations. A
similar situation already appears in \cite{DDeMW}.
\noindent First, we will prove an a-priori estimate for problem
\eqref{plco} when $d_l=0$ for all $l=0,1,2$, w.r.t. the
$\|\cdot\|_*$-norm defined as
$$\|h\|_*=\sup_{z\in {\Omega}}{(\delta^2+|\sigma(z)-a|^2)^{1+\gamma/2}\over \delta^\gamma(|\sigma'(z)|^2+\delta^{2n\over n+1})}|h(z)|,$$
where $0<\gamma <1$ is a small fixed constant.
\begin{prop} \label{p1} There exist $\eta_0>0$ small and $C>0$ such that
for any $0<\delta\leq \eta_0$, $\epsilon^2\leq \eta_0 \delta^{2\over n+1}$,
$|a|\leq \eta_0 \delta$ and any solution $\phi$ to
\begin{equation}\label{plco1}
\left\{ \begin{array}{ll}
L(\phi)=h &\text{in }{\Omega}\\
\int_{\Omega} \Delta PZ_l \phi = 0 &\forall\:l=0,1,2\\
\int_{\Omega} \phi=0,&
\end{array} \right.
\end{equation}
one has
\begin{equation}\label{est}
\|\phi \|_\infty \le C \log \frac{1}{\delta} \|h\|_*.
\end{equation}
\end{prop}
\begin{proof} The proof of estimate \eqref{est} consists of several steps. Assume by contradiction the existence of sequences $\delta_k \to 0$, $\epsilon_k$ with $\epsilon_k^2=o(\delta_k^{2\over n+1})$, $a_k$ with $a_k=o(\delta_k)$, functions $h_k$ with $|\log \delta_k| \, \|h_k\|_*=o(1)$ as $k \to +\infty$, and solutions $\phi_k$ of (\ref{plco1}) with $\|\phi_k\|_\infty=1$. Since by (\ref{ol}) the operator $L$ acts as $L(\phi) = \Delta \phi + \mathcal{K} \left[ \phi+ \gamma(\phi)\right]$, where $\gamma(\phi) \in \mathbb{R}$, the function $\psi_k=\phi_k+\gamma(\phi_k)$ does solve
\begin{equation*}
\left\{ \begin{array}{ll}
\Delta \psi_k+\mathcal{K}_k \psi_k= h_k &\text{in ${\Omega}$}\\
\int_{\Omega} \Delta PZ_{k,l} \psi_k= 0 &\forall \: l=0,1,2,
\end{array}\right.
\end{equation*}
where $W_k$, $\mathcal{K}_k$, $Z_{k,l}$ denote the functions $W$, $\mathcal{K}$, $Z_l$,
respectively, along the given sequence.
\begin{claim}
$\displaystyle \liminf_{k \to +\infty} \|\psi_k\|_\infty
>0$ and, up to a subsequence, $\psi_k \to \tilde c \in \mathbb{R}$ as $k \to+\infty$ in $C^{1,\alpha}_{\hbox{loc}}(\bar {\Omega}\setminus\{0\})$, for all $\alpha \in (0,1)$.
\end{claim}
\noindent Indeed, assume by contradiction that $\displaystyle
\liminf_{k \to +\infty} \|\psi_k\|_\infty =0$. Up to a
subsequence, assume that
$\|\psi_k\|_\infty=\left\|\phi_k+\gamma(\phi_k)\right\|_\infty\to 0$ as
$k\to+\infty$. Since $\epsilon_k^2=o(\delta_k^{2\over n+1})$, by (\ref{BW})
it follows that
$$\gamma(\phi_k)=-\frac{\int_{\Omega} e^{u_0+W_k}\phi_k}{\int_{\Omega}
e^{u_0+W_k}}+o(1)=O(1).$$ Up to a subsequence we have that
$\frac{\int_{\Omega} e^{u_0+W_k}\phi_k}{\int_{\Omega} e^{u_0+W_k}} \to c$, and then
$\phi_k \to c$ uniformly in $\Omega$ as $k \to +\infty$. Since
$\int_{\Omega}\phi_k=0$, we get $c=0$ and $\phi_k \to 0$ in
$L^\infty(\Omega)$, in contradiction with $\|\phi_k\|_\infty=1$.
Moreover, since $\|\psi_k\|_\infty=O(1)$, by
(\ref{eps1})-(\ref{eps2}) we have that $\Delta\psi_k=o(1)$ in
$C_{\hbox{loc}}(\bar \Omega \setminus \{0\})$. Up to a
subsequence, we have that $\psi_k \to \psi$ as $k\to+\infty$ in
$C^{1,\alpha}_{\hbox{loc}}(\bar {\Omega}\setminus\{0\})$. Since $\|\psi_k
\|_\infty =O(1)$, $\psi$ is a bounded function which can be
extended to a harmonic doubly-periodic function in ${\Omega}$.
Therefore, $\psi=\tilde c$ in ${\Omega}$ with $\ds\tilde
c=\lim_{k\to+\infty}\gamma(\phi_k)$, since $\ds{1\over
|{\Omega}|}\int_{\Omega}\psi_k=\gamma(\phi_k)$.
\noindent Now, consider the function $\Psi_{k}(y)=\psi_k ( \delta_k^{1\over n+1} y)$. Then, $\Psi_k$ satisfies
$$\Delta \Psi_{k} + K_{k}(y)\Psi_{k} =\hat h_{k}(y)\qquad\text{in }
\delta_k^{-\frac{1}{n+1}} \Omega ,$$ where $
K_{k}(y)=\delta_k^{2\over n+1} \mathcal{K}_k (\delta_k^{1\over n+1}y)$ and
$ \hat h_{k}(y)=\delta_k^{2\over n+1} h_k (\delta_k^{1\over n+1} y)$.
Also, we set $\sigma_k(y)=\delta_k^{-1} \sigma_{a_k}(\delta_k^{1\over n+1}y)$ for $y$ in
compact subsets of ${\mathbb{R}}^2$.
\begin{claim}
$\Psi_{k} \to \Psi=0$ in $C_{\hbox{loc}}({\mathbb{R}}^2)$ as $k\to+\infty$.
\end{claim}
\noindent Indeed, observe that by (\ref{BW}) and
(\ref{eps1})-(\ref{eps2}) we have the following expansions:
\begin{equation} \lambdabdabel{mlK}
\mathcal{K}(z)=|\sigmama'(z)|^2 e^{U_{\delta,a}}
\left[1+O(|c_a||z|^{n+1})+O(|c_a||a|+\deltalta^2|\log \deltalta|) \right]+O(\epsilonpsilon^2
|\sigmama'(z)|^4 e^{2U_{\delta,a}}) .
\epsilonnd{equation}
Since $\epsilon_k^2=o(\delta_k^{2\over n+1})$, the first
estimate above rewrites along our sequence as
$$ K_{k}(y)=(1+o(1)+O(\delta_k|y|^{n+1})){8|\sigma_k'(y)|^2 \over \left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^2}+o(1) {64|\sigma_k'(y)|^4 \over \left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^4}$$
uniformly in $\delta_k^{-\frac{1}{n+1}} \Omega$ as $k\to+\infty$.
Since $\sigma=z^{n+1}Q$, we have that $\sigma_k(y)= y^{n+1} Q_{a_k}( \delta_k^{\frac{1}{n+1}} y)$ and $\sigma_k'(y)=(n+1) y^n Q_{a_k}(\delta_k^{\frac{1}{n+1}} y)+
\delta_k^{\frac{1}{n+1}} y^{n+1} Q_{a_k}'(\delta_k^{\frac{1}{n+1}} y)$. Since $Q_{a_k}(0) \to \frac{n+1}{\mathcal{H}(0)}=:\gamma\not=0$ and $\|Q_{a_k}'\|_{\infty,\Omega}\leq
C \|Q_{a_k}\|_{\infty,\tilde \Omega}\leq C'$, we have
that
$$\sigma_k(y)=y^{n+1}[\gamma+o(1)+O(\delta_k^{1\over n+1}|y|)] \:,\qquad
\sigma_k'(y)=(n+1) y^n [\gamma+o(1)+ O(\delta_k^{1 \over n+1}|y|)]$$
as $k \to +\infty$. Then we get that
\begin{equation} \label{Kk}
K_{k}(y)=\left[{8(n+1)^2 |\gamma|^2 |y|^{2n} \over \left(1+\big|\sigma_k(y)-a_k
\delta_k^{-1}\big|^2\right)^2}+o(1) {64(n+1)^4|\gamma|^4 |y|^{4n} \over
\left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^4} \right][1+o(1)+O(\delta_k^{1\over
n+1}|y|)]
\end{equation}
uniformly in $\delta_k^{-\frac{1}{n+1}} \Omega$. Choose $\eta$ small so that $|\sigma_k(y)|\geq
\frac{|\gamma|}{2}|y|^{n+1}$ in $B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)$ for $k$ large. Since $\|\Psi_k\|_\infty=O(1)$ and $|\hat h_k(y)|\le C\|h_k \|_*
\to 0$ on compact sets, by elliptic estimates and (\ref{Kk}) we get that $\Psi_k(\gamma^{-\frac{1}{n+1}} y) \to \hat \Psi$ in $C_{\hbox{loc}}({\mathbb{R}}^2)$ as $k \to+\infty$, where
$\hat \Psi$ is a bounded solution of $L_0(\hat \Psi) = 0$ (with $\zeta_0=0$). Then
$\hat \Psi(y)=\displaystyle \sum_{j=0}^2
b_{j}Y_j(y)$ for some $b_{j}\in{\mathbb{R}}$, $j=0,1,2$.\\
Since $\Delta Z_{k,l}+|\sigma_k'|^2 e^{U_{\delta_k,a_k}}Z_{k,l}=0$ for $l=0,1,2$ (where $U_{\delta_k,a_k}$ stands for $U_{\delta_k,a_k,\sigma_{a_k}}$), for $l=1,2$ we have that
\begin{equation*}
\begin{split}
\int_{\Omega} \psi_k \Delta Z_{k,l}
=-\int_{\Omega} |\sigma_k'(z)|^2 \psi_k e^{U_{\delta_k,a_k}}Z_{k,l}=-\int_{B_{\delta_k^{-\frac{1}{n+1}} \eta}(0)}
{8 |\sigma_k'(y)|^2 (\sigma_k-a_k \delta_k^{-1}) \Psi_k \over
(1+|\sigma_k-a_k \delta_k^{-1} |^2 )^3}\,dy+O(\delta_k^3).
\end{split}
\end{equation*}
Since for all $l=0,1,2$
$$0=\int_{\Omega} \psi_k\Delta PZ_{k,l}=\int_{\Omega} \psi_k \left[ \Delta Z_{k,l} -
{1\over |{\Omega}|}\int_{\Omega} \Delta Z_{k,l}\right]=\int_{\Omega} \psi_k \Delta Z_{k,l}+o(1)$$
as $k \to \infty$ in view of \eqref{deltaZ0}-\eqref{deltaZ}, by dominated convergence we get that
$$\int_{{\mathbb{R}}^2} \hat \Psi(y)\,\frac{|y|^{2n}(y^{n+1})_l}{(1+|y|^{2n+2})^3}\, dy =0 \qquad
\hbox{for }l=1,2,$$ and we conclude that $b_{1}=b_{2}=0$.
Similarly, for $l=0$ we deduce that
$$\int_{{\mathbb{R}}^2} \hat \Psi(y)\,{|y|^{2n}(1-|y|^{2n+2})\over (1+|y|^{2n+2})^3}\, dy=0,$$
which implies that $b_0=0$. Thus, the claim follows.
\noindent On the other hand, from the equation satisfied by $\psi_k$ we have the following integral representation:
\begin{equation}\label{irsn}
\psi_k(z)={1\over |{\Omega}|}\int_{{\Omega}} \psi_k +\int_{\Omega} G(y,z)
\left[\mathcal{K}_k(y) \psi_k(y)-h_k(y) \right]\, dy.
\end{equation}
\begin{claim} $\tilde c=0$.
\end{claim}
\noindent Indeed, Claims 1 and 2 imply that $\psi_k(0)=\Psi_k(0)
\to 0$ and ${1\over |{\Omega}|}\int_{{\Omega}} \psi_k =\gamma(\phi_k)\to \tilde c$ as $k \to
+\infty$ by definition. So, by \eqref{irsn} we deduce
that
$$\int_{\Omega} G(y,0) \left[\mathcal{K}_k(y) \psi_k(y)-h_k(y) \right]\, dy \to -\tilde c $$
as $k \to +\infty$. Now, we first estimate the integral involving
$h_k$. Since $\int_{B_{\delta_k}(0)}|\log|y||\,dy=O(\delta_k^2 \log
\delta_k),$ we get that
\begin{equation*}
\left|\int_{B_{\delta_k}(0)} G(y,0) h_k(y) dy\right| \le {C\over
\delta_k^2}\|h_k\|_* \int_{B_{\delta_k}(0)} G(y,0) dy \le C
|\log\delta_k| \|h_k\|_*.
\end{equation*}
By \eqref{1458} we have that
\begin{equation*}
\left|\int_{{\Omega}\setminus B_{\delta_k}(0)} G(y,0) h_k(y) dy\right| \le C
|\log\delta_k| \int_{\Omega} |h_k|\leq C |\log \delta_k| \|h_k\|_*,
\end{equation*}
and we conclude that
$$\left|\int_{{\Omega}} G(y,0) h_k(y) dy\right|\le C|\log \delta_k|\|h_k\|_* \to 0$$
in view of $|\log \delta_k| \, \|h_k\|_*=o(1)$ as $k \to +\infty$. By
\eqref{mlK} we have that
\begin{equation*}
\begin{split}
&\int_{{\Omega}} G(y,0) \mathcal{K}_k(y) \psi_k(y) dy=\int_{B_\eta(0)}
G(y,0) \mathcal{K}_k(y) \psi_k(y)
dy+O(\delta_k^2)\\
&=\int_{B_{ \delta_k^{-\frac{1}{n+1}}\eta}(0)} \bigg[-{1\over 2\pi
}\log |y|-{1\over 2\pi(n+1)} \log \delta_k+H( \delta_k^{1\over n+1}
y,0)\bigg] K_k(y) \Psi_k(y) dy+O(\delta_k^2).
\end{split}
\end{equation*}
Since by (\ref{Kk}) $K_{k}=O({|y|^{2n} \over (1+|y|^{2n+2})^2
})$ holds uniformly in $B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)
\setminus B_1(0)$ and $ K_{k}(y) \to {8(n+1)^2|y|^{2n} \over
(1+|y|^{2n+2})^2}$ as $k\to+\infty$, by dominated convergence we
get that
\begin{eqnarray*}
&&\int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} \bigg[-{1\over 2\pi }\log |y|+H( \delta_k^{1\over n+1} y,0)\bigg] K_k(y) \Psi_k(y) dy\\
&& \to \int_{\mathbb{R}^2} \bigg[-{1\over 2\pi }\log |y|+H(0,0)\bigg] {8(n+1)^2|y|^{2n} \over (1+|y|^{2n+2})^2} \Psi(y) dy = 0
\end{eqnarray*}
as $k \to +\infty$. Since $\int_{\Omega} h_k=0$, integrating the
equation satisfied by $\psi_k$ gives that $\int_{\Omega} \mathcal{K}_k
\psi_k=0$. Then, by \eqref{mlK} we get that
$$\int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} K_k \Psi_k dy=\int_{B_\eta(0)} \mathcal{K}_k \psi_k dy=-\int_{{\Omega}\setminus
B_\eta(0)} \mathcal{K}_k \psi_k=O(\delta_k^2),$$ which implies that
$$\log \delta_k \int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} K_k \Psi_k dy=O(\delta_k^2 \log \delta_k).$$
In conclusion, we have shown that $\int_{{\Omega}} G(y,0) \mathcal{K}_k(y)
\psi_k(y)dy \to 0$ as $k \to +\infty$, yielding $\tilde c=0$.
\noindent In the following claims we omit the subscript $k$. Let us denote $\tilde L(\psi)=\Delta\psi + \mathcal{K} \psi$.
\begin{claim} The operator $\tilde L$ satisfies the maximum principle in $
B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$ for $R$ large enough.
\end{claim}
\noindent Indeed, as already noticed in the proof of the previous
claim in terms of $K_{k}$, there is $C_1>0$ such that
\begin{equation} \label{salsi}
\mathcal{K}(z)\le C_1 {(n+1)^2 \delta^2|z|^{2n}\over
(\delta^2+|z|^{2n+2})^2}
\end{equation}
in $B_\eta(0)\setminus B_{\delta^{1\over n+1}}(0)$. The function
$$\tilde Z(z)=- Y_0\left({ \mu z\over\delta^{1\over n+1}}\right)=\frac{\mu ^{2n+2}|z|^{2n+2}-\delta^2}{\mu^{2n+2}|z|^{2n+2}+\delta^2}$$
satisfies
$$-\Delta \tilde Z(z)=16(n+1)^2 \frac{\delta^2 \mu^{2n+2} |z|^{2n} (\mu^{2n+2}|z|^{2n+2}-\delta^2)}{(\mu^{2n+2}|z|^{2n+2}+\delta^2)^3}.$$
For $R$ large so that $\mu^{2n+2}R^{2n+2}>{5\over 3}$ we have that
\begin{equation*}
\begin{split}
-\Delta \tilde Z(z)&\ge 16(n+1)^2\frac{\delta^2 \mu^{2n+2} |z|^{2n} }{(\mu^{2n+2}|z|^{2n+2}+\delta^2)^2}\,{\mu^{2n+2}R^{2n+2}-1\over \mu^{2n+2}R^{2n+2}+1}\\
&\ge 4(n+1)^2\frac{\delta^2 \mu^{2n+2}
R^{4n+4}}{(\mu^{2n+2}R^{2n+2}+1)^2}\,{1\over
|z|^{2n+4}}\ge{(n+1)^2\over \mu^{2n+2}}{\delta^2\over |z|^{2n+4}}
\end{split}
\end{equation*}
in $B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$. On the other hand,
since $\tilde Z \le 1$ we have that
$$\mathcal{K} (z)\tilde Z(z)\le C_1{(n+1)^2\delta^2|z|^{2n}\over
(\delta^2+|z|^{2n+2})^2}\le C_1{(n+1)^2\delta^2\over |z|^{2n+4}}$$ in
$B_\eta(0)\setminus B_{\delta^{1\over n+1}}(0)$, and for
$0<\mu<{1\over\sqrt{C_1}}$ we then get that
$$\tilde L(\tilde Z)\le \left(-{1\over \mu^{2n+2}}+C_1\right){(n+1)^2 \delta^2 \over
|z|^{2n+4}}<0$$ in $B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$. Since
$$\tilde Z(z)\ge {\mu^{2n+2}R^{2n+2}-1\over \mu^{2n+2}R^{2n+2}+1}>{1\over
4}$$ for $|z|\geq R\delta^{1\over n+1}$, we have provided the
existence of a positive super-solution for $\tilde L$, a sufficient
condition for $\tilde L$ to satisfy the maximum principle.
\begin{claim}
There exists a constant $C>0$ such that
$$\|\psi\|_{\infty, B_\eta(0)\setminus B_{R\delta^{1\over n+1}}(0)}\le C[\|\psi\|_i+\|h\|_*],$$ where
$$\|\psi\|_i=\|\psi\|_{\infty, \partial B_{R\delta^{1\over n+1}}(0)}+\|\psi\|_{\infty, {\partial} B_{\eta}(0)}.$$
\end{claim}
\noindent Indeed, letting $\Phi$ be the solution of
$$
\left\{ \begin{array}{ll}
-\Delta \Phi=2 \displaystyle \sum_{i=1}^2 {\delta^{\sigma_i \over n+1} \over |z|^{2+\sigma_i}}&\hbox{for }R\delta^{1\over n+1} \leq |z| \leq r\\
\Phi=0 &\text{for }|z|=r,\, R\delta^{1\over n+1}
\end{array}\right.$$
with $r \in (\eta,2\eta)$, $\sigma_1=\sigma (n+1)$ and
$\sigma_2=2n+\sigma (n+1)$, we construct a barrier function of the
form $\tilde\Phi=4\|\psi\|_i \tilde Z + \|h\|_* \Phi$. A direct
computation shows that
$$\Phi(z)=2 \sum_{i=1}^2 \delta^{\sigma_i \over n+1}\left[-\frac{1}{\sigma_i^2 |z|^{\sigma_i}} + \alpha_i \log |z|
+\beta_i \right],$$ where
$$\alpha_i={1\over \sigma_i^2 \log {R\delta^{1\over n+1}\over r}}\left({1\over R^{\sigma_i}\delta^{\sigma_i \over n+1}}-{1\over
r^{\sigma_i}}\right)<0,\quad\quad \beta_i={1\over \sigma_i^2
r^{\sigma_i}}-{\log r\over \sigma_i^2\log {R\delta^{1\over n+1}\over
r}}\left({1\over R^{\sigma_i} \delta^{\sigma_i \over n+1}}-{1\over
r^{\sigma_i}}\right)$$ for $i=1,2$. Since
$$0\le \Phi(z)\le 2 \sum_{i=1}^2 \delta^{\sigma_i \over n+1}\left[-{1\over \sigma_i^2 r^{\sigma_i}}+\alpha_i \log
(R\delta^{1\over n+1})+\beta_i\right]=2 \sum_{i=1}^2 \delta^{\sigma_i \over
n+1} \alpha_i \log {R\delta^{1\over n+1}\over r}\le \sum_{i=1}^2 {2
\over \sigma_i^2 R^{\sigma_i}},$$ we get that
\begin{equation*}
\begin{split}
\tilde L(\tilde\Phi)&\le \| h\|_*\left[-2 {\delta^{\sigma}\over
|z|^{2+\sigma(n+1)}}-2{\delta^{\sigma+{2n \over n+1}} \over
|z|^{2+2n+\sigma(n+1)}}
+C_1(n+1)^2 {\delta^2|z|^{2n}\over (\delta^2+|z|^{2n+2})^2}\sum_{i=1}^2 {2\over \sigma_i^2 R^{\sigma_i}} \right]\\
&\le \| h\|_*\left[-2 {\delta^{\sigma}\over |z|^{2+\sigma(n+1)}}-{
\delta^{\sigma+{2n \over n+1}} \over (\delta^2+|z|^{2n+2})^{1+\sigma/2}
}
+{\delta^\sigma |z|^{2n}\over(\delta^2+|z|^{2n+2})^{1+\sigma/2} } \right]\\
&\le - \| h\|_* {\delta^\sigma(|z|^{2n}+\delta^{2n\over n+1})\over
(\delta^2+|z|^{2n+2})^{1+\sigma/2}}
\end{split}
\end{equation*}
in view of (\ref{salsi}), for $R$ large so that $C_1(n+1)^2
\displaystyle \sum_{i=1}^2 {2\over \sigma_i^2 R^{\sigma_i}} \leq
1$. Since $|\psi| \leq \tilde \Phi$ on ${\partial} B_{R\delta^{1\over
n+1}}(0)\cup{\partial} B_r(0)$ in view of $4\tilde Z\ge 1$, by the maximum
principle we conclude that $|\psi|\le \tilde\Phi$ in
$B_\eta(0)\setminus B_{R\delta^{1\over n+1}}(0)$ and the claim
follows.
\noindent Since Claims 2 and 3 provide that $\|\psi_k \|_i \to 0$ as $k \to \infty$, by Claim 5 we conclude that
$\|\psi_k \|_\infty=o(1)$ as $k\to+\infty$, in contradiction with
$\liminf_{k \to +\infty} \|\psi_k\|_\infty>0$ according to Claim
1. This completes the proof. \qed
\end{proof}
\noindent We are now in position to solve problem \eqref{plco}.
\begin{prop} \label{p2}
There exists $\eta_0>0$ small such that for any $0<\delta\leq
\eta_0$, $|\log \delta| \epsilon^2\leq \eta_0 \delta^{2\over n+1}$, $|a|\leq \eta_0 \delta$
and $h \in L^\infty(\Omega)$ with $\int_{\Omega} h=0$ there is a
unique solution $\phi:=T(h)$, with $\int_{\Omega} \phi=0$, and
$d_0,d_1,d_2 \in \mathbb{R}$ of problem \eqref{plco}. Moreover,
there is a constant $C>0$ so that
\begin{equation}\label{est1}
\|\phi \|_\infty \le C\left(\log \frac 1\delta \right)\|h\|_*,\quad
\sum_{l=0}^2 |d_{l}|\le C\|h\|_*.
\end{equation}
\end{prop}
\begin{proof} Since $-\Delta Z_l=|\sigma'(z)|^{2} e^{U_{\delta,a}} Z_l$ in ${\Omega}$ (where $U_{\delta,a}$ stands for $U_{\delta,a,\sigma_a}$) and $\int_{\Omega} \Delta Z_l=O(\delta^2)$ in view of \eqref{deltaZ0}-\eqref{deltaZ}, we have that $\Delta PZ_l =O(|\sigma'(z)|^{2} e^{U_{\delta,a}})+O(\delta^2)$ in view of $Z_l=O(1)$, yielding $\|\Delta PZ_{l}\|_*\le C$ for all $l=0,1,2$. By Proposition \ref{p1} every solution of \eqref{plco} satisfies
$$\|\phi\|_\infty\le
C\left(\log{1\over\delta}\right)\left[\|h\|_*+\sum_{l=0}^2|d_{l}|\right].$$ Set
$\langle f,g\rangle=\int_{\Omega} fg$ and notice that
\begin{equation}\label{diesis}
\ds\langle L(\phi),PZ_{j}\rangle = \ds\langle
L(\phi),PZ_{j}+t\rangle =\langle \phi+\gamma(\phi),\tilde
L(PZ_{j}+t)\rangle
\end{equation}
for any $t\in \mathbb{R}$, in view of $\int_{\Omega} L(\phi)=0$. To
estimate the $|d_{l}|$'s, let us test equation \eqref{plco} against
$PZ_{j}$, $j=0,1,2$, to get
$$\big\langle \phi+\gamma(\phi),\tilde L(PZ_{j}+t_j)\big\rangle =\langle
h,PZ_{j}\rangle + \sum_{l=0}^2d_{l}\langle \Delta
PZ_{l},PZ_{j}\rangle,$$
where $t_j=\frac{1}{|{\Omega}|}\int_{\Omega} Z_j$, $j=0,1,2$. From the
proof of Lemma \ref{1039} we know that for $Z_0$ and $Z=Z_1+iZ_2$
the following hold:
\begin{eqnarray*}
&& \int_{\Omega} \Delta PZ_0\, PZ_0=- 16 (n+1) \int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4} +O(\delta^2)\,,\qquad \int_{\Omega} \Delta PZ\, PZ_0=O(\delta^2)\\
&&\int_{\Omega} \Delta PZ\, \overline{PZ}=- 8 (n+1) \int_{\mathbb{R}^2}
\frac{|y|^2 }{(1+|y|^2)^4} +O(\delta)\,, \qquad \int_{\Omega} \Delta PZ\,
PZ=O(\delta)
\end{eqnarray*}
where $ \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}=2
\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4}=\frac{\pi}{3}$. In
terms of the $Z_l$'s we then have that
$$\langle \Delta PZ_{l},PZ_{j}\rangle=-(n+1)c_{lj} \delta_{lj}+O(\delta^2),$$
where $\delta_{lj}$ denotes the Kronecker symbol and $c_{00}={8\pi\over 3}$, $c_{11}=c_{22}={4\pi\over 3}$. For $j=0,1,2$ let
us now estimate $\big\|\tilde L(PZ_j+t_j)\big\|_*$:
\begin{equation}\label{diesisdiesis}
\big\|\tilde L(PZ_j+t_j)\big\|_*=\big\|-|\sigma'(z)|^2 e^{U_{\delta,a}}
Z_j+\mathcal{K}(PZ_j+t_j)+O(\delta^2) \big\|_*=O( \delta+\epsilon^2
\delta^{-\frac{2}{n+1}}+\delta|c_a|)
\end{equation}
in view of \eqref{deltaZ0}-(\ref{pzij}) and (\ref{mlK}). Since
$|\gamma(\phi)|=O(\|\phi\|_\infty)$ in view of (\ref{BW}) and
$\epsilon^2 \delta^{-\frac{2}{n+1}}=o(1)$, by \eqref{1458} we get that
$$\langle \phi+\gamma(\phi),\tilde L(PZ_{j}+t_j)\rangle=O(\delta+\epsilon^2 \delta^{-\frac{2}{n+1}}) \|\phi\|_\infty,$$
which along with the previous estimates yields
\begin{equation}\label{estcij}
\begin{split}
|d_j|\le
C\bigg[(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+\|h\|_*+\delta \sum_{l=0}^2|d_l| \bigg]
\end{split}
\end{equation}
in view of $PZ_j=O(1)$. Since (\ref{estcij}) gives that
$\displaystyle \sum_{l=0}^2|d_{l}|=O(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+O(\|h\|_*)$, we have that every
solution of \eqref{plco} satisfies
$$\|\phi\|_\infty\le
C\left(\log{1\over\delta}\right)\left[\|h\|_*+\sum_{l=0}^2|d_{l}|\right] \leq
C\log{1\over\delta}\,(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+C \log{1\over\delta}\, \|h\|_*.$$ In view of
$\log{1\over\delta}\, (\delta+\epsilon^2\delta^{-{2\over n+1}})=o(1)$ as $\eta_0\to 0$, the a-priori estimates \eqref{est1} immediately follow.
\noindent To solve \eqref{plco}, consider now the space
$$H=\left\{\phi\in H^1(\Omega) \hbox{ doubly-periodic}: \: \int_{\Omega} \phi=0\,,\:\int_{\Omega}\Delta PZ_{l}\,\phi=0 \hbox{ for }l=0,1,2\right\}$$
endowed with the usual inner product
$[\phi,\psi]=\int_{\Omega}\nabla\phi\nabla\psi.$ Problem \eqref{plco} is
equivalent to finding $\phi\in H$ such that
$$[\phi,\psi]=\int_{\Omega}\left[\mathcal{K} \left(\phi+\gamma(\phi)\right)-h\right]\psi\qquad\text{for all }\psi\in H.$$ With the aid
of Riesz's representation theorem, the equation takes the form
$(\hbox{Id}-\hbox{compact operator})\phi= \tilde h$. Fredholm's
alternative guarantees unique solvability of this problem for any
$h$ provided that the homogeneous equation has only the trivial
solution. This is equivalent to \eqref{plco} with $h\equiv 0$,
which has only the trivial solution by the a-priori estimates
\eqref{est1}. The proof is now complete.\qed
\end{proof}
\section{\hspace{-0.5cm}: The nonlinear problem}
We consider the following nonlinear problem:
\begin{equation}\label{pnla}
\left\{\begin{array}{ll}
L(\phi)= -[R+N(\phi)] +\displaystyle \sum_{l=0}^{2}d_l \Delta PZ_{l} & \text{in }{\Omega}\\
\int_{{\Omega}} \Delta PZ_l\,\phi = 0 \hbox{ for all }l=0,1,2 & \\
\int_{{\Omega}}\phi=0,&
\end{array} \right.
\end{equation}
where $R$, $N(\phi)$ and $L$ are given by \eqref{R}, \eqref{nlt}
and \eqref{ol}, respectively. Notice that \eqref{linear} and
(\ref{pnla}) are equivalent by setting $d=d_1-id_2$.
\begin{lem}\label{lpnla}
There exists $\eta_0>0$ small such that for any $0<\delta<\eta_0$,
$|\log \delta|^2 \epsilon^2\leq \eta_0 \delta^{2\over n+1}$, $|a|\leq \eta_0 \delta$ problem
\eqref{pnla} admits a unique solution $\phi$ and $d_l$, $l=0,1,2$.
Moreover, there exists $C>0$ so that
\begin{equation}\label{cotapsi}
\|\phi\|_\infty\le C|\log\delta|\|R\|_*.
\end{equation}
\end{lem}
\begin{proof}
In terms of the operator $T$ defined in Proposition \ref{p2},
problem \eqref{pnla} reads as
$$\phi=-T\left(R+N(\phi)\right)=:\mathcal{A}(\phi).$$
For a given number $M>0$, let us consider the space
$$
\mathcal{F}_M = \{\phi\in L^\infty({\Omega}) \hbox{ doubly-periodic }:\: \|
\phi \|_\infty \le M|\log\delta| \,\|R\|_* \}.$$ It is a
straightforward but tedious computation to show that
\begin{equation}\label{star}
\|N(\phi_1) - N(\phi_2)\|_* \leq C_1 (\|\phi_1\|_\infty
+\|\phi_2\|_\infty) \|\phi_1-\phi_2\|_\infty.
\end{equation}
To give an idea of how (\ref{star}) can be proved, observe
that $0\leq \frac{e^{u_0+W+\phi}}{\int_{\Omega} e^{u_0+W+\phi}} \leq
e^{2\|\phi\|_\infty} \frac{e^{u_0+W}}{\int_{\Omega} e^{u_0+W}}$ and
$|\int_{\Omega} e^{u_0+W+\phi} \phi|\leq \|\phi\|_\infty \int_{\Omega}
e^{u_0+W+\phi}$. For $\|\phi\|_\infty\leq 1$ we can then get that
$$\|\phi\|_\infty \left\|D \left[\frac{e^{u_0+W+\phi}}{\int_{\Omega} e^{u_0+W+\phi}}\right][\phi]\right\|_*+\left\|D^2 \left[\frac{e^{u_0+W+\phi}}{\int_{\Omega} e^{u_0+W+\phi}}\right][\phi,\phi]\right\|_*=O\left(\left\|\frac{e^{u_0+W}}{\int_{\Omega} e^{u_0+W}}\right\|_* \|\phi\|_\infty^2\right)
=O(\|\phi\|_\infty^2)$$ in view of $\|\frac{e^{u_0+W}}{\int_{\Omega}
e^{u_0+W}}\|_*=O(1)$ by (\ref{eps1}). This is exactly what
we need to estimate, in $\|\cdot\|_*$-norm, the difference between
the first terms of $N(\phi_1)$ and $N(\phi_2)$. For the other terms
we can argue in a similar way to get
$$\|\phi\|_\infty \left\|D \left[\frac{e^{2(u_0+W+\phi)}}{\int_{\Omega} e^{2(u_0+W+\phi)}}\right][\phi]\right\|_*+\left\|D^2 \left[\frac{e^{2(u_0+W+\phi)}}{\int_{\Omega} e^{2(u_0+W+\phi)}}\right][\phi,\phi]\right\|_*=O\left(\left\| \frac{e^{2(u_0+W)}}{\int_{\Omega} e^{2(u_0+W)}}\right\|_* \|\phi\|_\infty^2\right)=O(\|\phi\|_\infty^2)$$
in view of $\| \frac{e^{2(u_0+W)}}{\int_{\Omega} e^{2(u_0+W)}}\|_* =O(1)$ by
(\ref{eps2}), and
$$\|\phi\|_\infty \|D[B(W+\phi)][\phi]\|_*+\|D^2[B(W+\phi)][\phi,\phi]\|_*=O(B(W)\|\phi\|_\infty^2)=O(\delta^{-\frac{2}{n+1}}\|\phi\|_\infty^2)$$
in view of (\ref{BW}). Since $\epsilon^2
\delta^{-\frac{2}{n+1}}=o(1)$, we can deduce the validity of
(\ref{star}).
\noindent Denote by $C'$ the constant appearing in \eqref{est1}. By Proposition \ref{p2} and (\ref{star}) we get that
$$\|\mathcal{A}(\phi_1)-\mathcal{A}(\phi_2)\|_\infty \leq C'|\log \delta| \|N(\phi_1)-N(\phi_2)\|_*\leq 2C'C_1 M \|R\|_* \log^2 \delta\, \|\phi_1-\phi_2\|_\infty$$
for all $\phi_1,\phi_2 \in \mathcal{F}_M$. By Proposition \ref{p2} we also have that
\begin{equation*}
\|\mathcal{A}(\phi)\|_\infty \le C' | \log \delta |\left[ \|R\|_* +
\|N(\phi)\|_*\right]\leq C' | \log \delta | \|R\|_*+C' C_1|\log
\delta| \|\phi\|_\infty^2
\end{equation*}
for all $\phi\in \mathcal{F}_M$. Now fix $M=2C'$, and by (\ref{ere}) take
$\eta_0$ small so that $4(C')^2 C_1 \log^2 \delta\, \|R\|_*
< \frac{1}{2}$, in order to have that $\mathcal{A}$ is a contraction
mapping of $\mathcal{F}_M$ into itself. Therefore $\mathcal{A}$ has a unique
fixed point $\phi$ in $\mathcal{F}_M$, which satisfies (\ref{cotapsi})
with $C=M$.\qed
\end{proof}
\section{\hspace{-0.5cm}: The integral coefficients in \eqref{solve1b}-\eqref{solve2b}}
Letting $\zeta=\frac{a}{\delta}$, we aim to investigate the integral coefficients
$$I:=\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}\,dy\:,\qquad K:=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}y}{(1+|y|^2)^5}\,dy,$$
which appear in \eqref{solve1b}-\eqref{solve2b} or \eqref{solve1}-\eqref{solve2}. We will show below that $I=f(|\zeta|)$ and $K=g(|\zeta|)\zeta$ with $f<0<g$, and the asymptotic behavior of $f$ and $g$ as $|\zeta|\to +\infty$ will be identified.
\noindent By the change of variable $y \to y+\zeta$ and the Taylor expansion
$$(1-x)^{-5}=\sum_{k=0}^{+\infty} c_k x^k \quad\hbox{for }|x|<1$$
with $c_k=\frac{(4+k)!}{24\,k!}$, we can re-write $I$ as
\begin{eqnarray*}
I&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y-\zeta|^2-1)}{(1+|y-\zeta|^2)^5}dy
=\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y|^2+|\zeta|^2-1-y\bar \zeta-\bar y \zeta)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy
\end{eqnarray*}
in view of
$$(1+|y-\zeta|^2)^{-5}=(1+|y|^2+|\zeta|^2)^{-5}\left(1-\frac{y\bar \zeta+\bar y \zeta}{1+|y|^2+|\zeta|^2}\right)^{-5}$$
with
$$\frac{|y\bar \zeta+\bar y \zeta|}{1+|y|^2+|\zeta|^2} \leq \frac{|y|^2+|\zeta|^2}{1+|y|^2+|\zeta|^2}<1.$$
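As an aside, the coefficients $c_k=\frac{(4+k)!}{24\,k!}$ are just the generalized binomial coefficients $\binom{k+4}{4}$ of the series for $(1-x)^{-5}$; the short script below (a sanity check, not part of the argument) confirms this:

```python
from math import comb, factorial

# c_k = (4+k)! / (24 k!) should match the Taylor coefficients of
# (1-x)^(-5) = sum_k binom(k+4, 4) x^k (generalized binomial series)
for k in range(12):
    c_k = factorial(4 + k) // (24 * factorial(k))
    assert c_k == comb(k + 4, 4)
```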
Since
$$(y\bar \zeta+\bar y \zeta)^k= \sum_{j=0}^k \left(\begin{array}{l} k \\ j \end{array}\right) y^j \bar \zeta^j \bar y^{k-j} \zeta^{k-j}=
\sum_{0\leq j <\frac{k}{2} } \left(\begin{array}{l} k \\ j \end{array}\right) \zeta^{k-2j} \bar y^{k-2j} |\zeta|^{2j} |y|^{2j}
+\sum_{\frac{k}{2}<j\leq k} \left(\begin{array}{l} k \\ j \end{array}\right) \bar \zeta^{2j-k} y^{2j-k} |\zeta|^{2k-2j} |y|^{2k-2j}
$$
for $k$ odd and
$$(y\bar \zeta+\bar y \zeta)^k=
\sum_{0\leq j <\frac{k}{2} } \left(\begin{array}{l} k \\ j \end{array}\right) \zeta^{k-2j} \bar y^{k-2j} |\zeta|^{2j} |y|^{2j}
+\sum_{\frac{k}{2}<j\leq k} \left(\begin{array}{l} k \\ j \end{array}\right) \bar \zeta^{2j-k} y^{2j-k} |\zeta|^{2k-2j} |y|^{2k-2j}
+ \left(\begin{array}{l} k \\ \frac{k}{2} \end{array}\right) |\zeta|^k |y|^k
$$
for $k$ even, by symmetry we can simplify the expression of $I$ as follows:
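This regrouping of the binomial expansion by the sign of $k-2j$ can be cross-checked numerically; in the sketch below, $y$ and $\zeta$ are arbitrary sample values (not taken from the text):

```python
from math import comb

# sanity check: regroup (y ζ̄ + ȳ ζ)^k by the sign of k - 2j and
# compare with the plain power; y, z are arbitrary sample points
def regrouped(y, z, k):
    total = 0.0
    for j in range(k + 1):
        if 2 * j < k:      # terms carrying ζ^(k-2j) ȳ^(k-2j)
            total += comb(k, j) * z**(k - 2*j) * y.conjugate()**(k - 2*j) \
                     * abs(z)**(2*j) * abs(y)**(2*j)
        elif 2 * j > k:    # conjugate terms carrying ζ̄^(2j-k) y^(2j-k)
            total += comb(k, j) * z.conjugate()**(2*j - k) * y**(2*j - k) \
                     * abs(z)**(2*k - 2*j) * abs(y)**(2*k - 2*j)
        else:              # middle term |ζ|^k |y|^k, present for k even only
            total += comb(k, k // 2) * abs(z)**k * abs(y)**k
    return total

y, z = 0.3 + 0.7j, -1.2 + 0.4j
for k in range(1, 8):
    assert abs((y*z.conjugate() + y.conjugate()*z)**k - regrouped(y, z, k)) < 1e-9
```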
\begin{eqnarray*}
I&=&
\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y|^2+|\zeta|^2-1)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy
-\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y\bar \zeta+\bar y \zeta)^{k+1}}{(1+|y|^2+|\zeta|^2)^{5+k}}dy\\
&=&\sum_{k=0}^{+\infty} c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} (|y|^2+|\zeta|^2-1)}{(1+|y|^2+|\zeta|^2)^{5+2k}} dy
-\sum_{k=1}^{+\infty} c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy
\end{eqnarray*}
Since $I^p_q=\displaystyle \int_0^{\infty} \frac{ \rho^p}{(1+\rho)^q}d\rho$, $q>p+1$, satisfies the relations
\begin{equation} \label{Ipq}
I^p_{q+1}=\frac{q-p-1}{q}I^p_q\:,\qquad I^{p+1}_q=\frac{p+1}{q-p-2} I^p_q,
\end{equation}
through the change of variable $\rho^2=\lambda t$, $\lambda=1+|\zeta|^2$, in polar coordinates we have that
\begin{eqnarray} \label{1748}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{5+2k}} dy
&=&
\pi \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{5+2k}
=\pi \frac{3+k-\frac{n}{n+1}}{4+2k}\lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{4+2k}\nonumber\\
&=&\frac{3+k-\frac{n}{n+1}}{2(2+k)(1+|\zeta|^2)}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy
\end{eqnarray}
and
\begin{eqnarray} \label{1818}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}-2+2k} }{(1+|y|^2+|\zeta|^2)^{2+2k}} dy
&=&
\pi \lambda^{\frac{n}{n+1}-2-k} I^{\frac{n}{n+1}-1+k}_{2+2k}
=\pi \frac{(2+2k)(3+2k)}{(k+\frac{n}{n+1})(2+k-\frac{n}{n+1})}\lambda^{\frac{n}{n+1}-2-k} I^{\frac{n}{n+1}+k}_{4+2k}\nonumber\\
&=&\frac{(2+2k)(3+2k)}{(k+\frac{n}{n+1})(2+k-\frac{n}{n+1})}(1+|\zeta|^2)
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy.
\end{eqnarray}
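The relations \eqref{Ipq} follow from the Beta-function identity $I^p_q=B(p+1,q-p-1)$; a numerical sketch (with sample exponents chosen arbitrarily) confirms both recurrences:

```python
from math import gamma

def I(p, q):
    # closed form I^p_q = Γ(p+1) Γ(q-p-1) / Γ(q), valid for q > p + 1 > 0
    return gamma(p + 1) * gamma(q - p - 1) / gamma(q)

p, q = 0.75, 5.0  # arbitrary sample with q > p + 2 so both relations apply
assert abs(I(p, q + 1) - (q - p - 1) / q * I(p, q)) < 1e-12
assert abs(I(p + 1, q) - (p + 1) / (q - p - 2) * I(p, q)) < 1e-12
```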
Inserting \eqref{1748} and \eqref{1818} into $I$, we get that
\begin{eqnarray*}
I&=&
\sum_{k=0}^{+\infty} c_{2k} \left(1-\frac{3+k-\frac{n}{n+1}}{(2+k)(1+|\zeta|^2)}\right)\left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy\\
&&-\sum_{k=1}^{+\infty} c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy\\
&=&
\sum_{k=1}^{+\infty} \left[\frac{2(3+2k)c_{2k-2}}{k+\frac{n}{n+1}} \left(\frac{1+k}{2+k-\frac{n}{n+1}}-\frac{1}{1+|\zeta|^2}\right)\left(\begin{array}{c} 2k-2 \\ k-1 \end{array}\right) (1+|\zeta|^2)-c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^2 \right]\times\\
&&\times |\zeta|^{2k-2} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy.
\end{eqnarray*}
Since $2(3+2k)c_{2k-2} \left(\begin{array}{c} 2k-2 \\ k-1 \end{array}\right)=k
c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right)$ for all $k \geq 1$, setting $\beta_k=c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k-2} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy$ we deduce that
\begin{eqnarray*}
I&=&
\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}} \left(\frac{1+k}{2+k-\frac{n}{n+1}}-\frac{1}{1+|\zeta|^2}\right) (1+|\zeta|^2)- |\zeta|^2 \right] \beta_k\\
&=&
\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}} \left(\frac{|\zeta|^2}{1+|\zeta|^2}-\frac{1}{(2+k)(n+1)-n} \right) (1+|\zeta|^2)- |\zeta|^2 \right] \beta_k<\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}}-1\right] |\zeta|^2 \beta_k<0.
\end{eqnarray*}
In conclusion, we have shown that $I=f(|\zeta|)$ with $f<0$.
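The combinatorial identity $2(3+2k)c_{2k-2}\binom{2k-2}{k-1}=k\,c_{2k-1}\binom{2k}{k}$ used above can be checked directly, e.g. with a short script:

```python
from math import comb, factorial

def c(k):
    # Taylor coefficients of (1-x)^(-5): c_k = (4+k)!/(24 k!)
    return factorial(4 + k) // (24 * factorial(k))

# verify 2(3+2k) c_{2k-2} C(2k-2, k-1) = k c_{2k-1} C(2k, k) for k >= 1
for k in range(1, 25):
    assert 2 * (3 + 2*k) * c(2*k - 2) * comb(2*k - 2, k - 1) \
           == k * c(2*k - 1) * comb(2*k, k)
```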
\noindent By the change of variable $y \to y+\zeta$ and the Taylor expansion of $(1-x)^{-5}$, arguing as before, $K$ can be re-written as
\begin{eqnarray*}
K&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y-\zeta)}{(1+|y-\zeta|^2)^5}dy
=\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y-\zeta)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy.
\end{eqnarray*}
By the previous expansions of $(y\bar \zeta+\bar y \zeta)^k$ and
\begin{eqnarray*}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2+2k}}{(1+|y|^2+|\zeta|^2)^{6+2k}}dy
&=&\pi \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+1+k}_{6+2k}=
\pi \frac{\frac{n}{n+1}+1+k}{5+2k} \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{5+2k}\\
&=&
\frac{\frac{n}{n+1}+1+k}{5+2k}\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy,
\end{eqnarray*}
by symmetry $K$ reduces to
\begin{eqnarray*}
K&=&
\zeta \, \sum_{k=0}^{+\infty} \left[c_{2k+1} \frac{\frac{n}{n+1}+1+k}{5+2k}\left(\begin{array}{c} 2k+1 \\ k \end{array}\right)
-c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right)
\right]
|\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy.
\end{eqnarray*}
Since $(1+k) c_{2k+1} \left(\begin{array}{c} 2k+1 \\ k \end{array}\right)=(5+2k) c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right)$ for all $k \geq 0$, we get that
\begin{eqnarray*}
K&=&
\zeta \, \sum_{k=0}^{+\infty} \frac{n}{(n+1)(1+k)} c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right)
|\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy.
\end{eqnarray*}
In conclusion, we have shown that $K=g(|\zeta|)\zeta$ with $g>0$.
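Likewise, the identity $(1+k)c_{2k+1}\binom{2k+1}{k}=(5+2k)c_{2k}\binom{2k}{k}$ admits a direct check:

```python
from math import comb, factorial

def c(k):
    # Taylor coefficients of (1-x)^(-5): c_k = (4+k)!/(24 k!)
    return factorial(4 + k) // (24 * factorial(k))

# verify (1+k) c_{2k+1} C(2k+1, k) = (5+2k) c_{2k} C(2k, k) for k >= 0
for k in range(25):
    assert (1 + k) * c(2*k + 1) * comb(2*k + 1, k) \
           == (5 + 2*k) * c(2*k) * comb(2*k, k)
```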
\noindent In order to determine the asymptotic behavior of $f$ and $g$ as $|\zeta|\to +\infty$, we will use complex analysis to derive integral representations of $f$ and $g$, see \eqref{exprI-2J} and \eqref{exprK}. We split $I$ as $I=J_1-2J_2$, and we compute separately the constants
$$J_1=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^4}dy\:,\quad J_2=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}dy.$$
Concerning $J_1$, we re-write it in polar coordinates as
\begin{eqnarray*}
J_1&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}}{(1+|y-\zeta|^2)^4}dy=\int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_0^{2\pi} \frac{d\theta}{(1+\rho^2+|\zeta|^2-\zeta\rho e^{-i\theta}-\overline{\zeta}\rho e^{i\theta})^4}\\
&=&- i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^3}{(\overline{\zeta}\rho)^4 (w^2-\frac{1+\rho^2+|\zeta|^2}{\overline{\zeta}\rho}w+\frac{\zeta^2}{|\zeta|^2})^4}dw.
\end{eqnarray*}
Since $w^2- \displaystyle \frac{1+\rho^2+|\zeta|^2}{\overline{\zeta}\rho}w+\frac{\zeta^2}{|\zeta|^2}$ vanishes only at
$$w_\pm=\frac{1+\rho^2+|\zeta|^2\pm \sqrt{(1+\rho^2+|\zeta|^2)^2-4\rho^2|\zeta|^2}}{2 \overline{\zeta} \rho}$$
with $|w_-|<1<|w_+|$, by the Residue Theorem we have that
\begin{eqnarray*}
J_1= - i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^3}{(\overline{\zeta}\rho)^4 (w-w_-)^4(w-w_+)^4}dw=2\pi
\int_0^{\infty} \frac{ \rho^{\frac{2n}{n+1}+1} }{6 (\overline{\zeta}\rho)^4} \frac{d^3}{d w^3} \left[ \frac{w^3}{(w-w_+)^4}\right](w_-) d \rho .
\end{eqnarray*}
A straightforward computation shows that
$$ \frac{d^3}{d w^3}\left[ \frac{w^3}{(w-w_+)^4}\right]=-6\frac{w^3+w_+^3+9w w_+(w+w_+)}{(w-w_+)^7},$$
and then
$$ \frac{d^3}{d w^3}\left[ \frac{w^3}{(w-w_+)^4}\right](w_-)=6 (\overline{\zeta}\rho)^4 \frac{(1+\rho^2+|\zeta|^2)[(1+\rho^2+|\zeta|^2)^2+6\rho^2 |\zeta|^2]}{[(1+\rho^2+|\zeta|^2)^2-4\rho^2 |\zeta|^2]^{\frac{7}{2}}}.$$
Recalling that $\lambda=1+|\zeta|^2$, through the change of variable $\rho \to \rho^2$ we finally get for $J_1$ the expression
\begin{eqnarray}\label{exprI}
J_1=\pi
\int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)[(\lambda+\rho)^2+6(\lambda-1)\rho]}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{7}{2}}}
d \rho.
\end{eqnarray}
\noindent In a similar way, we first re-write $J_2$ as
\begin{eqnarray*}
J_2= i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^4}{(\overline{\zeta}\rho)^5 (w-w_-)^5 (w-w_+)^5}dw=
-2\pi \int_0^{+\infty} \frac{ \rho^{\frac{2n}{n+1}+1} }{24 (\overline{\zeta}\rho)^5} \frac{d^4}{d w^4} \left[ \frac{w^4}{(w-w_+)^5}\right](w_-) d\rho
\end{eqnarray*}
in view of the Residue Theorem. Since
$$ \frac{d^4}{d w^4}\left[ \frac{w^4}{(w-w_+)^5}\right]=24\frac{w^4+w_+^4+16w w_+(w^2+w_+^2)+36 w^2 w_+^2}{(w-w_+)^9},$$
we get that
$$ \frac{d^4}{d w^4}\left[ \frac{w^4}{(w-w_+)^5}\right](w_-)=-24 (\overline{\zeta}\rho)^5 \frac{(1+\rho^2+|\zeta|^2)^4 +12 \rho^2|\zeta|^2 (1+\rho^2+|\zeta|^2)^2+42\rho^4 |\zeta|^4}{[(1+\rho^2+|\zeta|^2)^2-4\rho^2 |\zeta|^2]^{\frac{9}{2}}},$$
and then
\begin{eqnarray}\label{exprJ}
J_2=\pi
\int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^4+12(\lambda-1)\rho (\lambda+\rho)^2+42(\lambda-1)^2\rho^2}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}}
d \rho.
\end{eqnarray}
\noindent By \eqref{exprI}-\eqref{exprJ} we finally get that $f(|\zeta|)$ takes the form
\begin{eqnarray} \label{exprI-2J}
f=\pi \int_0^{\infty} \hspace{-0.3cm} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^5-2(\lambda+\rho)^4+2(\lambda-1) \rho (\lambda+\rho)^3 -24 \lambda (\lambda-1) \rho(\rho+1) (\lambda+\rho) -84(\lambda-1)^2 \rho^2 }{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}}
d \rho,
\end{eqnarray}
where $\lambda=1+|\zeta|^2$.
\noindent Observe that for $\zeta=0$ (i.e. $\lambdabdambda=1$) we simply have that
\begin{equation} \lambdabdabel{1228}
f(0)=J_1-2J_2=\pi [I^{\frac{n}{n+1}}_4-2 I^{\frac{n}{n+1}}_5]=-\frac{2\pi}{2n+3}I^{\frac{n}{n+1}}_5
\epsilonnd{equation}
in view of \eqref{Ipq}. By the change of variable $\rho=\lambda+\sqrt \lambda t$ and the Lebesgue Theorem we get that
\begin{eqnarray*}
\lambda^{-\frac{n}{n+1}} J_1=\pi \int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{(2+\frac{t}{\sqrt \lambda})^3+6\frac{\lambda-1}{\lambda}
(1+\frac{t}{\sqrt \lambda})(2+\frac{t}{\sqrt \lambda})}{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{7}{2}}} dt \to 20 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{7}{2}}}
\end{eqnarray*}
and
\begin{eqnarray*}
\lambda^{-\frac{n}{n+1}} J_2&=&\pi \int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{(2+\frac{t}{\sqrt \lambda})^4+12 \frac{\lambda-1}{\lambda}
(1+\frac{t}{\sqrt \lambda})(2+\frac{t}{\sqrt \lambda})^2 +42 (\frac{\lambda-1}{\lambda})^2
(1+\frac{t}{\sqrt \lambda})^2 }{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{9}{2}}} dt\\
& \to & 106 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}}
\end{eqnarray*}
as $|\zeta|\to +\infty$ (i.e. $\lambda \to +\infty$). Since $\displaystyle \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{7}{2}}}=\frac{14}{3} \displaystyle \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}},$ we get that
\begin{equation} \label{1902}
\frac{f(|\zeta|)}{|\zeta|^{\frac{2n}{n+1}}} \to -\frac{356}{3} \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}}
\end{equation}
as $|\zeta|\to \infty$.
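\noindent Let us also record a quick check of the ratio $\frac{14}{3}$ used above: the substitution $t=2\tan \theta$ gives
$$\int_{\mathbb{R}} \frac{dt}{(t^2+4)^{s}}=2^{1-2s}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^{2s-2}\theta \, d\theta,$$
so that, by the Wallis recursion $\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^{7}\theta \, d\theta=\frac{6}{7}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^{5}\theta \, d\theta$, there holds
$$\int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{7}{2}}} \Big/ \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}}=2^{2} \cdot \frac{7}{6}=\frac{14}{3}.$$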
\noindent In a similar way, for $K$ we have that
\begin{eqnarray*}
K= i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^4(\rho w-\zeta)}{(\overline{\zeta}\rho)^5 (w-w_-)^5 (w-w_+)^5}dw=
-2\pi \int_0^{+\infty} \frac{\rho^{\frac{2n}{n+1}+1} }{24 (\overline{\zeta}\rho)^5} \frac{d^4}{d w^4} \left[ \frac{w^4(\rho w-\zeta)}{(w-w_+)^5}\right](w_-) d\rho
\end{eqnarray*}
in view of the Residue Theorem. Since
$$ \frac{d^4}{d w^4}\left[ \frac{w^4(\rho w-\zeta)}{(w-w_+)^5}\right]=24\frac{5\rho w w_+[w^3+w_+^3+6ww_+(w+w_+)]-\zeta [w^4+w_+^4+16w w_+(w^2+w_+^2)+36 w^2 w_+^2]}{(w-w_+)^9},$$
we get that
$$ \frac{d^4}{d w^4}\left[ \frac{w^4(\rho w- \zeta)}{(w-w_+)^5}\right](w_-)=12 (\overline{\zeta}\rho)^5 \zeta \frac{(\lambda+\rho^2)^4+2\rho^2 (\lambda-6-5\rho^2) (\lambda+\rho^2)^2+6(\lambda-1)\rho^4 (2\lambda-7-5\rho^2)}{[(\lambda+\rho^2)^2-4(\lambda-1)\rho^2 ]^{\frac{9}{2}}},$$
and then
\begin{eqnarray}\label{exprK}
g(|\zeta|)=-\frac{\pi}{2}
\int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^4+2\rho (\lambda-6-5\rho) (\lambda+\rho)^2+6(\lambda-1)\rho^2 (2\lambda-7-5\rho)}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}}
d \rho.
\end{eqnarray}
So, we have that
\begin{equation} \label{1903}
g(0)=\frac{\pi}{2}(9I_5^{\frac{n}{n+1}}-10 I_6^{\frac{n}{n+1}})=\frac{3n+1}{2(n+1)} \pi I_5^{\frac{n}{n+1}}
\end{equation}
in view of \eqref{Ipq}, and, by the change of variable $\rho=\lambda+\sqrt \lambda t$ and the Lebesgue Theorem,
\begin{eqnarray} \label{1904}
\frac{g(|\zeta|)}{|\zeta|^{\frac{2n}{n+1}}} \to 17 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}}
\end{eqnarray}
as $|\zeta|\to +\infty$, in view of
$$\int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{
(2+\frac{t}{\sqrt \lambda})^4-2(1+\frac{t}{\sqrt \lambda}) (4+\frac{6+5 \sqrt \lambda t}{\lambda}) (2+\frac{t}{\sqrt \lambda})^2-6\frac{\lambda-1}{\lambda} (1+\frac{t}{\sqrt \lambda})^2 (3+\frac{7+5 \sqrt \lambda t}{\lambda})}{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{9}{2}}} dt\to - \int_{\mathbb{R}} \frac{34\, dt}{(t^2+4)^{\frac{9}{2}}}$$
as $\lambda \to +\infty$.
\noindent {\bf Acknowledgements:} The work for this paper began while the second author was visiting the Departamento de Matem\'atica, Pontificia Universidad Cat\'olica de Chile (Santiago, Chile). He thanks M. Musso and M. del Pino for their kind invitation and hospitality.
\end{appendices}
\end{document}
\begin{document}
\sloppy{}
\let\WriteBookmarks\relax
\def\floatpagepagefraction{1}
\def\textpagefraction{.001}
\shorttitle{Efficient and Effective Local Search for the SUKP and BMCP}
\shortauthors{Zhu et~al.}
\author[1]{Wenli Zhu}
\author[2]{Liangqing Luo}
\address[1]{School of Statistics, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China; Email: zzhuwenli@163.com}
\address[2]{School of Statistics, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China; Corresponding author, Email: llq6429@163.com}
\title [mode = title]{Efficient and Effective Local Search for the Set-Union Knapsack Problem and Budgeted Maximum Coverage Problem}
\begin{abstract}
The Set-Union Knapsack Problem (SUKP) and Budgeted Maximum Coverage Problem (BMCP) are two closely related variant problems of the popular knapsack problem. Given a set of weighted elements and a set of items with nonnegative values, where each item covers several distinct elements, these two problems both aim to find a subset of items that maximizes an objective function while satisfying a knapsack capacity (budget) constraint. We propose an efficient and effective local search algorithm called E2LS for these two problems. To our knowledge, this is the first time that a single algorithm has been proposed for both of them. E2LS trades off the search region against the search efficiency by applying a proposed novel operator ADD$^*$ to traverse the refined search region. Such a trade-off mechanism allows E2LS to explore the solution space widely and quickly. The tabu search method is also applied in E2LS to help the algorithm escape from local optima. Extensive experiments on a total of 168 public instances of various scales demonstrate the excellent performance of the proposed algorithm for both the SUKP and BMCP.
\end{abstract}
\iffalse
\begin{highlights}
\item Propose an efficient and effective local search called E2LS for the SUKP and BMCP.
\item Investigate for the first time an algorithm for solving both of these two problems.
\item Propose to improve the search efficiency by refining the search region.
\item Propose an effective operator ADD$^*$ to traverse the refined search region.
\item Experimental results demonstrate the superiority of the proposed algorithm.
\end{highlights}
\fi
\begin{keywords}
Set-union knapsack problem \sep Budgeted maximum coverage problem \sep Local search \sep Tabu search
\end{keywords}
\maketitle
\section{Introduction}
\label{Sec_Intro}
The Set-Union Knapsack Problem (SUKP)~\citep{Goldschmidt1994} and Budgeted Maximum Coverage Problem (BMCP)~\citep{Khuller1999} are two closely related NP-hard combinatorial optimization problems. Let $I = \{i_1,...,i_m\}$ be a set of $m$ items, where each item $i_j, j \in \{1,...,m\}$ has a nonnegative value $v_j$, let $E = \{e_1,...,e_n\}$ be a set of $n$ elements, where each element $e_k, k \in \{1,...,n\}$ has a nonnegative weight $w_k$, and let $C$ be the capacity of a given knapsack in the SUKP (or the budget in the BMCP). The items and elements are associated by a relation matrix $R \in \{0,1\}^{m \times n}$, where $R_{jk} = 1$ indicates that $e_k$ is covered by $i_j$, and $R_{jk} = 0$ otherwise. The SUKP aims to find a subset $S$ of $I$ that maximizes the total value of the items in $S$, while the total weight of the elements covered by the items in $S$ does not exceed the capacity $C$. The SUKP can be stated formally as follows.
\begin{equation}
\label{eq_f}
\text{Maximize}~~f(S) = \sum\nolimits_{j \in \{j | i_j\in S\}}v_j,
\end{equation}
\begin{equation}
\label{eq_W}
\text{Subject to}~~W(S) = \sum\nolimits_{k \in \{k | R_{jk} = 1, i_j \in S\}}w_k \leq C.
\end{equation}
For the BMCP, the goal is to find a subset $S$ of $I$ that maximizes the total weight of the elements covered by the items in $S$, while the total value of the items in $S$ does not exceed the capacity (budget) $C$. The BMCP can be stated formally as follows.
\begin{equation}
\text{Maximize}~~W(S),
\end{equation}
\begin{equation}
\text{Subject to}~~f(S) \leq C.
\end{equation}
Obviously, the SUKP and BMCP can be transferred to each other by swapping the optimization objective and constraint objective. Both the SUKP and BMCP are computationally challenging and have many real-world applications, such as flexible manufacturing~\citep{Goldschmidt1994}, financial decision making~\citep{Khuller1999,Kellerer2004}, data compression~\citep{Yang2016}, software defined network optimization~\citep{Kar2016}, project investment~\citep{Wei2019}, etc.
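To make the two formulations concrete, here is a small illustrative sketch (not part of the paper; the toy values, weights, and relation matrix are hypothetical) that evaluates the objective $f(S)$ and the covered weight $W(S)$ shared by both problems:

```python
# Toy illustration (hypothetical data) of the two objectives:
# f(S) is the SUKP objective / BMCP budget, W(S) the covered weight.
values  = [10, 7, 4]                   # item values v_j
weights = [3, 5, 2, 6]                 # element weights w_k
R = [[1, 1, 0, 0],                     # R[j][k] = 1 iff item i_j covers e_k
     [0, 1, 1, 0],
     [0, 0, 0, 1]]

def f(S):
    """Total value of the items in S."""
    return sum(values[j] for j in S)

def W(S):
    """Total weight of the distinct elements covered by the items in S."""
    covered = {k for j in S for k in range(len(weights)) if R[j][k]}
    return sum(weights[k] for k in covered)

S = {0, 1}
print(f(S), W(S))  # 17 10: the shared element is counted only once in W(S)
```

The SUKP maximizes $f(S)$ subject to $W(S) \leq C$; the BMCP maximizes $W(S)$ subject to $f(S) \leq C$.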
Exact and approximation algorithms are two kinds of methods for the SUKP and BMCP, for example, exact algorithms for the SUKP based on dynamic programming~\citep{Arulselvan2014} and linear integer programming~\citep{Wei2019}, and greedy approximation algorithms for the SUKP~\citep{Taylor2016} and BMCP~\citep{Khuller1999}. These algorithms can all theoretically guarantee the quality of their solutions, but the exact algorithms hardly scale to large instances, and the approximation algorithms seldom yield high-quality results.
Heuristic algorithms such as population-based algorithms and local search algorithms are more practical than exact and approximation algorithms. Population-based methods usually apply bio-inspired metaheuristic operations among the population, so as to find excellent individuals. He et al.~\citep{He2018} first proposed a binary artificial bee colony algorithm for the SUKP. Later on, a swarm intelligence-based algorithm~\citep{Ozsoydan2019} was proposed to solve the SUKP. There are also some hybrid algorithms for the SUKP that combine population-based methods with local search to improve the performance. For example, Lin et al.~\citep{Lin2019} combined binary particle swarm optimization with tabu search, Dahmani et al.~\citep{Dahmani2020} combined swarm optimization with local search operators, and Wu and He~\citep{Wu2020} proposed a hybrid Jaya algorithm for the SUKP. Population-based methods have attracted much attention. However, these methods are more complex to design than local search algorithms, and the performance of existing population-based methods is not as good as that of the state-of-the-art local search methods for the SUKP.
For the local search methods, the I2PLS algorithm~\citep{Wei2019} is the first local search method for the SUKP, which is based on tabu search. Later on, some better tabu search methods were proposed~\citep{Wei2021KBTS,Wei2021MSBTS}. Wei and Hao~\citep{Wei2021MSBTS} tested these tabu search algorithms on SUKP instances with no more than 1,000 items and elements. Recently, Zhou et al.~\citep{Zhou2021} proposed an efficient local search algorithm called ATS-DLA for the SUKP, and tested the algorithm on SUKP instances with up to 5,000 items and elements. For the BMCP, Li et al.~\citep{Li2021} proposed the first local search method. Zhou et al.~\citep{Zhou2022} proposed a local search algorithm based on a partial depth-first search tree.
Local search algorithms have obtained excellent results for the SUKP and BMCP. However, the existing local search algorithms for these two problems still have some disadvantages. The I2PLS~\citep{Wei2019}, KBTS~\citep{Wei2021KBTS}, MSBTS~\citep{Wei2021MSBTS}, and PLTS~\citep{Li2021} algorithms all tend to find the best neighbor solution of the current solution in each iteration. However, their search neighborhoods contain many low-quality moves, which may reduce the algorithm efficiency. The search region of the ATS-DLA algorithm~\citep{Zhou2021} is small, which may make it hard for the algorithm to escape from some local optima. The VDLS algorithm~\citep{Zhou2022} has a wide and deep search region. However, VDLS does not allow the current solution to be worse than the previous one, which may restrict the algorithm's search ability. In summary, these local search methods cannot trade off the efficiency and the search region well.
To handle this issue, we propose an efficient and effective local search algorithm, called E2LS, for both the SUKP and BMCP. To the best of our knowledge, this is the first time that an algorithm has been proposed to solve the SUKP and BMCP simultaneously. E2LS uses a random greedy algorithm to generate the initial solution, and an effective local search method to explore the solution space. E2LS restricts the items that can be removed from or added into the current solution to improve the search efficiency. In this way, E2LS can refine the search region by abandoning low-quality candidates. The local search operator in E2LS then traverses the refined search region to find high-quality moves. Thus E2LS can explore the solution space widely and quickly, so as to find high-quality solutions. Moreover, the tabu search method in~\citep{Wei2021MSBTS} is used in E2LS to prevent the algorithm from getting stuck in local optima. Indeed, as we have shown in this work, our proposed E2LS algorithm significantly outperforms the state-of-the-art heuristic algorithms for both the SUKP and BMCP.
The main contributions of this work are as follows.
\begin{itemize}
\item We propose an efficient and effective local search algorithm called E2LS for the SUKP and BMCP. To our knowledge, this is the first algorithm designed to solve both of these problems.
\item E2LS trades off the search region against the search efficiency well. E2LS restricts the items that can be removed from or added into the current solution, so as to abandon low-quality moves and refine the search region. The proposed operator ADD$^*$ traverses the refined search region, so as to explore the solution space widely and efficiently.
\item The mechanism that trades off the search region and efficiency, as well as the method of traversing the refined search region, could be applied to other combinatorial optimization problems.
\item Extensive experiments demonstrate that E2LS significantly outperforms the state-of-the-art algorithms for both the SUKP and BMCP. In particular, E2LS provides four new best-known solutions for the SUKP, and 27 new best-known solutions for the BMCP.
\end{itemize}
\section{The Proposed E2LS Algorithm}
\label{Sec_Method}
This section introduces the proposed E2LS algorithm. We first present the components of E2LS, including the random greedy initialization method and the searching methods, then present the main process of E2LS. Finally, we discuss the advantages of E2LS over other state-of-the-art local search algorithms for the SUKP and BMCP. Note that the proposed E2LS algorithm can be used to solve both the SUKP and BMCP. This section mainly introduces its SUKP version. The BMCP version can be obtained simply by swapping the optimization objective and constraint objective of the SUKP version.
Before introducing our method, we first present several essential definitions used in the E2LS algorithm.
\textbf{Definition 1. (Additional Weight)} Note that $W(S)$ (see Eq. \ref{eq_W}) is the total weight of the elements covered by the items in $S$. Let $AW(S,i_j)$ be the additional weight of an item $i_j$ with respect to a solution $S$. If $i_j \notin S$, the additional weight $AW(S,i_j) = W(S \cup \{i_j\}) - W(S)$ represents the increase in the total weight of the covered elements caused by adding $i_j$ into $S$. Otherwise, $AW(S,i_j) = W(S) - W(S \backslash \{i_j\})$ represents the decrease in the total weight of the covered elements caused by removing $i_j$ from $S$.
\textbf{Definition 2. (Value-weight Ratio)} The value-weight ratio of an item $i_j$ to a solution $S$ is defined as $R_{vw}(S,i_j) = v_j / AW(S,i_j)$, which is the ratio of the value of $i_j$ to the additional weight of $i_j$ to $S$. Obviously, an item with a larger value-weight ratio to a solution $S$ is a better candidate item of $S$~\citep{Khuller1999,Zhou2021,Zhou2022}.
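A hedged sketch of these two definitions (the instance data and helper names below are illustrative, not from the paper):

```python
# Hedged sketch of Definitions 1-2 on hypothetical toy data.
values  = [10, 7, 4]
weights = [3, 5, 2, 6]
cover   = [{0, 1}, {1, 2}, {3}]        # elements covered by each item

def covered(S):
    return set().union(*(cover[j] for j in S)) if S else set()

def AW(S, j):
    """Additional weight of item j w.r.t. solution S (Definition 1)."""
    if j in S:                         # decrease caused by removing j
        return sum(weights[k] for k in covered(S) - covered(S - {j}))
    return sum(weights[k] for k in cover[j] - covered(S))

def R_vw(S, j):
    """Value-weight ratio (Definition 2); infinite when AW is zero."""
    aw = AW(S, j)
    return float('inf') if aw == 0 else values[j] / aw

print(AW({0}, 1), R_vw({0}, 1))  # 2 3.5: only element 2 (weight 2) is new
```

Note that $AW$ only charges an item for the elements it covers beyond those already covered, which is what makes items with $AW = 0$ "free" additions.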
\subsection{Random Greedy Initialization Method}
We propose a simple construction method to generate the initial solution for E2LS. The procedure of the initialization method is shown in Algorithm \ref{alg_init}.
\begin{algorithm}[t]
\caption{Random\_Greedy($t$)}
\label{alg_init}
\LinesNumbered
\KwIn{Sampling times $t$}
\KwOut{Solution $S$}
Initialize $S \leftarrow \emptyset$\;
\While{TRUE}{
Initialize feasible candidates $FC \leftarrow \emptyset$\;
\For{$j \leftarrow 1 : m$}{
\lIf{$i_j \in S$}{\textbf{continue}}
\lIf{$AW(S,i_j) = 0$}{$S \leftarrow S \cup \{i_j\}$}
\lIf{$AW(S,i_j) + W(S) \leq C$}{$FC \leftarrow FC \cup \{i_j\}$}
}
\lIf{$FC = \emptyset$}{\textbf{break}}
\Else{
$M \leftarrow 0$\;
\For{$j \leftarrow 1 : t$}{
$i_r \leftarrow$ a random item in $FC$\;
\If{$R_{vw}(S,i_r) > M$}{$M \leftarrow R_{vw}(S,i_r), i_b \leftarrow i_r$}
}
$S \leftarrow S \cup \{i_b\}$\;
}
}
\textbf{return} $S$\;
\end{algorithm}
The algorithm starts with an empty solution $S$, and repeatedly adds items into $S$ until no more items can be added (line 8). In each loop, each item $i_j \notin S$ with $AW(S,i_j) = 0$ is added into $S$ (line 6). Such an operation increases $f(S)$ (see Eq. \ref{eq_f}) without increasing $W(S)$. When there is at least one item whose addition to $S$ results in a feasible solution, i.e., the set of feasible candidates $FC \neq \emptyset$, the algorithm applies the probabilistic sampling strategy~\citep{Cai2015,Zheng2021} to select the item to be added. Specifically, the algorithm first randomly samples $t$ items with replacement from $FC$ (lines 11-12), then adds the sampled item with the maximum value-weight ratio (lines 13-15).
We set the parameter $t$ to be $\sqrt{\max\{m,n\}}$ as the algorithm MSBTS~\citep{Wei2021MSBTS} does. This setting can help the E2LS algorithm yield high-quality and diverse initial solutions. In particular, we analyze the influence of $t$ on the performance of E2LS in experiments. The results show that E2LS is very robust and not sensitive to the parameter $t$. Even with a random initialization method (i.e., $t = 1$), E2LS also has excellent performance.
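The construction above can be sketched compactly as follows (an illustrative Python sketch on hypothetical toy data, with `cover[j]` the set of elements item $j$ covers; it mirrors Algorithm \ref{alg_init} but is not the authors' implementation):

```python
import random

# Minimal sketch of the Random_Greedy construction on toy data.
def random_greedy(values, cover, weights, C, t):
    S, cov = set(), set()                      # current solution and coverage
    aw = lambda j: sum(weights[k] for k in cover[j] - cov)
    while True:
        FC = []
        for j in range(len(values)):
            if j in S:
                continue
            if aw(j) == 0:                     # free item: add it directly
                S.add(j)
            elif sum(weights[k] for k in cov) + aw(j) <= C:
                FC.append(j)
        if not FC:
            break
        # probabilistic sampling: t draws with replacement, keep best ratio
        best = max(random.choices(FC, k=t), key=lambda j: values[j] / aw(j))
        S.add(best)
        cov |= cover[best]
    return S

random.seed(0)
S = random_greedy([10, 7, 4], [{0, 1}, {1, 2}, {3}], [3, 5, 2, 6], C=10, t=2)
print(S)  # a feasible initial solution; larger t makes the choice greedier
```

With $t = 1$ this degenerates to a uniformly random feasible choice, while large $t$ approaches the purely greedy rule, matching the robustness discussion above.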
\subsection{Searching Methods in E2LS}
The local search method in E2LS is the main improvement of E2LS to other local search algorithms for the SUKP and BMCP. In E2LS, we propose an efficient and effective local search operator to explore the solution space. We also apply the solution-based tabu search method in~\citep{Wei2021MSBTS} to avoid getting stuck in local optima. This subsection first describes how to represent the tabu list, then introduces the local search process.
\subsubsection{Tabu List Representation}
The tabu search method in~\citep{Wei2021MSBTS} uses three hash vectors $H_1,H_2,H_3$ to represent the tabu list. The length of each vector is set to $L$ ($L = 10^8$ by default). The three hash vectors are initialized to 0, indicating that no solution is prohibited by the tabu list. Each solution $S$ corresponds to three hash values $h_1(S),h_2(S),h_3(S)$. A solution $S$ is prohibited (i.e., in the tabu list) if $H_1[h_1(S)] \wedge H_2[h_2(S)] \wedge H_3[h_3(S)] = 1$. The hash values of a solution are calculated as follows.
For an instance with $m$ items, a weight matrix $\mathcal{W} \in \mathbf{N}^{3 \times m}$ is calculated as follows. First, let $\mathcal{W}_{lj} = \lfloor j^{\gamma_l} \rfloor, l \in \{1,2,3\}, j \in \{1,...,m\}$, where $\gamma_1,\gamma_2,\gamma_3$ are set to 1.2, 1.6, 2.0, respectively. Then, each of the three rows of $\mathcal{W}$ is randomly shuffled. A solution $S$ can be represented by a binary vector $(y_1,...,y_m)$, where $y_j = 1$ if $i_j \in S$ and $y_j = 0$ otherwise. The hash values $h_l(S), l \in \{1,2,3\}$, are then calculated by $h_l(S) = (\sum_{j=1}^{m} \mathcal{W}_{lj} \times y_j) ~\text{mod}~ L$.
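The triple-hash tabu list can be sketched as follows (an illustrative sketch; $L$ is shortened to $10^4$ here for brevity, whereas the paper uses $L = 10^8$):

```python
import random

# Sketch of the solution-based tabu list with three hash vectors.
L = 10_000                             # shortened for illustration
GAMMA = (1.2, 1.6, 2.0)
m = 100
random.seed(42)
Wmat = []
for g in GAMMA:                        # W_{lj} = floor(j^gamma_l), rows shuffled
    row = [int(j ** g) for j in range(1, m + 1)]
    random.shuffle(row)
    Wmat.append(row)

H = [bytearray(L) for _ in range(3)]   # three hash vectors, initially all 0

def hashes(S):
    """S is a set of item indices in 0..m-1 (y_j = 1 iff j in S)."""
    return [sum(row[j] for j in S) % L for row in Wmat]

def is_tabu(S):
    return all(H[l][h] == 1 for l, h in enumerate(hashes(S)))

def make_tabu(S):
    for l, h in enumerate(hashes(S)):
        H[l][h] = 1

S = {3, 17, 58}
print(is_tabu(S))   # False
make_tabu(S)
print(is_tabu(S))   # True
```

Requiring all three vectors to agree keeps the false-positive rate of this Bloom-filter-like structure low while membership tests stay $O(|S|)$.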
\subsubsection{Local Search Process}
The procedure of the local search method in E2LS is shown in Algorithm \ref{alg_LS}. The search operator first removes an item from the input solution $S$ (line 6), and then uses the proposed operator ADD$^*$ (Algorithm \ref{alg_add}) to add items into the resulting solution $S'$ (line 8). The best solution that is not in the tabu list found during the search process is represented by $S_b$ (line 9).
\begin{algorithm}[t]
\caption{Local\_Search($S,r_{num},a_{num}$)}
\label{alg_LS}
\LinesNumbered
\KwIn{Input solution $S$, maximum size of the candidate set of the items to be removed $r_{num}$, maximum size of the candidate set of the items to be added $a_{num}$}
\KwOut{Output solution $S$}
$S_b \leftarrow \emptyset$\;
$U \leftarrow$ a set of items in $S$\;
Sort the items in $U$ in ascending order of their value-weight ratios to $S$\;
\For{$j \leftarrow 1 : \text{min}\{r_{num},|U|\}$}{
$i \leftarrow$ the $j$-th item in $U$\;
$S' \leftarrow S \backslash \{i\}$\;
\If{$S'$ is not in the tabu list}{
$S' \leftarrow$ ADD$^*$($S',\emptyset,a_{num}$)\;
\lIf{$f(S') > f(S_b)$}{$S_b \leftarrow S'$}
}
}
\textbf{return} $S_b$\;
\end{algorithm}
\begin{algorithm}[t]
\caption{ADD$^*$($S,S_b,a_{num}$)}
\label{alg_add}
\LinesNumbered
\KwIn{Input solution $S$, best solution found in this step $S_b$, maximum size of the candidate set of the items to be added $a_{num}$}
\KwOut{Output solution $S$}
\While{TRUE}{
Initialize feasible candidates $FC \leftarrow \emptyset$\;
\For{$j \leftarrow 1 : m$}{
\lIf{$i_j \in S$}{\textbf{continue}}
\If{$AW(S,i_j) = 0$}{
\If{$S \cup \{i_j\}$ is not in the tabu list}{
$S \leftarrow S \cup \{i_j\}$\;
\lIf{$f(S) > f(S_b)$}{$S_b \leftarrow S$}
}
}
\ElseIf{$AW(S,i_j) + W(S) \leq C$}{$FC \leftarrow FC \cup \{i_j\}$}
}
\leIf{$FC = \emptyset$}{\textbf{return} $S_b$}{\textbf{break}}
}
Sort the items in $FC$ in descending order of their value-weight ratios to $S$\;
\For{$j \leftarrow 1 : \text{min}\{a_{num},|FC|\}$}{
$i \leftarrow$ the $j$-th item in $FC$\;
\lIf{$S \cup \{i\}$ is in the tabu list}{\textbf{continue}}
$S' \leftarrow S \cup \{i\}$\;
\lIf{$f(S') > f(S_b)$}{$S_b \leftarrow S'$}
$S' \leftarrow$ ADD$^*$($S',S_b,a_{num}$)\;
}
\textbf{return} $S_b$\;
\end{algorithm}
As shown in Algorithm \ref{alg_add}, the function ADD$^*$ tries to add items into the input solution $S$ recursively (line 18) until no more items can be added into $S$ (line 11). The best solution that is not in the tabu list found during the process is represented by $S_b$ (lines 8 and 17). The algorithm first calculates the set of the feasible candidate items $FC$ of $S$ (lines 1-11). During the process, each item $i_j \notin S$ with $AW(S,i_j) = 0$ will be added into $S$ if the resulting solution is not in the tabu list (lines 5-7). After that, the algorithm traverses part of the set $FC$ to add an item $i$ into $S$ (lines 13-14), and then calls the function ADD$^*$ with the input solution $S \cup \{i\}$ to continue the search (line 18). During the process, the solutions in the tabu list are prohibited (line 15).
The proposed function ADD$^*$ is powerful and actually has a wide search region, for the following reason. If we set $a_{num} = m$, ADD$^*$ can traverse all the combinations of the candidate items through the recursive process, i.e., ADD$^*$ can find the optimal solution corresponding to its input partial solution when the items in its input solution are fixed. In particular, the function ADD$^*$ can be regarded as an exact solver for an instance with $m$ items if we call ADD$^*$($\emptyset,\emptyset,m$). However, such an exhaustive search is inefficient.
To improve the efficiency of the local search process, low-quality moves (candidate items) should not be considered during the search. To yield high-quality results, items with large value-weight ratios to the current solution should be added into (or kept in) it, and items with small value-weight ratios should be removed from (or not added into) it. Therefore, in Algorithm \ref{alg_LS}, the items that can be removed from $S$ are restricted by a parameter $r_{num}$. That is, only the top $\min\{r_{num},|S|\}$ items with minimum value-weight ratios to $S$ can be removed from $S$ (lines 3-4). In Algorithm \ref{alg_add}, only the top $\min\{a_{num},|FC|\}$ items with maximum value-weight ratios to $S$ can be added into $S$ (lines 12-13). By applying this strategy, the search region is refined significantly, i.e., the number of items considered by ADD$^*$ is reduced from $|FC|$ to $\min\{a_{num},|FC|\}$, and the efficiency is improved greatly. In other words, the operator ADD$^*$ does not traverse all possible moves, but only those in the refined search region.
Moreover, our proposed ADD$^*$ operator can add multiple items into the current solution $S$, which is not equivalent to a combination of multiple consecutive $ADD$ operators in~\citep{Wei2019,Wei2021KBTS} or $flip$ operators (on items not in $S$) in~\citep{Wei2021MSBTS,Li2021}. For example, the ADD$^*$ operator may add items $i_1,i_2$, and $i_3$ into $S$, while the best neighbor solution of $S$ found by the commonly used $ADD$ operator might instead add only item $i_4$. In summary, ADD$^*$ is effective because it can traverse various combinations of the items in the refined search region that can be added into the current solution.
\subsubsection{Main Framework of E2LS}
The main process of E2LS is shown in Algorithm \ref{alg_E2LS}. E2LS first initializes the three hash vectors $H_1,H_2,H_3$ to 0 (line 2), and calls the Random\_Greedy function (Algorithm \ref{alg_init}) to generate the initial solution $S$ (line 3). Then, E2LS calls the Local\_Search function (Algorithm \ref{alg_LS}) to explore the solution space until the cut-off time is reached (lines 5-12). During the iterations, the Random\_Greedy function will be called again when the Local\_Search function with the input solution $S$ cannot find a solution that is not in the tabu list (lines 7-8), i.e., when the output solution of the Local\_Search function is $\emptyset$. The solution $S$ obtained in each iteration is added into the tabu list by setting $H_j[h_j(S)]$ to 1 for $j = 1,2,3$ (line 11). The best solution found so far is represented by $S_b$ (line 12).
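The solution-based tabu list built from the three hash vectors can be sketched as follows; the concrete hash functions $h_1,h_2,h_3$ and the vector length are assumptions for illustration, not the paper's exact choices. A solution is tabu only when all three of its bits are set, which keeps the false-positive rate very low.

```python
import hashlib

L = 10 ** 6  # length of each hash vector (illustrative choice)
H = [[0] * L, [0] * L, [0] * L]  # the three hash vectors H1, H2, H3

def hashes(S):
    """Three independent hash values of a solution (hypothetical h1..h3):
    SHA-256 of a canonical encoding of S, salted by the vector index."""
    key = ",".join(map(str, sorted(S))).encode()
    return [int(hashlib.sha256(bytes([j]) + key).hexdigest(), 16) % L
            for j in range(3)]

def is_tabu(S):
    # Tabu only if H1[h1(S)] AND H2[h2(S)] AND H3[h3(S)] are all 1.
    return all(H[j][h] for j, h in enumerate(hashes(S)))

def make_tabu(S):
    for j, h in enumerate(hashes(S)):
        H[j][h] = 1
```

A single collision in one vector is not enough to forbid a solution; all three positions must collide, so unvisited solutions are almost never misclassified as tabu.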
\begin{algorithm}[t]
\caption{E2LS($I,T_{max},t,r_{num},a_{num}$)}
\label{alg_E2LS}
\LinesNumbered
\KwIn{Instance $I$, cut-off time $T_{max}$, sampling times $t$, maximum size of the candidate set of the items to be removed $r_{num}$, maximum size of the candidate set of the items to be added $a_{num}$}
\KwOut{Output solution $S$}
Read instance $I$\;
Initialize $H_1,H_2,H_3$ to 0\;
$S \leftarrow$ Random\_Greedy($t$)\;
Initialize $S_b \leftarrow \emptyset$\;
\While{the cut-off time $T_{max}$ is not reached}{
$S' \leftarrow$ Local\_Search($S,r_{num},a_{num}$)\;
\If{$S' = \emptyset$}{
$S \leftarrow$ Random\_Greedy($t$)\;
\textbf{continue}\;
}
$S \leftarrow S'$\;
\lFor{$j \leftarrow 1 : 3$}{$H_j[h_j(S)] \leftarrow 1$}
\lIf{$f(S) > f(S_b)$}{$S_b \leftarrow S$}
}
\textbf{return} $S_b$\;
\end{algorithm}
\subsubsection{Advantages of E2LS}
This subsection discusses the main advantages of our proposed E2LS over the state-of-the-art local search algorithms for the SUKP and BMCP.
The first category of local search methods includes the I2PLS~\citep{Wei2019}, KBTS~\citep{Wei2021KBTS}, and MSBTS~\citep{Wei2021MSBTS} algorithms for the SUKP, and the PLTS~\citep{Li2021} algorithm for the BMCP. These algorithms are all based on tabu search. Their common shortcoming is that their search neighborhoods contain many low-quality moves, whereas E2LS refines the search neighborhood according to the value-weight ratios of the items. Thus, E2LS achieves much better performance and higher efficiency than these algorithms.
The second category is the ATS-DLA algorithm~\citep{Zhou2021}, which also uses the tabu search method and is efficient for large-scale SUKP instances. However, ATS-DLA can only remove or add one item in each iteration, which leads to a relatively small and overrefined search region, and may make it hard to escape from local optima in some cases. In contrast, E2LS can add multiple items in each iteration by traversing the refined search region and considering various combinations of items. Therefore, E2LS can explore the solution space both wider and deeper than ATS-DLA, so as to find higher-quality solutions.
The third category is the VDLS algorithm~\citep{Zhou2022}, which is based not on tabu search but on a partial depth-first search method. VDLS has a large search region, and can yield significantly better solutions than the PLTS algorithm~\citep{Li2021}. However, VDLS does not allow the solution to get worse during the search process, and the initial solution generated by VDLS is fixed. Such mechanisms limit the search ability of the algorithm. E2LS does not require the current solution to be better than the previous one, and applies the tabu search method to avoid getting stuck in local optima. Moreover, the random greedy initialization method in E2LS can generate high-quality and diverse initial solutions. Thus E2LS also shows significantly better performance than VDLS.
\section{Computational Results}
\label{Sec_Result}
This section presents the experimental results and analyses. We first introduce the benchmark instances used in the experiments and the experimental setup.
\subsection{Benchmark Instances}
We tested E2LS on a total of 78 public SUKP instances with at most 5,000 items or elements, and 90 public BMCP instances with no more than 5,200 items or elements. The 78 tested SUKP instances can be divided into three sets as follows.
\begin{itemize}
\item \textit{Set I:} This set contains 30 instances with 85 to 500 items or elements. This set was proposed in~\citep{He2018} and widely used in~\citep{He2018,Ozsoydan2019,Lin2019,Wei2019,Dahmani2020,Wu2020,Wei2021KBTS,Wei2021MSBTS}.
\item \textit{Set II:} This set contains 30 instances with 585 to 1,000 items or elements. This set was proposed in~\citep{Wei2021KBTS} and used in~\citep{Wei2021KBTS,Wei2021MSBTS}.
\item \textit{Set III:} This set contains 18 instances with 850 to 5,000 items or elements. This set was proposed in~\citep{Zhou2021}.
\end{itemize}
Each instance in \textit{Sets I}, \textit{II}, and \textit{III} is characterized by four parameters: the number of items $m$, the number of elements $n$, the density of the relation matrix $\alpha = (\sum_{j=1}^m{\sum_{k=1}^n{R_{jk}}})/(mn)$, and the ratio of the knapsack capacity $C$ to the total weight of the elements $\beta = C/\sum_{k=1}^n{w_k}$. The name of a SUKP instance consists of these four parameters. For example, \textit{sukp\_85\_100\_0.10\_0.75} represents a SUKP instance with 85 items, 100 elements, $\alpha = 0.10$, and $\beta = 0.75$.
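The two instance parameters can be computed directly from their definitions in the text; the snippet below does so on a toy relation matrix (the data are illustrative, not an actual benchmark instance).

```python
# Toy relation matrix: R[j][k] = 1 iff item j+1 covers element k+1.
R = [
    [1, 0, 1],   # item 1 covers elements 1 and 3
    [0, 1, 0],   # item 2 covers element 2
]
w = [4, 6, 10]   # element weights
C = 15           # knapsack capacity

m, n = len(R), len(R[0])
alpha = sum(map(sum, R)) / (m * n)  # density of the relation matrix
beta = C / sum(w)                   # capacity-to-total-weight ratio
```

Here `alpha` evaluates to 3/6 = 0.5 and `beta` to 15/20 = 0.75, matching the naming convention of an instance such as \textit{sukp\_m\_n\_0.50\_0.75}.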
The 90 tested BMCP instances can be divided into three sets as follows.
\begin{itemize}
\item \textit{Set A:} This set contains 30 instances with 585 to 1,000 items or elements. This set was proposed in~\citep{Li2021} and used in~\citep{Li2021,Zhou2022}.
\item \textit{Set B:} This set contains 30 instances with the number of items or elements ranging from 1,000 to 1,600. This set was proposed in~\citep{Zhou2022}.
\item \textit{Set C:} This set contains 30 instances with the number of items or elements ranging from 4,000 to 5,200. This set was proposed in~\citep{Zhou2022}.
\end{itemize}
Each instance in \textit{Set A} is characterized by four parameters: the number of items $m$, the number of elements $n$, the knapsack capacity (budget) $C$, and the density of the relation matrix $\alpha$. The instances in \textit{Sets B} and \textit{C} have more complex structures than those in \textit{Set A}. Zhou et al.~\citep{Zhou2022} created these instances by first randomly grouping the items and elements, and then determining their connections according to a parameter $\rho$ that represents the density of the relation matrix of each group. Each instance in \textit{Sets B} and \textit{C} is characterized by the parameters $m,n,C$, and $\rho$.
\subsection{Experimental Setup}
We first introduce the baseline algorithms we selected. For the SUKP, we select some of the state-of-the-art heuristic algorithms, including ATS-DLA~\citep{Zhou2021}, MSBTS~\citep{Wei2021MSBTS}, KBTS~\citep{Wei2021KBTS}, and I2PLS~\citep{Wei2019}, as the baseline algorithms. For the BMCP, we select the heuristic algorithms PLTS~\citep{Li2021} and VDLS~\citep{Zhou2022} as the baseline algorithms. To our knowledge, they are also the only two heuristic algorithms for the BMCP.
The E2LS algorithm and the baseline algorithms were implemented in C++ and compiled by g++. All the experiments were performed on a server with an Intel® Xeon® E5-1603 v3 2.80GHz CPU, running the Ubuntu 16.04 Linux operating system. The parameters in E2LS include the sampling times $t$, and the maximum sizes of the candidate sets of the items to be removed/added, $r_{num}/a_{num}$. We tuned these parameters according to our experience. The default value of $t$ is set to $\sqrt{\max\{m,n\}}$. For the parameters $r_{num}$ and $a_{num}$, we set different default values when solving the SUKP and BMCP, since the properties of these two problems are different. Specifically, for the SUKP, we set the default values as $r_{num} = 2$ and $a_{num} = 2$. For the BMCP, we set $r_{num} = 5$ and $a_{num} = 5$. The detailed reasons why their default values differ between the SUKP and BMCP, as well as the influence of these parameters on the performance of E2LS, are presented in Section \ref{Sec_para}.
We set the cut-off time for each algorithm to be 500 seconds for the instances in \textit{Set I} as~\citep{Wei2021MSBTS} did, 1,000 seconds for the instances in \textit{Sets II} and \textit{III} as~\citep{Wei2021MSBTS,Zhou2021} did, 600 seconds for the instances in \textit{Set A} as~\citep{Li2021} did, and 1,800 seconds for the instances in \textit{Sets B} and \textit{C} as~\citep{Zhou2022} did. Each instance was calculated 10 independent times for each algorithm.
\input{table-SUKP-split}
\input{table-BMCP-split}
\input{table-SUKP2}
\input{table-BMCP2}
\subsection{Comparison with the baselines}
The comparison results of E2LS with the baseline algorithms on the three sets of SUKP instances are presented in Tables \ref{table-SUKP1}, \ref{table-SUKP2}, and \ref{table-SUKP3}\footnote{Note that \textit{Set II} and \textit{Set III} contain two instances with the same name but different structures.}, respectively. The comparison results of E2LS with the baseline algorithms on the three sets of BMCP instances are presented in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3}, respectively. In these tables, unique best results appear in bold, while equal best results appear in italics. Tables \ref{table-SUKP1}, \ref{table-SUKP2}, and \ref{table-SUKP3} compare the best solution (objective value), average solution, standard deviation over 10 runs (S.D.), and the average run time in seconds (to obtain the best solution in each run) of each involved algorithm. Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3} present the best solution and average solution of each involved algorithm, coupled with the standard deviations and average run times of E2LS. The results of PLTS and VDLS in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3} are from~\citep{Zhou2022} (they used a machine similar to ours, with an Intel® Xeon® E5-2650 v3 2.30GHz CPU).
Moreover, in order to show the advantage of E2LS over the baselines more clearly, we summarize the comparative results between E2LS and each baseline algorithm in Tables \ref{table-SUKP-compare} and \ref{table-BMCP-compare}. Columns \#Wins, \#Ties, and \#Losses indicate the number of instances for which E2LS obtains a better, equal, and worse result than the compared algorithm according to the best solution and average solution indicators.
As shown by the results in Tables \ref{table-SUKP1}, \ref{table-SUKP2}, \ref{table-SUKP3}, and \ref{table-SUKP-compare}, E2LS does not lose to any baseline algorithm according to the best solution indicator. In fact, E2LS obtains the best-known solution for all 78 tested SUKP instances. E2LS also has good stability and robustness, since the average solutions obtained by E2LS are also excellent and the standard deviations of E2LS are very small. In particular, E2LS obtains three new best-known solutions, for instances \textit{sukp\_1000\_850\_0.15\_0.85}, \textit{sukp\_3000\_2850\_0.10\_0.75}, and \textit{sukp\_3000\_2850\_0.15\_0.85}\footnote{The result 9565 of instance \textit{sukp\_1000\_850\_0.15\_0.85} can also be obtained by MSBTS, but has not been reported in the literature. The result 9207 of instance \textit{sukp\_5000\_4850\_0.10\_0.75} has been reported in~\citep{Zhou2021}.}.
Moreover, no baseline algorithm can solve SUKP instances of all scales well. For example, ATS-DLA shows good performance on the instances in \textit{Sets II} and \textit{III}, since its search operator, which considers only one item per step, makes it efficient for (relatively) large instances. However, ATS-DLA is not good at solving the instances in \textit{Set I}, because its search operator has a small search region, while a large search region is more important than high search efficiency for small instances. The KBTS and MSBTS algorithms show good performance on the instances in \textit{Set I}, because they traverse all the possible moves per step and thus have large search regions. However, KBTS shows worse performance than MSBTS on \textit{Set II}, since solution-based tabu search is better than attribute-based tabu search for the SUKP~\citep{Wei2021MSBTS}. Also, both KBTS and MSBTS show worse performance than ATS-DLA on \textit{Set III}, since their operators that traverse all the possible moves are inefficient for large instances.
In contrast, the proposed E2LS shows excellent performance on all three sets of SUKP instances, because it combines the advantages of the baseline algorithms while remedying their disadvantages. On the one hand, E2LS refines the search region according to the value-weight ratios of the items, so as to abandon the low-quality moves and improve the search efficiency. Thus E2LS works well on \textit{Sets II} and \textit{III}. On the other hand, the proposed operator ADD$^*$ can traverse the refined search region and add the best feasible combination of multiple candidate items into the current solution per iteration, which leads to a large search region. Thus E2LS also works well on \textit{Set I}. In summary, E2LS is efficient and effective because it trades off the search region and the search efficiency well.
As for the comparison results on the BMCP instances shown in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, \ref{table-BMCP3}, and \ref{table-BMCP-compare}, the advantage of E2LS over the BMCP baseline algorithms PLTS and VDLS is more obvious than its advantage over the SUKP baselines. This might be because the SUKP is better studied than the BMCP in terms of heuristic methods. Specifically, E2LS does not lose to the BMCP baselines according to either the best or the average solution indicator, and obtains 7 and 20 new best-known solutions for the instances in \textit{Sets B} and \textit{C}, respectively. Moreover, the average solutions of E2LS are equal to its best solutions on all the tested instances except \textit{bmcp\_5000\_4800\_0.5\_7000}, and E2LS obtains these excellent results within very short run times for most of the tested instances. These results again indicate the excellent stability, robustness, and efficiency of the proposed E2LS algorithm.
\begin{figure*}
\caption{Analyses of the influence of parameters $r_{num}$ and $a_{num}$ on the performance of E2LS on each instance set.}
\label{fig_Para-I}
\label{fig_Para-A}
\label{fig_Para-II}
\label{fig_Para-B}
\label{fig_Para-III}
\label{fig_Para-C}
\label{fig_Para}
\end{figure*}
\input{table-SUKP-para}
\input{table-BMCP-para}
\subsection{Analyses on the parameters}
\label{Sec_para}
We then analyze the influence of the parameters including $r_{num}$, $a_{num}$, and $t$ on the performance of E2LS by comparing E2LS with its variants on each instance set.
We first compare the E2LS algorithm with different values of $r_{num}$ and $a_{num}$. We tested five parameter settings, namely $r_{num} = a_{num} \in \{1,2,5,10,m\}$. The results are shown in Figure \ref{fig_Para}, which compares the average values of the best and average solutions of each algorithm over all the instances in each instance set.
The results in Figure \ref{fig_Para} show that as the parameter values increase from 1 to $m$, the performance of E2LS first improves and then degrades. This result indicates that balancing the search efficiency and the search region is reasonable and necessary. When the parameter values are small (e.g., 1 for the SUKP, or 1 or 2 for the BMCP), E2LS can explore the search space very quickly, but it may easily get stuck in local optima. When the parameter values are large (e.g., 10 or $m$), E2LS can explore the search space widely and deeply, but the low efficiency may also make it hard to escape from local optima.
We can also observe that the properties of the SUKP and BMCP are different. For the SUKP, E2LS with small parameter values is not good at solving the small instances in \textit{Set I}, and E2LS with large parameter values is not good at solving the large instances in \textit{Sets II} and \textit{III}. This result is consistent with the results of the SUKP baseline algorithms, and can explain their performance. That is, the KBTS and MSBTS algorithms, with large and unrefined search regions, are good at solving the small instances in \textit{Set I}, but do not work well on \textit{Set III}. In contrast, ATS-DLA, with a small and overrefined search region, works well on \textit{Sets II} and \textit{III}, but not on \textit{Set I}.
For the BMCP, the instances in \textit{Sets A} and \textit{B} prefer small parameter values to large ones, while the instances in \textit{Set C} prefer large parameter values to small ones. This might be because, for the relatively small instances in \textit{Sets A} and \textit{B} (with 585 to 1,600 items or elements), low search efficiency is more likely to cause the algorithm to get stuck in local optima than a small search region is, whereas the situation for the large instances in \textit{Set C} (with 4,000 to 5,200 items or elements) is the opposite.
Moreover, the best parameter settings differ between the SUKP and BMCP: the best settings of $r_{num}, a_{num}$ are 2 for the SUKP and 5 for the BMCP. This is also because of the different properties of these two problems. For example, when solving the SUKP, items with zero additional weight will not be added into the feasible candidate set $FC$ (which is refined by the parameter $a_{num}$), but are added into the current solution directly by the ADD$^*$ operator if such a move is not prohibited by the tabu list (lines 5-8 in Algorithm \ref{alg_add}). When solving the BMCP, in contrast, there is no item with zero additional weight (value), since the value of each item is positive (in the BMCP, an item with zero value should always be contained in the solution). Therefore, if we set the parameter $a_{num}$ to the same value for both problems, the ADD$^*$ operator can add more items for the SUKP than for the BMCP. Thus it is reasonable to set larger parameter values for the BMCP than for the SUKP.
We further analyze the influence of the parameter $t$ on the performance of E2LS. We denote by E2LS($t = 1$) and E2LS($t = 10$) the two variants of E2LS with the parameter $t$ equal to 1 and 10, respectively. Note that E2LS($t = 1$) actually generates the initial solution randomly. We compare these two variants with E2LS and the baseline algorithms. Tables \ref{table-SUKP-para} and \ref{table-BMCP-para} show the results of the algorithms for the SUKP and BMCP, respectively. The results are the average values of the best and average solutions, the standard deviations, and the average run times of each algorithm over all the instances in each instance set.
As the results show, both E2LS($t = 1$) and E2LS($t = 10$) obtain results competitive with those of E2LS, and their performance is significantly better than that of the baseline algorithms for both the SUKP and BMCP. This indicates that the local search method in E2LS has excellent stability and robustness, as it can yield excellent results from various initial solutions, even randomly generated ones. Moreover, E2LS slightly outperforms E2LS($t = 1$) and E2LS($t = 10$), indicating that higher-quality initial solutions lead to better performance, and that the random greedy initialization method in E2LS is effective.
\section{Conclusion}
\label{Sec_Conclusion}
This paper proposes an efficient and effective local search algorithm called E2LS for the SUKP and BMCP. To our knowledge, this is the first algorithm proposed to solve both of these two closely related problems. E2LS can explore the solution space efficiently by refining the search region, i.e., abandoning the low-quality moves. The proposed ADD$^*$ operator in E2LS can traverse the refined search region and provide high-quality moves quickly. As a result, E2LS trades off the search region and efficiency well, which leads to excellent performance. Such a trade-off mechanism and the approach of traversing the refined search region could be applied to various combinatorial optimization problems.
Extensive experiments on 78 public SUKP instances and 90 public BMCP instances with various scales demonstrate the superiority of the proposed algorithm. In particular, E2LS provides four new best-known solutions for the SUKP, and 27 new best-known solutions for the BMCP.
\section*{Declarations of interest}
None.
\end{document}
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Entry'
db.create_table('press_links_entry', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('author', self.gf('django.db.models.fields.related.ForeignKey')(related_name='press_links_entry_related', to=orm['auth.User'])),
('title', self.gf('django.db.models.fields.CharField')(max_length=255)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=255, db_index=True)),
('pub_date', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
('status', self.gf('django.db.models.fields.IntegerField')(default=2)),
('excerpt', self.gf('tinymce.models.HTMLField')(blank=True)),
('source', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
))
db.send_create_signal('press_links', ['Entry'])
# Adding M2M table for field site on 'Entry'
db.create_table('press_links_entry_site', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('entry', models.ForeignKey(orm['press_links.entry'], null=False)),
('site', models.ForeignKey(orm['sites.site'], null=False))
))
db.create_unique('press_links_entry_site', ['entry_id', 'site_id'])
# Adding model 'Link'
db.create_table('press_links_link', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('link', self.gf('django.db.models.fields.CharField')(max_length=255)),
('link_text', self.gf('django.db.models.fields.CharField')(max_length=255)),
('entry', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['press_links.Entry'])),
))
db.send_create_signal('press_links', ['Link'])
def backwards(self, orm):
# Deleting model 'Entry'
db.delete_table('press_links_entry')
# Removing M2M table for field site on 'Entry'
db.delete_table('press_links_entry_site')
# Deleting model 'Link'
db.delete_table('press_links_link')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'press_links.entry': {
'Meta': {'ordering': "['-pub_date']", 'object_name': 'Entry'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'press_links_entry_related'", 'to': "orm['auth.User']"}),
'excerpt': ('tinymce.models.HTMLField', [], {'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'site': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'press_links_entry_related'", 'symmetrical': 'False', 'to': "orm['sites.Site']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255', 'db_index': 'True'}),
'source': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'press_links.link': {
'Meta': {'object_name': 'Link'},
'entry': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['press_links.Entry']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'link': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'link_text': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['press_links']
Hakuba is located in the Northern Japanese Alps, about an hour away from Nagano City. With 11 ski resorts and an average snowfall of 11m, the Hakuba Valley offers a wide variety of skiing and snowboarding. Boasting over 960 hectares, 137km of piste, 200+ courses, 125 lifts and 9 terrain parks, the valley has something to offer everyone. If the resorts aren't enough, there is a strong backcountry culture in Hakuba, with several English-speaking tour operators available in the valley.
As an international destination, Hakuba offers easy access, with bullet trains from Tokyo to Nagano followed by a short bus trip to Hakuba, and a multitude of bus and private taxi options direct from both Narita and Haneda airports. The town has a very strong traditional Japanese feel, but also offers many services in English and Chinese, catering to a large number of non-Japanese tourists.
The festival of Holi was celebrated with full zeal, enthusiasm and joy at the Pahal Welfare Society. Khabar Himachal Se (News from Himachal)
Chamba: The festival of Holi was celebrated with full zeal, enthusiasm and joy at the Pahal Welfare Society on Monday. With the aim of protecting the environment and conserving water, a Holi of flowers with Radha and Krishna was organised. Society president Archana Plah was also present on the occasion. Trainees, teachers and staff of the Plah institute also enthusiastically played Holi with flowers at the celebration.
During the cultural performances, the whole campus swayed together to songs based on classical, traditional and Bollywood tunes. On this occasion, Archana Plah explained the significance of Holi as it is played in today's environment compared with the traditional Holi of mythology. She pointed out that every year during Holi, people harm the environment by cutting down trees in the name of Holika Dahan, which is wrong.
If You Want to Avoid Obesity, Say Bye-Bye to Snacks
New York: Most people consume tea, coffee and snacks to refresh themselves when they get tired while working at the office. But a new study has found that snacks increase the craving to eat repeatedly, which can make people obese. Many companies also offer free snacks to their employees to boost productivity and morale, which increases obesity among employees.
The results of a study conducted by Saint Joseph's University in the US show that providing free beverages and snacks encourages a mindset among employees of consuming them again and again.
The study also found that employees prefer snacks over beverages. Ernest Baskin, professor at Saint Joseph's University and lead researcher, says it is quite surprising that the choice between snacks and beverages brings a significant change in snacking habits. The study also found that snacking habits are more common among men than among women. The findings of this study were recently published in the journal Appetite.
More than 20 people were arrested Thursday on federal charges connecting them to a series of Medicaid and other health-care schemes that prosecutors say cheated taxpayers out of tens of millions of dollars.
In announcing five federal indictments against 12 people and related D.C. Superior Court charges against 13 people, U.S. Attorney Ronald C. Machen Jr. said federal law enforcement officials had pulled off the largest health-care fraud takedown “in the history of the District of Columbia.” Prosecutors, FBI agents and others had investigated the fraud for years — much of it allegedly emanating from corrupt operators of home-care agencies and personal-care assistants, he said — and uncovered a problem that they say has permeated a component of the city’s health-care system.
Those charged are accused of running separate — and sometimes competing — schemes to bill the government for in-home health-care services that were not provided. A Bowie woman also was accused of racking up more than $75 million in billings after she had been barred from being a part of federal health-care programs.
According to the indictments, the alleged swindlers — who included operators and workers at home-care and nurse-staffing agencies — recruited Medicaid recipients to claim that they needed expensive in-home health care, even coaching them on how to exaggerate their symptoms to persuade doctors to sign off on treatment plans. The accused would then pose as the Medicaid recipients’ personal-care assistants, having them sign time sheets while they billed the government, according to the indictments.
The Medicaid recipients, though, did not need in-home care, according to the indictments. Those charged merely gave them a cut of the proceeds — usually a few hundred dollars at a time — and pocketed the rest, according to the indictments.
Machen said the alleged swindlers were able to take “millions” from the District’s Medicaid program — which is mostly federally funded but run by the city — although investigators were still calculating the precise amount. He said law enforcement officials were partially inspired to investigate the fraud when they noticed an unusual increase in the amount of money being paid out for personal-care services.
In 2006, Machen said, 2,500 Medicaid beneficiaries billed the government $40 million for personal care services. By 2013, he said, 10,000 beneficiaries were billing the government for $280 million.
Prosecutors said that as part of the probe, they seized several luxury cars, including a 2013 Mercedes GL450 and a 2013 Porsche Panamera.
The investigation was not limited to networks of those billing for in-home services they did not provide. Prosecutors said they also charged the 51-year-old Bowie woman, Florence Bikundi, alleging that she had racked up more than $75 million in D.C. Medicaid billings after she was barred from participating in federal health-care programs.
Bikundi, prosecutors said, used various names to hide the fact that her nursing licenses had been revoked and that her health-care companies were not eligible to receive Medicare or Medicaid payments. Machen said that investigators are still looking into whether Bikundi provided the services for which she billed the government.
Online court records did not list an attorney for Bikundi on Thursday afternoon, and efforts to locate family members were unsuccessful.
\begin{document}
\draft
\title{Long beating wavelength in the Schwarz-Hora effect}
\author{Yu. N. Morokov \cite{email}}
\address{
Institute of Computational Technologies, Siberian Branch of \\
the Russian Academy of Sciences, Novosibirsk 630090, Russia}
\maketitle
\begin{abstract}
The quantum-mechanical interpretation of the long-wavelength
spatial beating of the light intensity in the Schwarz-Hora effect
is discussed. A more accurate expression for the spatial period
has been obtained, taking into account the mode structure of the
laser field within the dielectric film. It is shown that the
discrepancy of more than 10$\%$ between the experimental and
theoretical results for the spatial period cannot be reduced by
using the existing models. More detailed experimental information
is necessary to clear up the situation.
\end{abstract}
\pacs{PACS numbers: 03.65.Pm, 03.30.+p, 78.70.-g, 41.90.+e}
\narrowtext
In 1969, Schwarz and Hora \cite{1} reported the results of an
experiment in which a 50-keV beam of electrons passed through a
thin crystalline film of SiO$_2$, Al$_2$O$_3$, or SrF$_2$
irradiated with laser light. Electrons produced the usual
electron-diffraction pattern at a fluorescent target. However,
the diffraction pattern was also observed at a nonfluorescent
target \cite{1,2,3} (the Schwarz-Hora effect). In this case the
pattern was roughly of the same color as the laser light. The
effect was absent if the electrical vector of the polarized laser
light was parallel to the film surfaces. When changing the
distance between the thin crystalline film and the target, a
periodic change in the light intensity was observed with spatial
period of the order of centimeters \cite{2}. The Schwarz-Hora
effect was discussed extensively in the literature in the early
1970s. The latest review can be found in Ref. \cite{4}.
The reported quantitative results \cite{1,2,3,4} were obtained
for films of about 1000 $\AA $ thickness. The films were
illuminated by 10$^7$-W/cm$^2$ argon-ion laser radiation
($\lambda _p$= 4880 $\AA $) propagating perpendicular to the
electron beam, whose current was about 0.4 $\mu $A. These values
will be used below for numerical estimates.
The quantum-mechanical treatment of the problem was made in the
one-electron \cite{2,4,5,6,7,8} and many-electron \cite{9,10,11}
approximations. One problem unresolved up to now is connected
with the theoretical interpretation of the relatively high
intensity of the Schwarz-Hora radiation (at least of the order of
10$^{-10}$ W). The calculated radiated power turns out to be at
least 10$^3$ times smaller than the observed power
\cite{4,7,9,10,11,12}. The other problem is connected with the
strong dependence of the Schwarz-Hora radiation intensity on the
laser light polarization \cite{2,4,9}. An explanation of this
dependence is absent too. In the following discussion, we do not
consider these two problems.
In this Brief Report we consider only the more transparent
problem connected with the interpretation of the long-wavelength
spatial modulation of the Schwarz-Hora radiation
\cite{2,4,5,6,7,9,10,11,13,14}. The one-particle and
many-particle models lead to the same expression for the long
beating wavelength. At first sight, there is even a good
quantitative agreement with experiment \cite{14}. However, as we
shall see below, this agreement is accidental. Moreover, there is
the discrepancy of more than 10$\%$ that cannot be reduced on
the basis of the existing quantum theories.
Let the $z$ axis be directed along the incident electron beam.
The laser beam is along the $x$ axis. The electrical vector of
the laser light is in the $z$ direction. Electrons pass through
the dielectric slab restricted by the planes $z = -d$ and $z =
0$. We consider without loss of generality only the central
outgoing electron beam (zeroth-order diffraction).
Usually the following assumptions are used: An electron interacts
with the light wave only within the slab; it interacts within the
slab only with the light wave; the spin effects can be neglected.
In the simplest case the light field within the slab and
incident electrons are represented by plane waves.
Using these assumptions, consider the origin of the
long-wavelength spatial modulation in the one-electron quantum
theory. The solution of the Klein-Gordon equation to first order
in the light field (see, for example, Refs. \cite{5,7}) gives the
following expression for the electron probability density for $z
> 0$:
\begin{eqnarray}
\rho (x,z,t) = \rho _0 \biggl\{ 1-\beta \sin \left[
\frac{z}{2\hbar } (2p_0-p_{1z}-p_{-1z})\right] \nonumber \\
\times \sin \left( \frac{\pi d}{2d_0}\right)
\cos \left[ kx - \omega t+\frac{z}{2\hbar }
(p_{1z}-p_{-1z})\right] \biggl\} .
\label{1}
\end{eqnarray}
Here $\rho _0$ is the probability density for the initial
incident electron beam and $\omega $ and $k$ denote the circular
frequency and the wave number of the light wave inside the slab.
The parameter $\beta $ is proportional to the amplitude of the
laser field and $d_0$ is the smallest optimum value of the slab
thickness. For the conditions of the Schwarz experiments, these
parameters are $\beta $= 0.35 (for $\alpha $-quartz) and $d_0$ =
1007 $\AA $. The $z$ components of the momentum $p_{nz}$ are
determined for free electrons of energy $E_n$ and momentum
${\bbox p}_n$ from the relativistic relationship
\begin{eqnarray}
E_n^2 = m^2c^4 + {\bbox p}_n^2c^2,
\label{2}
\end{eqnarray}
\begin{eqnarray*}
E_n = E_0 + n\hbar \omega , \quad
p_{nx} = n\hbar k, \quad
n = 0,\pm 1.
\end{eqnarray*}
Here $m$ is the electron mass.
The probability that an electron absorbs or emits a photon inside
the dielectric slab is a periodic function of the slab thickness.
This is indicated by the second sine term in Eq.\ (\ref{1}). The
experimental data on such dependence of the Schwarz-Hora
radiation are absent in the literature. The cosine term
represents the optical modulation of the electron beam. The
first sine term in Eq.\ (\ref{1}) is a function of the distance
$z$ between the slab and the target and represents the stationary
modulation of the electron probability density. On equating the
phase of this sine to 2$\pi z/\lambda _b$, we obtain the
expression for the spatial beating wavelength (the same
expression is obtained in the many-electron treatment
\cite{9,10,11})
\begin{eqnarray}
\lambda _b = \frac{4\pi\hbar }
{2p_0-p_{1z}-p_{-1z}}.
\label{3}
\end{eqnarray}
Taking into account Eq.\ (\ref{2}) and that the ratio $\hbar
\omega /E_0$ is very small, this expression can be rewritten as
\cite{7}
\begin{eqnarray}
\lambda _b = \lambda _{b0}\frac{1}{1-(\frac {v_0}{c})^2(1-n^2)}.
\label{4}
\end{eqnarray}
Here $n = k c/\omega $ is the refractive index of the dielectric
slab and
\begin{eqnarray}
\lambda _{b0} = 2\lambda _p\left( \frac{E_0}{\hbar \omega }
\right) \left( \frac{v_0}{c}\right) ^3.
\label{5}
\end{eqnarray}
It may be assumed that the quantity $E_0-mc^2$ = 50 keV (the
average energy of incident electrons) was sufficiently well fixed
in the Schwarz experiments. Therefore, the ratio of the initial
electron velocity to the velocity of light in vacuum is $v_0/c$ =
0.4127 and $E_0/\hbar \omega = 2.208\times 10^5$. Then
\begin{eqnarray}
\lambda _{b0} = 1.515 \enskip {\rm cm}.
\label{6}
\end{eqnarray}
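As an independent numerical cross-check (not part of the original derivation), the value in Eq.\ (\ref{6}) can be reproduced from Eq.\ (\ref{5}) and the experimental parameters quoted above; the electron rest energy $mc^2 = 511$ keV is a standard constant assumed here:

```python
import math

# Cross-check of Eqs. (5)-(6): 50-keV electrons, argon laser at 4880 A.
MC2_KEV = 511.0          # electron rest energy (assumed standard value), keV
E_KIN_KEV = 50.0         # kinetic energy of the incident electrons, keV
LAMBDA_P_CM = 4880e-8    # laser wavelength in vacuum, cm
HC_EV_NM = 1239.84       # h*c in eV*nm (assumed standard value)

E0 = MC2_KEV + E_KIN_KEV                        # total energy E_0, keV
beta = math.sqrt(1.0 - (MC2_KEV / E0) ** 2)     # v_0/c from E_0 = gamma*m*c^2
photon_ev = HC_EV_NM / 488.0                    # photon energy hbar*omega, eV
ratio = E0 * 1e3 / photon_ev                    # E_0 / (hbar*omega)

lambda_b0 = 2.0 * LAMBDA_P_CM * ratio * beta ** 3   # Eq. (5), in cm

print(round(beta, 4))        # 0.4127, as quoted in the text
print(round(lambda_b0, 3))   # 1.515 cm, Eq. (6)
```

The computed $v_0/c$ and $E_0/\hbar\omega$ agree with the values quoted in the text.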
In the literature, the following three experimental values for
quantity $\lambda _b$ are presented: 1.70 \cite{2}, 1.75
\cite{13}, and 1.73$\pm $0.01 \cite{14} cm.
The authors of Refs. \cite{2,13,14} did not specify for which of
the three above-mentioned dielectric materials these values had
been determined. Equation \ (\ref{4}) gives the largest value of
$\lambda _b$ for strontium fluoride, $\lambda _b=1.29$ cm. This
material has the smallest value of the refractive index ($n$ =
1.43) among the three materials used. As affirmed in Ref.
\cite{4}, the main material used in the experiments was SiO$_2$.
By using Eq.\ (\ref{4}), we obtain $\lambda _b=1.22$ cm for
$\alpha $-quartz. Thus it appears that the considered
quantum-mechanical model does not give the agreement with
experiment for $\lambda _b$.
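The two plane-wave values of $\lambda _b$ just quoted can be checked directly from Eq.\ (\ref{4}); note that the refractive index of $\alpha $-quartz at 4880 $\AA $ ($n \approx 1.55$) is an assumed value, not stated explicitly in the text, while $n = 1.43$ for SrF$_2$ is taken from the text:

```python
def lambda_b_plane_wave(n, beta=0.4127, lambda_b0=1.515):
    """Eq. (4): beating wavelength (cm) for a plane light wave in a
    medium of refractive index n; beta = v_0/c and lambda_b0 from Eq. (6)."""
    return lambda_b0 / (1.0 - beta ** 2 * (1.0 - n ** 2))

print(round(lambda_b_plane_wave(1.43), 2))   # SrF2 (n = 1.43)        -> 1.29
print(round(lambda_b_plane_wave(1.55), 2))   # alpha-quartz (assumed) -> 1.22
```

Both outputs match the values stated in the paragraph above.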
The situation, however, can be somewhat improved. As noted in
Refs. \cite{6,15}, only one propagation mode of the light wave
TM$_0$ can be excited within the slab under the experimental
conditions considered. The corresponding wave field can be
represented by a superposition of two traveling plane waves,
propagating at angles $\pm \alpha $ to the $x$ axis. These waves
turn one into another upon total internal reflection at the slab
surfaces. The condition for the appearance of the next mode
TM$_1$ can be written as $d > \lambda _p /(2\sqrt{n^2-1})$. For
$\alpha $-quartz this means $d > 2040$ $\AA $.
In case the light field is represented by one TM mode, the
relativistic quantum-mechanical treatment can be carried out by
analogy with the previous case (see also Ref. \cite{15}). Such
treatment leads to the same sine term for the stationary spatial
modulation as that term in Eq.\ (\ref{1}). We obtain the
following expression for $\lambda _b$:
\begin{eqnarray}
\lambda _b = \lambda _{b0}\frac{1}{1-(\frac {v_0}{c})^2
(1-n^2\cos ^2\alpha )}.
\label{7}
\end{eqnarray}
This formula gives a better value for the spatial beating
wavelength, $\lambda _b = 1.47$ cm, for $\alpha $-quartz if we
suppose that the light field within the slab is represented by
the TM$_0$ mode. However, the condition for total internal
reflection, $n \cos \alpha > 1$, limits the possibility to
improve the agreement between the theory and experiment by using
the formula \ (\ref{7}). This implies that $\lambda _b = \lambda
_{b0}= 1.515$ cm is the upper limit, which cannot be exceeded by
any formal optimization of the parameters $n$ and $d$.
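As a sketch (using the assumed $\alpha $-quartz index $n \approx 1.55$ and the TM$_0$ angle $\alpha = 46^{\circ }$ quoted later in this Report), Eq.\ (\ref{7}) and the upper limit $\lambda _{b0}$ can be verified numerically:

```python
import math

def lambda_b_tm_mode(n, alpha_deg, beta=0.4127, lambda_b0=1.515):
    """Eq. (7): beating wavelength (cm) when the slab field is a TM
    waveguide mode whose partial plane waves travel at +/-alpha to x."""
    c2 = (n * math.cos(math.radians(alpha_deg))) ** 2
    return lambda_b0 / (1.0 - beta ** 2 * (1.0 - c2))

# alpha-quartz (n ~ 1.55 assumed), TM0 internal angle alpha = 46 deg
print(round(lambda_b_tm_mode(1.55, 46.0), 2))   # -> 1.47 cm
# total internal reflection requires n*cos(alpha) > 1, which forces
# lambda_b below the upper limit lambda_b0 = 1.515 cm
```

Since $n\cos\alpha = 1.55 \cos 46^{\circ } \approx 1.08 > 1$, the total-internal-reflection condition holds, and the result stays below $\lambda _{b0}$ as claimed.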
Formally, the values $\lambda _b = 1.70-1.75$ cm can be obtained
by using formula \ (\ref{7}) if we suppose that the dominant role
in the effect is played by some radiation mode. In this case the
laser light simply crosses the slab. However, the angles between
the input laser light and the slab surface must be very large,
$53-63^{\circ }$, in contradiction with the described
experimental conditions.
The wavelength $\lambda _b$ arises in the considered
quantum-mechanical models as a result of the beat among three
plane waves representing free electrons. These waves are
characterized by the quantum numbers $E_n$ and ${\bbox p}_n
(n=0,\pm 1)$. The values of $E_n$ and $p_{nx}$ are determined
uniquely by the conservation of energy and the $x$ component of
quasimomentum in the elementary act of the electron-photon
interaction inside the dielectric slab. Then the values of
$p_{nz}$ are determined by the relativistic relationship
\ (\ref{2}). These factors hold for both the one-particle and
many-particle considerations.
Thus the quantity $\lambda _b$ is determined by the simple but
fundamental propositions of the physical theory. Therefore, we
can conclude that the quantum models that use the electron
plane waves (``one-dimensional'' in terms of Ref. \cite{16}) have
no chance of resolving the discrepancy of more than 10$\%$
between theory and experiment for the quantity $\lambda _b$. This
statement remains valid even if we take into account some
uncertainty of the published experimental data on the parameters
$n$ and $d$.
An attempt to improve the agreement with experiment for
$\lambda _b$ has been made in Ref. \cite{14}. An expression
obtained in Ref. \cite{17} was used for a momentum density of a
light wave in a refracting medium. The agreement has been
obtained at the cost of repudiating the conservation of the $x$
component of quasimomentum in the electron-photon interaction
inside the slab. However, such a step is incorrect because the
slab length in the $x$ direction can be considered infinite for
the conditions of the Schwarz experiments. At the same time, as
noted in Ref. \cite{17}, the quasimomentum must be conserved in a
uniform medium. Finally, the formal agreement with experiment
obtained in Ref. \cite{14} for the case of the plane
light wave loses any sense for the light field represented by the
waveguide mode TM$_0$. Calculation shows that the angle $\alpha $
is sufficiently large ($\alpha = 46^{\circ }$ for $\alpha
$-quartz).
Another contradiction between the theory and experiment can be
added to the ones noted above. The Schwarz experiments definitely
indicate \cite{2,16} that there must be the maximum of the
Schwarz-Hora radiation intensity at the film surface $z$ = 0,
i.e., there must be the cosine instead of the first sine in
formula \ (\ref{1}). This problem was discussed in Ref.
\cite{16}. Then the more rigorous treatment by the same authors
\cite{11} has in fact confirmed that the theory gives the sine in
the dependence of the beating effect on the distance $z$. This is
in accordance also with Ref. \cite{18}. Thus this is one more
reliably established discrepancy between the theory and the
experiment.
In conclusion, the upper limit $\lambda _b$=1.515 cm has been
obtained for the theoretically permissible values of the
spatial beating wavelength for the conditions of the Schwarz
experiments. It does not seem possible to account for the large
discrepancy between this value and the experimental values
($\lambda _b^{\rm expt}=1.70-1.75$ cm) on the basis of the
existing theoretical models. If we add here the other problems
mentioned above (the radiation intensity, the dependence on laser
light polarization, and the initial phase of the spatial
beating), the situation becomes worse. To clear up the situation,
it is desirable to obtain more detailed experimental information,
which ought to include, for instance, the dependence of
$\lambda _b$ on the electron velocity $v_0$ and the refractive
index of the dielectric film. Unfortunately, the results of the
Schwarz experiments have not been reproduced by other groups up
to now. Since 1972, no reports of further attempts by other
groups to repeat those experiments have appeared, while the
failures of the initial attempts were explained by Schwarz in
Ref. \cite{3}.
\begin{references}
\bibitem[*]{email} Electronic address: yura@net.ict.nsc.ru
\bibitem{1} H. Schwarz and H. Hora, Appl. Phys. Lett. $\bf 15$,
349 (1969).
\bibitem{2} H. Schwarz, Trans. NY Acad. Sci. $\bf 33$, 150
(1971).
\bibitem{3} H. Schwarz, Appl. Phys. Lett. $\bf 20$, 148 (1972).
\bibitem{4} H. Hora and P. H. Handel, Adv. Electron. Electron
Phys. $\bf 69$, 55 (1987).
\bibitem{5} L. L. Van Zandt and J. W. Meyer, J. Appl. Phys.
$\bf 41$, 4470 (1970).
\bibitem{6} A. R. Hutson, Appl. Phys. Lett. $\bf 17$, 343 (1970).
\bibitem{7} D. A. Varshalovich and M. I. D'yakonov, Zh. Eksp.
Teor. Fiz. $\bf 60$, 89 (1971) [Sov. Phys. JETP $\bf 33$, 51
(1971)].
\bibitem{8} D. Marcuse, J. Appl. Phys. $\bf 42$, 2255 (1971).
\bibitem{9} C. Becchi and G. Morpurgo, Phys. Rev. D $\bf 4$, 288
(1971).
\bibitem{10} J. Kondo, J. Appl. Phys. $\bf 42$, 4458 (1971).
\bibitem{11} L. D. Favro and P. K. Kuo, Phys. Rev. A $\bf 7$, 866
(1973).
\bibitem{12} B. Ya. Zeldovich, Zh. Eksp. Teor. Fiz. $\bf 61$, 135
(1971) [Sov. Phys. JETP $\bf 34$, 70 (1972)].
\bibitem{13} H. Hora, Phys. Status Solidi B $\bf 42$, 131 (1970).
\bibitem{14} H. Hora, Phys. Status Solidi B $\bf 80$, 143 (1977).
\bibitem{15} J. Bae, S. Okuyama, T. Akizuki, and K. Mizuno, Nucl.
Instrum. Methods Phys. Res. A $\bf 331$, 509 (1993).
\bibitem{16} L.D. Favro, D.M. Fradkin, P.K. Kuo, and W.B.
Rolnick, Appl. Phys. Lett. $\bf 18$, 352 (1971).
\bibitem{17} R. Peierls, Proc. R. Soc. London, Ser. A $\bf 347$,
475 (1976).
\bibitem{18} H. J. Lipkin and M. Peshkin, J. Appl. Phys. $\bf 43$,
3037 (1972).
\end{references}
\end{document}
\begin{document}
\title[Behavior near the origin of $f'(u^\ast)$ in extremal solutions] {Behavior near the origin of $f'(u^\ast)$ in radial singular extremal solutions}
\author{Salvador Villegas}
\thanks{The author has been supported by the Ministerio de Ciencia, Innovaci\'on y Universidades of Spain PGC2018-096422-B-I00 and by the Junta de Andaluc\'{\i}a A-FQM187-UGR18.}
\address{Departamento de An\'{a}lisis
Matem\'{a}tico, Universidad de Granada, 18071 Granada, Spain.}
\email{svillega@ugr.es}
\begin{abstract}
Consider the semilinear elliptic equation $-\Delta u=\lambda f(u)$ in the unit ball $B_1\subset \mathbb{R}^N$, with Dirichlet
data $u|_{\partial B_1}=0$, where $\lambda\geq 0$ is a real parameter and $f$ is a $C^1$ positive, nondecreasing and convex function in $[0,\infty)$ such that
$f(s)/s\rightarrow\infty$ as $s\rightarrow\infty$. In this paper we study the behavior of $f'(u^\ast)$ near the origin when $u^\ast$, the extremal solution of the previous problem associated to $\lambda=\lambda^\ast$, is singular. This answers an open problem posed by Brezis and V\'azquez \cite[Open problem 5]{BV}.
\end{abstract}
\maketitle
\section{Introduction and main results}
Consider the following semilinear elliptic equation, which has been extensively studied:
$$
\left\{
\begin{array}{ll}
-\Delta u=\lambda f(u)\ \ \ \ \ \ \ & \mbox{ in } \Omega \, ,\\
u>0 & \mbox{ in } \Omega \, ,\\
u=0 & \mbox{ on } \partial\Omega \, ,\\
\end{array}
\right. \eqno{(P_\lambda)}
$$
\
\noindent where $\Omega\subset\mathbb R^N$ is a smooth bounded domain,
$N\geq 1$, $\lambda\geq 0$ is a real parameter and the
nonlinearity $f:[0,\infty)\rightarrow \mathbb R$ satisfies
\begin{equation}\label{convexa}
f \mbox{ is } C^1, \mbox{ nondecreasing and convex, }f(0)>0,\mbox{
and }\lim_{t\to +\infty}\frac{f(t)}{t}=+\infty.
\end{equation}
It is well known that there exists a finite positive extremal
parameter $\lambda^\ast$ such that ($P_\lambda$) has a minimal
classical solution $u_\lambda\in C^0(\overline{\Omega})\cap C^2(\Omega)$ if $0<
\lambda <\lambda^\ast$, while no solution exists, even in the weak
sense, for $\lambda>\lambda^\ast$. The set $\{u_\lambda:\, 0<
\lambda < \lambda^\ast\}$ forms a branch of classical solutions
increasing in $\lambda$. Its increasing pointwise limit
$u^\ast(x):=\lim_{\lambda\uparrow\lambda^\ast}u_\lambda(x)$ is a
weak solution of ($P_\lambda$) for $\lambda=\lambda^\ast$, which
is called the extremal solution of ($P_\lambda$) (see
\cite{Bre,BV,Dup}).
The regularity and properties of extremal solutions depend
strongly on the dimension $N$, domain $\Omega$ and nonlinearity
$f$. When $f(u)=e^u$, it was proven that $u^\ast\in L^\infty
(\Omega)$ if $N<10$ (for every $\Omega$) (see \cite{CrR,MP}),
while $u^\ast (x)=-2\log \vert x\vert$ and $\lambda^\ast=2(N-2)$
if $N\geq 10$ and $\Omega=B_1$ (see \cite{JL}). There is an
analogous result for $f(u)=(1+u)^p$ with $p>1$ (see \cite{BV}).
Brezis and V\'azquez \cite{BV} raised the question of determining
the boundedness of $u^\ast$, depending only on the dimension $N$, for general smooth bounded domains $\Omega\subset\mathbb R^N$ and nonlinearities $f$ satisfying (\ref{convexa}). This was proven by Nedev \cite{Ne} when $N\leq3$; by Cabr\'e and Capella \cite{cc} when $\Omega=B_1$ and $N\leq 9$; by Cabr\'e \cite{ca4} when $N=4$ and $\Omega$ is convex; by the author \cite{yo4} when $N=4$; by Cabr\'e and Ros-Oton \cite{caro} when $N\leq 7$ and $\Omega$ is a convex domain “of double revolution”; by Cabr\'e, Sanch\'on, and Spruck \cite{css} when $N=5$ and $\limsup_{t\to\infty}f'(t)/f(t)^{1+\varepsilon}<+\infty$ for every $\varepsilon>0$. Finally, in a recent paper Cabr\'e, Figalli, Ros-Oton and Serra \cite{cfros} solved completely this question by proving that $u^\ast$ is bounded if $N\leq 9$.
Another question posed by Brezis and V\'azquez \cite[Open problem 5]{BV} for singular extremal solutions is the following: What is the behavior of $f'(u^\ast)$ near the singularities? Does it look like $C/r^2$?
This question is motivated by the fact that in the explicit examples $\Omega=B_1$ and $f(u)=(1+u)^p$, $p>1$, or $f(u)=e^u$, one always has $f'(u^\ast (r))=C/r^2$ for a certain positive constant $C$ when the extremal solution $u^\ast$ is singular.
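The exponential example can be checked by hand; the following sketch verifies, for $N=10$, that $u^\ast(r)=-2\log r$ solves $-\Delta u^\ast=\lambda^\ast e^{u^\ast}$ with $\lambda^\ast=2(N-2)$ (the Joseph-Lundgren example cited above), so that $\lambda^\ast f'(u^\ast(r))=2(N-2)/r^2$ is exactly of the form $C/r^2$:

```python
import math

# Explicit radial example: Omega = B_1, f(u) = e^u, N >= 10.
# u*(r) = -2 log r, lambda* = 2(N-2), hence f'(u*(r)) = e^{u*(r)} = 1/r^2.
N = 10
lam_star = 2 * (N - 2)

def radial_laplacian_u(r):
    # Laplacian of u(r) = -2 log r in R^N: u'' + (N-1)/r * u' = -2(N-2)/r^2
    return 2.0 / r**2 - (N - 1) * 2.0 / r**2

for r in (0.5, 0.25, 0.1):
    u = -2.0 * math.log(r)
    # check -Delta u* = lambda* e^{u*} pointwise
    assert abs(-radial_laplacian_u(r) - lam_star * math.exp(u)) < 1e-8
print("PDE check passed")
```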
In this paper we give a negative answer to this question, by showing that, in the case in which $\Omega=B_1$ and $u^\ast$ is singular, we always have $\limsup_{r\to 0}r^2f'(u^\ast(r))\in (0,+\infty)$. However, it is possible to give examples of $f\in C^\infty ([0,+\infty ))$ satisfying (\ref{convexa}) for which $u^\ast$ is singular and $\liminf_{r\to 0}r^2f'(u^\ast(r))=0$. In fact, we exhibit a large family of functions $f\in C^\infty ([0,+\infty ))$ satisfying (\ref{convexa}) for which $u^\ast$ is singular and $f'(u^\ast)$ can have a very oscillating behavior.
\begin{theorem}\label{limsup}
Assume that $\Omega=B_1$, $N\geq 10$, and that $f$ satisfies (\ref{convexa}). Suppose that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded.
Then $\limsup_{r\to 0} r^2 f'(u^\ast (r))\in (0,\infty)$. Moreover
$$\frac{2(N-2)}{\lambda^\ast}\leq \limsup_{r\to 0} r^2 f'(u^\ast (r))\leq \frac{\lambda_1}{\lambda^\ast} ,$$
\noindent where $\lambda_1$ denotes the first eigenvalue of the linear problem $-\Delta v=\lambda v$ in $B_1\subset {\mathbb R}^N$ with Dirichlet conditions $v=0$ on $\partial B_1$.
\end{theorem}
\begin{theorem}\label{liminf}
Assume that $\Omega=B_1$, $N\geq 10$, and that $\varphi :(0,1)\rightarrow {\mathbb R^+}$ satisfies $\lim_{r\to 0} \varphi (r)=+\infty$. Then there exists $f\in C^\infty([0,+\infty))$ satisfying (\ref{convexa}) such that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded and
$$\liminf_{r\to 0} \frac{f'(u^\ast (r))}{\varphi (r)}=0.$$
\end{theorem}
Note that in the case $\varphi(r)=1/r^2$, we would obtain $\liminf_{r\to 0} r^2 f'(u^\ast (r))=0$. This answers \cite[Open problem 5]{BV} in the negative. In fact, $r^2 f'(u^\ast (r))$ can be highly oscillating, as the next result shows.
\begin{theorem}\label{oscillation}
Assume that $\Omega=B_1$, $N\geq 10$, and let $0\leq C_1\leq C_2$, where $C_2\in[2(N-2),(N-2)^2/4]$. Then there exists $f\in C^\infty([0,+\infty))$ satisfying (\ref{convexa}) such that the extremal solution $u^\ast$ of $(P_\lambda)$ is unbounded, $\lambda^\ast=1$ and
$$\liminf_{r\to 0} r^2 f'(u^\ast (r))=C_1, $$
$$\limsup_{r\to 0} r^2 f'(u^\ast (r))=C_2. $$
\end{theorem}
Note that if $C_1=C_2$, then the interval $[2(N-2),(N-2)^2/4]$ is optimal: $C_2\geq 2(N-2)$ by Theorem \ref{limsup}, while $C_1\leq (N-2)^2/4$ by Hardy's inequality.
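As a quick sanity check (illustrative only), the admissible interval $[2(N-2),(N-2)^2/4]$ for $C_2$ is nonempty exactly in the dimensions $N\geq 10$ covered by the theorems:

```python
# The interval [2(N-2), (N-2)^2/4] is nonempty iff 2(N-2) <= (N-2)^2/4,
# i.e. (for N > 2) iff N - 2 >= 8, i.e. N >= 10.
def interval_nonempty(N):
    return 2 * (N - 2) <= (N - 2) ** 2 / 4

print([N for N in range(3, 15) if interval_nonempty(N)])   # [10, 11, 12, 13, 14]
```

This matches the dimension restriction $N\geq 10$ appearing in Theorems \ref{limsup}-\ref{oscillation}.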
\begin{theorem}\label{cualquiera}
Assume that $\Omega=B_1$, $N\geq 11$, and that $\Psi\in C(\overline{B_1}\setminus \{ 0\} )$ is a radially symmetric decreasing function satisfying
$$\frac{2(N-2)}{r^2}\leq \Psi(r) \leq \frac{(N-2)^2}{4 r^2}, \ \ \mbox{ for every } 0<r\leq 1.$$
Then there exists $f\in C^1([0,+\infty))$ satisfying (\ref{convexa}) such that $\lambda^\ast =1$ and
$$f'(u^\ast (x))=\Psi (x), \ \ \mbox{ for every } x\in \overline{B_1}\setminus\{ 0\}.$$
Moreover, this function $f$ is unique up to a multiplicative constant. That is, if $g$ is a function with the above properties, then there exists $\alpha>0$ such that $g=\alpha \, f(\cdot /\alpha)$ (whose extremal solution is $\alpha u^\ast$).
\end{theorem}
\section{Proof of the main results}
First of all, if $\Omega=B_1$, and $f$ satisfies (\ref{convexa}), it is easily seen by the Gidas-Ni-Nirenberg symmetry result that $u_\lambda$, the solution of $(P_\lambda)$, is radially decreasing for $0<\lambda<\lambda^\ast$. Hence, its limit $u^\ast$ is also radially decreasing. In fact $u_r^\ast(r)<0$ for all $r\in (0,1]$, where $u_r$ denotes the radial derivative of a radial function $u$. Moreover, it is immediate that the minimality of $u_\lambda$ implies its stability. Clearly, we can pass to the limit and obtain that $u^\ast$ is also stable, which means
\begin{equation}\label{inequa}
\int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq\int_{B_1} \lambda^\ast f'(u^\ast)\xi^2 \, dx
\end{equation}
\noindent for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$.
On the other hand, differentiating $-\Delta u^\ast =\lambda^\ast f(u^\ast)$ with respect to $r$, we have
\begin{equation}\label{ahiledao}
-\Delta u_r^\ast=\left(\lambda^\ast f'(u^\ast) -\frac{N-1}{r^2}\right) u_r^\ast, \ \ \mbox{ for all }r\in (0,1].
\end{equation}
\begin{proposition}\label{key}
Let $N\geq 3$ and $\Psi:\overline{B_1}\setminus\{ 0\} \rightarrow {\mathbb R}$ be a radially symmetric function satisfying that there exists $C>0$ such that $\vert \Psi (r)\vert /r^2 \leq C$, for every $0<r\leq 1$, and
\begin{equation}\label{ineq}
\int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq\int_{B_1} \Psi\, \xi^2 \, dx
\end{equation}
\noindent for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$.
Then
\begin{enumerate}
\item[i)] The problem
$$
\left\{
\begin{array}{rll}
-\Delta \omega(x)&={\displaystyle \left( \Psi (x)-\frac{N-1}{\vert x\vert^2}\right) \omega (x)} \ \ \ \ \ \ & \mbox{ in } B_1 \, ,\\
\omega (x)&= 1 & \mbox{ on } \partial B_1 \, ,\\
\end{array}
\right. \eqno{(P_\Psi})
$$
\noindent has a unique solution $\omega\in W^{1,2}(B_1)$. Moreover, $\omega$ is radial and strictly positive in $B_1\setminus \{ 0\}$.
\
\item[ii)] If $\Psi_1 \leq \Psi_2$ in $\overline{B_1}\setminus \{ 0\} $ satisfy the above hypotheses and $\omega_i$ $(i=1,2)$ are the solutions of the problems
$(P_{\Psi_i})$ then $\omega_1 \leq \omega_2$ in $\overline{B_1}\setminus \{ 0\}$.
\end{enumerate}
\end{proposition}
\begin{proof}
i) By Hardy's inequality
$$\int_{B_1} \vert \nabla \xi\vert^2 \, dx\geq \frac{(N-2)^2}{4}\int_{B_1}\frac{\xi^2}{\vert x\vert^2} \, dx,$$
\noindent for every $\xi\in C^\infty (B_1)$ with compact support in $B_1$, we can define the functional $I:X\rightarrow {\mathbb R}$ by
$$I(\omega):=\frac{1}{2}\int_{B_1} \vert \nabla \omega \vert^2 dx-\frac{1}{2}\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega^2 dx,$$
\noindent for every $\omega\in X$, where $X=\left\{ \omega:B_1\rightarrow {\mathbb R} \mbox{ such that } \omega-1\in W_0^{1,2}(B_1)\right\} $.
It is immediate that
$$I'(\omega)(v)=\int_{B_1}\nabla \omega \nabla v \, dx-\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega v\, dx\, ; \ \ \ \omega\in X,v\in W_0^{1,2}(B_1).$$
Therefore, to prove the existence of a solution of (P$_\Psi$) it is sufficient to show that $I$ has a global minimum in $X$. To do this, we first prove that $I$ is bounded from below in $X$. Taking $v=\omega -1$ in (\ref{ineq}) and applying the Cauchy–Schwarz inequality, we obtain
$$I(\omega)\geq \frac{1}{2}\int_{B_1}\Psi (\omega-1)^2 dx-\frac{1}{2}\int_{B_1} \left( \Psi-\frac{N-1}{\vert x\vert^2}\right) \omega^2 dx=$$
$$=\frac{1}{2}\int_{B_1} \Psi (-2\omega +1) dx+\frac{1}{2}\int_{B_1} \frac{N-1}{\vert x\vert^2} \omega^2 dx$$
$$\geq\frac{1}{2}\int_{B_1} \frac{-C (2\vert\omega\vert+1)+(N-1)\omega^2}{\vert x\vert^2}dx\geq\frac{1}{2}\int_{B_1}\frac{-C-C^2}{\vert x\vert^2}\, dx.$$
Hence $I$ is bounded from below in $X$. Take $\{ \omega_n\}\subset X$ such that $ I(\omega_n)\rightarrow \inf I$. Let us show that $\{ \omega_n\}$ is bounded in $W^{1,2}$. To this end, taking into account the above inequalities and that $-C(2\vert s\vert+1)+(N-1)s^2\geq -C(2\vert s\vert+1)+2s^2\geq s^2-C-C^2$ for every $N\geq 3$ and $s\in {\mathbb R}$, we have
$$I(\omega_n)\geq \frac{1}{2}\int_{B_1} \frac{-C(2 \vert\omega_n\vert+1)+(N-1)\omega_n^2}{\vert x\vert^2}dx\geq\frac{1}{2}\int_{B_1}\frac{\omega_n^2-C-C^2}{\vert x\vert^2}\, dx.$$
From this, $\int_{B_1}\omega_n^2/\vert x\vert^2$ is bounded. Therefore $\int_{B_1}\Psi\omega_n^2$ is also bounded. From the definition of $I$ we conclude that $\int_{B_1}\vert \nabla \omega_n\vert^2$ is bounded, which clearly implies that $\{ \omega_n\}$ is bounded in $W^{1,2}$.
Since $X$ is a weakly closed subset of $W^{1,2}$, we have that, up to a subsequence, $\omega_n \rightharpoonup \omega_0\in X$. Taking $v=\omega_n-\omega_0$ in (\ref{ineq}) we deduce
\
$I(\omega_n)-I(\omega_0)$
$$=\frac{1}{2}\int_{B_1} \vert \nabla (\omega_n-\omega_0) \vert^2 dx-\frac{1}{2}\int_{B_1} \Psi (\omega_n-\omega_0)^2 dx+\frac{1}{2}\int_{B_1} \frac{(N-1)(\omega_n-\omega_0)^2}{\vert x\vert^2} dx$$
$$+\int_{B_1}\nabla \omega_0\nabla (\omega_n-\omega_0) dx-\int_{B_1}\Psi \omega_0 (\omega_n-\omega_0) dx+\int_{B_1}\frac{(N-1)\omega_0 (\omega_n-\omega_0)}{\vert x\vert^2}dx$$
$$\geq \int_{B_1}\nabla \omega_0\nabla (\omega_n-\omega_0) dx-\int_{B_1}\Psi \omega_0 (\omega_n-\omega_0) dx+\int_{B_1}\frac{(N-1)\omega_0 (\omega_n-\omega_0)}{\vert x\vert^2}dx.$$
Since $\omega_n-\omega_0 \rightharpoonup 0$, taking limit as $n$ tends to infinity in the above inequality we conclude
$$(\inf I)-I(\omega_0)\geq 0,$$
\noindent which implies that $I$ attains its minimum at $\omega_0$. The existence of a solution of (P$_\Psi$) is proven.
To show the uniqueness of the solution, suppose that there exist two solutions $\omega_1$ and $\omega_2$ of the same problem (P$_\Psi$). Then $\omega_2-\omega_1\in W_0^{1,2}$. By (\ref{ineq}) we have
$$0=I'(\omega_2)(\omega_2-\omega_1)-I'(\omega_1)(\omega_2-\omega_1)$$
$$=\int_{B_1} \vert \nabla (\omega_2-\omega_1) \vert^2 dx-\int_{B_1} \Psi (\omega_2-\omega_1)^2 dx+\int_{B_1} \frac{(N-1)(\omega_2-\omega_1)^2}{\vert x\vert^2} dx$$
$$\geq \int_{B_1} \frac{(N-1)(\omega_2-\omega_1)^2}{\vert x\vert^2} dx,$$
\noindent which implies that $\omega_1=\omega_2$. The uniqueness is proven.
The radial symmetry of the solution of (P$_\Psi$) follows easily from the uniqueness of the solution, the radial symmetry of the function $\Psi(x)-(N-1)/\vert x\vert^2$, and the boundary condition of the problem.
Finally, to prove that the solution $\omega$ of (P$_\Psi$) is strictly positive in $B_1\setminus \{ 0\}$ suppose, contrary to our claim, that there exists $r_0\in (0,1)$ such that $\omega(r_0)=0$ (with radial notation). Thus the function $v$ defined by $v=\omega$ in $B_{r_0}$ and $v=0$ in $B_1\setminus \overline{B_{r_0}}$ is in $W_0^{1,2}(B_1)$. By (\ref{ineq}) we have
$$0=I'(\omega)(v)=\int_{B_{r_0}}\vert \nabla \omega\vert^2 dx-\int_{B_{r_0}}\Psi \omega^2 dx+\int_{B_{r_0}}\frac{(N-1)\omega^2}{\vert x\vert^2}dx$$
$$\geq \int_{B_{r_0}}\frac{(N-1)\omega^2}{\vert x\vert^2}dx.$$
Therefore $\omega=0$ in $B_{r_0}$. In particular $\omega(r_0)=\omega'(r_0)=0$ (with radial notation), which implies, by the uniqueness of the corresponding Cauchy problem, that $\omega=0$ in $(0,1]$. This contradicts $\omega(1)=1$.
\
ii) Consider the function $v=(\omega_1-\omega_2)^+=\max\{0,\omega_1-\omega_2\}\in W_0^{1,2}(B_1)$ in the weak formulation of problem (P$_{\Psi_1}$). We have
$$0=\int_{B_1}\left(\nabla \omega_1 \nabla (\omega_1-\omega_2)^+ -\Psi_1 \omega_1 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_1 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx$$
Consider the same function $v=(\omega_1-\omega_2)^+$ in the weak formulation of problem (P$_{\Psi_2}$). Taking into account that $\Psi_1\leq \Psi_2$ and $\omega_2\geq 0$ we obtain
$$ 0=\int_{B_1}\left(\nabla \omega_2 \nabla (\omega_1-\omega_2)^+ -\Psi_2 \omega_2 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_2 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx$$
$$\leq \int_{B_1}\left(\nabla \omega_2 \nabla (\omega_1-\omega_2)^+ -\Psi_1 \omega_2 (\omega_1-\omega_2)^+ +\frac{(N-1)\omega_2 (\omega_1-\omega_2)^+ }{\vert x\vert^2}\right) dx$$
Subtracting the above two expressions, it follows that
$$0\geq\int_{B_1} \vert \nabla (\omega_1-\omega_2)^+ \vert^2 dx-\int_{B_1} \Psi_1 (\omega_1-\omega_2)^{+\, 2} dx+\int_{B_1} \frac{(N-1)(\omega_1-\omega_2)^{+\, 2}}{\vert x\vert^2} dx$$
$$\geq \int_{B_1} \frac{(N-1)(\omega_1-\omega_2)^{+\, 2}}{\vert x\vert^{2}} dx.$$
This implies $(\omega_1-\omega_2)^+=0$. Hence $\omega_1\leq \omega_2$, which is our claim.
\end{proof}
\noindent\textbf{Proof of Theorem \ref{limsup}.}
We first prove that $\lambda^\ast f'(u^\ast(r))\leq \lambda_1/r^2$ for every $r\in (0,1]$. To see this, let $\varphi_1>0$ be the first eigenfunction of the linear problem $-\Delta v=\lambda v$ in $B_1\subset {\mathbb R}^N$ with Dirichlet condition $v=0$ on $\partial B_1$. Then $\int_{B_1} \vert\nabla \varphi_1 \vert^2=\lambda_1 \int_{B_1}\varphi_1^2$. By density, for arbitrary $0<r\leq 1$, we can take in (\ref{inequa}) the radial function $\xi=\varphi_1 (\cdot/r)$ in $B_r$ and $\xi=0$ in $B_1 \setminus \overline{B_r}$. Since $f'$ is nondecreasing and $u^\ast$ is radially decreasing, $f'(u^\ast)$ is radially decreasing. An easy computation shows that
$$ \int_{B_1} \vert\nabla \xi \vert^2=\int_{B_r} \vert\nabla \xi \vert^2=r^{N-2} \int_{B_1} \vert\nabla \varphi_1 \vert^2=\lambda_1 r^{N-2} \int_{B_1}\varphi_1^2\ ,$$
$$\int_{B_1}\lambda^\ast f'(u^\ast) \xi^2=\int_{B_r}\lambda^\ast f'(u^\ast) \xi^2\geq\lambda^\ast f'(u^\ast(r)) \int_{B_r} \xi^2=\lambda^\ast f'(u^\ast(r)) r^N \int_{B_1}\varphi_1^2\ .$$
Combining this with (\ref{inequa}) we obtain the desired conclusion. Consequently $\limsup_{r\to 0} r^2 f'(u^\ast (r))\leq \lambda_1/\lambda^\ast$.
We now prove that $\limsup_{r\to 0} r^2 f'(u^\ast (r))\geq 2(N-2)/\lambda^\ast$. To obtain a contradiction, suppose that there exist $r_0\in (0,1]$ and $\varepsilon>0$ such that
\begin{equation}\label{ves}
\lambda^\ast f'(u^\ast (r))\leq \frac{2(N-2)-\varepsilon}{r^2},
\end{equation}
\noindent for every $r\in (0,r_0]$. Consider now the radial function $\omega (r):=u_r^\ast (r_0\, r)/u_r^\ast (r_0)$, defined in $\overline{B_1}\setminus\{ 0\}$. Applying (\ref{ahiledao}), an easy computation shows that $\omega(1)=1$ and
$$-\Delta \omega (r)=\frac{1}{u_r^\ast (r_0)}r_0^2\left( -\Delta (u_r^\ast (r_0\, r))\right)$$
$$=\frac{1}{u_r^\ast (r_0)}r_0^2\left( \lambda^\ast f'(u^\ast (r_0\, r))-\frac{N-1}{(r_0\, r)^2}\right) u_r^\ast (r_0\, r)=\left( \Psi (r)-\frac{N-1}{r^2}\right)\omega(r),$$
\
\noindent for every $r\in (0,1)$, where $\Psi(r):=r_0^2\lambda^\ast f'(u^\ast (r_0\, r))$. From (\ref{ves}) we obtain $\Psi(r)\leq \Psi_2(r):=(2(N-2)-\varepsilon)/r^2$ for every $r\in (0,1]$. It is easy to check that the solution $\omega_2$ of the problem $(P_{\Psi_2})$ is given by $w_2(r)=r^\alpha$ ($0<r\leq 1$) where
$$\alpha=\frac{2-N+\sqrt{(N-4)^2+4\varepsilon}}{2}.$$
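\noindent Indeed, writing $\omega_2(r)=r^\alpha$ and using the radial form of the Laplacian, the equation $-\Delta \omega_2=\left(\Psi_2-\frac{N-1}{r^2}\right)\omega_2$ reduces to the indicial equation
$$\alpha^2+(N-2)\alpha+(N-3-\varepsilon)=0,$$
\noindent whose discriminant is $(N-2)^2-4(N-3-\varepsilon)=(N-4)^2+4\varepsilon$; among its two roots, the value of $\alpha$ above is the one for which $r^\alpha\in W^{1,2}(B_1)$.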
Therefore, applying Proposition \ref{key}, we can assert that $0<\omega (r)\leq r^\alpha$ for every $r\in (0,1]$. It is clear that $\alpha>-1$. Hence $\omega\in L^1(0,1)$. This gives $u_r^\ast \in L^1(0,r_0)$, which contradicts the unboundedness of $u^\ast$. \qed
\begin{lemma}\label{AB}
Let $N\geq 10$ and $0<A<B\leq 1$. Define the radial function $\Psi_{A,B}:\overline{B_1}\setminus\{ 0\} \rightarrow {\mathbb R}$ by
$$\Psi_{A,B}(r):=\left\{
\begin{array}{ll}
0 & \mbox{ if } 0< r <A \, ,\\ \\
\displaystyle{\frac{2(N-2)}{r^2}} & \mbox{ if } A\leq r\leq B \, ,\\ \\
0 & \mbox{ if } B<r\leq 1.
\end{array}
\right.
$$
Let $\omega[A,B]$ be the unique radial solution of $(P_{\Psi_{A,B}})$. Then
$$\lim_{s \to 0}\int_0^1\omega[s e^{-1/s^3},s](r)dr=+\infty.$$
\end{lemma}
\begin{proof}
We first observe that since $N\geq 10$ we have $2(N-2)\leq (N-2)^2/4$. Hence $0\leq \Psi_{A,B}\leq (N-2)^2/(4r^2)$ for every $0<r\leq 1$. Thus, by Hardy's inequality, $\Psi_{A,B}$ satisfies (\ref{ineq}) and we can apply Proposition \ref{key}.
We check at once that
$$\omega[A,B](r)=\left\{
\begin{array}{ll}\frac{N(N-4)B^{N-2}A^{-2}\ r}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } 0\leq r <A ,\\ \\
\frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}\ r^{3-N}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } A\leq r\leq B , \\ \\
\frac{\left( (N-2)^2B^{N-4}-4A^{N-4}\right) \ r\ +\ 2(N-2)B^N(B^{N-4}-A^{N-4})\ r^{1-N}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})} & \mbox{ if } B<r\leq 1.
\end{array}
\right.
$$
\
To see that $\omega[A,B]$ is the solution of (P$_{\Psi_{A,B}}$) it suffices to observe that $\omega[A,B]\in C^1(\overline{B_1}\setminus\{ 0\})\cap W^{1,2}(B_1)$ satisfies pointwise (P$_{\Psi_{A,B}}$) if $\vert x\vert \neq A,B$.
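In fact, the exponents can be read off from the corresponding indicial equations: in the regions where $\Psi_{A,B}=0$, substituting $\omega=r^\alpha$ into $-\Delta \omega=-\frac{N-1}{r^2}\,\omega$ gives $\alpha^2+(N-2)\alpha-(N-1)=0$, with roots $\alpha=1$ and $\alpha=1-N$ (only $\alpha=1$ being admissible near the origin), while for $A\leq r\leq B$ the same substitution in $-\Delta \omega=\left(\frac{2(N-2)}{r^2}-\frac{N-1}{r^2}\right)\omega$ gives
$$\alpha^2+(N-2)\alpha+(N-3)=0,$$
\noindent with roots $\alpha=-1$ and $\alpha=3-N$. The coefficients in the expression for $\omega[A,B]$ are then determined by the $C^1$ matching conditions at $r=A$ and $r=B$ together with $\omega[A,B](1)=1$.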
On the other hand, taking into account that $r^{3-N}\leq A^{4-N}r^{-1}$ if $A\leq r\leq B$, we have that
$$\omega[A,B](r)\geq \frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}A^{4-N}\ r^{-1}}{(N-2)^2B^{N-4}-4A^{N-4}+2(N-2)B^N(B^{N-4}-A^{N-4})}$$
$$\geq \frac{N(N-2)B^{N-2}\ r^{-1}\ -\ 2NA^{N-4}B^{N-2}A^{4-N}\ r^{-1}}{(N-2)^2B^{N-4}+2(N-2)B^N B^{N-4}}$$
$$=\frac{N(N-4)B^2 \ r^{-1}}{(N-2)^2+2(N-2)B^N} \, ,\ \ \mbox{ if } A\leq r\leq B.$$
\
From this and the positiveness of $\omega[A,B]$ it follows that
$$\int_0^1\omega[A,B](r)dr\geq\int_A^B\omega[A,B](r)dr\geq \int_A^B \frac{N(N-4)B^2 \ r^{-1}}{(N-2)^2+2(N-2)B^N}dr$$
$$=\frac{N(N-4)B^2 \ \log (B/A)}{(N-2)^2+2(N-2)B^N}.$$
\
Taking in this inequality $A=s e^{-1/s^3}$, $B=s$ (for arbitrary $0<s\leq 1$),
it may be concluded that
$$\int_0^1\omega[s e^{-1/s^3},s](r)dr\geq\frac{N(N-4)}{s\left( (N-2)^2+2(N-2)s^N\right)}$$
\
\noindent and the lemma follows.
\end{proof}
\begin{proposition}\label{peasofuncion}
Let $N\geq 10$ and let $\varphi :(0,1)\rightarrow {\mathbb R}^+$ be such that $\lim_{r\to 0} \varphi (r)=+\infty$. Then there exists an unbounded, radially symmetric, decreasing function $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ satisfying
\begin{enumerate}
\item[i)] $\displaystyle{0<\Psi(r)\leq \frac{2(N-2)}{r^2}}$ and $\Psi'(r)<0$ for every $0<r\leq 1$.
\item[ii)] $\displaystyle{\liminf_{r\to 0} \frac{\Psi(r)}{\varphi (r)}=0}$, $\displaystyle{\limsup_{r\to 0} r^2 \Psi (r)=2(N-2)}$.
\item[iii)] $\displaystyle{\int_0^1 \omega(r)dr=+\infty}$, where $\omega$ is the radial solution of (P$_\Psi$).
\end{enumerate}
\end{proposition}
\begin{proof}
Without loss of generality we can assume that $\varphi (r)\leq 2(N-2)/r^2$ for $r\in (0,1]$, since otherwise we can replace $\varphi$ with $\overline{\varphi}=\min\left\{ \varphi, 2(N-2)/r^2\right\} $. It is immediate that $\lim_{r\to 0} \varphi (r)=+\infty$ implies $\lim_{r\to 0} \overline{\varphi}(r)=+\infty$ and that $0\leq \liminf_{r\to 0}\Psi(r)/\varphi (r)\leq\liminf_{r\to 0} \Psi(r)/\overline{\varphi } (r)$.
We begin by constructing, by induction, two sequences $\{x_n\}$, $\{y_n\}\subset (0,1]$ in the following way: $x_1=1$ and, knowing the value of $x_n$ $(n\geq 1)$, take $y_n$ and $x_{n+1}$ such that
$$x_{n+1}<y_n<x_n e^{-1/x_n^3}<x_n,$$
\
\noindent where $y_n\in (0, x_n e^{-1/x_n^3})$ is chosen such that
$$\varphi(y_n)>(n+1)\frac{2(N-2)}{\left(x_n e^{-1/x_n^3}\right)^2},$$
\noindent which is possible since $\lim_{r\to 0} \varphi (r)=+\infty$. The inequality $x_{n+1}<x_n e^{-1/x_n^3}$ for every integer $n\geq 1$ implies that $\{ x_n \}$ is a decreasing sequence tending to zero as $n$ goes to infinity. For this reason, to construct the radial function $\Psi$ in $B_1\setminus \{ 0\}$, it suffices to define $\Psi$ on every interval $[x_{n+1},x_n)=[x_{n+1},y_n)\cup [y_n, x_n e^{-1/x_n^3}]\cup (x_n e^{-1/x_n^3},x_n)$.
First, we define
$$\Psi(r):=\frac{2(N-2)}{r^2}, \ \ \ \mbox{ if } \ \ x_n e^{-1/x_n^3}<r<x_n,$$
$$\Psi (y_n):=\frac{\varphi (y_n)}{n+1}.$$
By the definition of $y_n$ we have that
$$\Psi (y_n)=\frac{\varphi (y_n)}{n+1}>\frac{2(N-2)}{\left(x_n e^{-1/x_n^3}\right)^2}\ \mbox{ and }\ \Psi (y_n)<\varphi(y_n)\leq\frac{2(N-2)}{y_n^2}.$$
Thus, it is a simple matter to see that it is possible to take a decreasing function $\Psi$ in $(y_n, x_n e^{-1/x_n^3}]$ such that $\Psi(r)<2(N-2)/r^2$ and $\Psi'(r)<0$ for $r\in(y_n, x_n e^{-1/x_n^3}]$ and $\Psi \in C^\infty ([y_n,x_n))$.
Finally, we will define similarly $\Psi$ in $[x_{n+1},y_n)$. Taking into account that
$$\Psi (y_n)<\varphi(y_n)\leq\frac{2(N-2)}{y_n^2}<\frac{2(N-2)}{x_{n+1}^2},$$
\noindent we see at once that it is possible to take a decreasing function $\Psi$ in $[x_{n+1}, y_n)$ such that
$$\Psi (x_{n+1})=\frac{2(N-2)}{x_{n+1}^2},$$
$$\partial_r^{(k)} \Psi (x_{n+1})=\partial_r^{(k)} \left(2(N-2)/r^2\right)(x_{n+1}), \ \ \mbox{for every } k\geq 1,$$
$$\Psi(r)<2(N-2)/r^2 \ \mbox{ and } \ \Psi'(r)<0 \ \ \ \mbox{for }r\in(x_{n+1},y_n),$$
$$\Psi \in C^\infty ([x_{n+1},x_n)).$$
Once we have constructed the radial function $\Psi$, it is evident that $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ is an unbounded, radially symmetric, decreasing function satisfying i).
To prove ii) it is sufficient to observe that the sequences $\{x_n \}$, $\{ y_n \}$ tend to zero and satisfy $x_n^2 \Psi(x_n)=2(N-2)$ and $\Psi (y_n)/\varphi(y_n)=1/(n+1)$ for every integer $n\geq 1$.
It remains to prove iii). To this end consider an arbitrary $K>0$. Since $\{x_n \}$ tends to zero, applying Lemma \ref{AB} we can assert that there exists a natural number $m$ such that
$$\int_0^1\omega[x_m e^{-1/{x_m}^3},x_m](r)dr\geq K.$$
\
Observe that $\Psi\geq \Psi_{x_m e^{-1/{x_m}^3},x_m}$. By Proposition \ref{key} it follows that $\omega\geq\omega[x_m e^{-1/{x_m}^3},x_m]$. Thus
$$\int_0^1\omega (r) dr\geq \int_0^1\omega[x_m e^{-1/{x_m}^3},x_m](r)dr\geq K.$$
Since $K>0$ is arbitrary we conclude $\int_0^1\omega (r) dr=+\infty$.
\end{proof}
\
\noindent\textbf{Proof of Theorem \ref{liminf}.}
Consider the function $\Psi$ of Proposition \ref{peasofuncion} and let $\omega$ be the radial solution of $(P_\Psi)$. Since $\Psi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ we obtain $\omega\in C^\infty (\overline{B_1}\setminus \{ 0\})\cap W^{1,2}(B_1)$. Define the radial function $u$ by
$$u(r):=\int_r^1 \omega (t)dt, \ \ 0<r\leq 1.$$
It is obvious that $u\in C^\infty (\overline{B_1}\setminus \{ 0\})$. Since $u'=-\omega$ (with radial notation), we have $u\in W^{2,2}(B_1)\subset W^{1,2}(B_1)$. Moreover, from $\int_0^1 \omega(r)dr=+\infty$ we see that $u$ is unbounded.
On the other hand, since $u'=-\omega<0$ in $(0,1]$ (by Proposition \ref{key}), it follows that $u$ is a decreasing $C^\infty$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Therefore we can define $f\in C^\infty ([0,+\infty))$ by
$$f:=(-\Delta u)\circ u^{-1}.$$
\
We conclude that $u\in W_0^{1,2} (B_1)$ is an unbounded solution of (P$_\lambda$) for $\lambda=1$.
Now, substituting $u_r$ by $-\omega$ in (\ref{ahiledao}) it follows that
$$-\Delta (-\omega)+f'(u)(-\omega)=\frac{N-1}{r^2}(-\omega) \ \ \mbox{ for } 0<r\leq 1.$$
Hence, since $\omega$ is a solution of (P$_\Psi$) we obtain $f'(u)\omega=\Psi \omega$ in $(0,1]$. From $\omega>0$ in $(0,1]$ we conclude that
$$f'(u(x))=\Psi(x)\ \ \ \mbox{ for every } x\in \overline{B_1}\setminus \{ 0\}.$$
We now prove that $f$ satisfies (\ref{convexa}). To do this, we first claim that $\omega'(1)\geq -1$. Since $\Psi\leq 2(N-2)/r^2$, applying Proposition \ref{key} with $\Psi_1=\Psi$ and $\Psi_2=2(N-2)/r^2$, we deduce $\omega_1\leq \omega_2$, where $\omega_1=\omega$ and $\omega_2=r^{-1}$, as is easy to check. Since $\omega_1(1)=\omega_2(1)$ it follows $\omega_1'(1)\geq\omega_2'(1)=-1$, as claimed.
Thus
$$f(0)=f(u(1))=-\Delta u(1)=-u''(1)-(N-1)u'(1)=\omega'(1)+(N-1)\omega(1)$$
$$\geq (-1)+(N-1)>0.$$
On the other hand, since $f'(u(r))=\Psi(r)>0$ for every $r\in (0,1]$ it follows $f'>0$ in $[0,+\infty)$. Moreover $\lim_{s\to+\infty}f'(s)=\lim_{r\to 0}f'(u(r))=\lim_{r\to 0}\Psi(r)=+\infty$, and the superlinearity of $f$ is proven. Finally, to show the convexity of $f$, it suffices to differentiate the expression $f'(u)=\Psi$ with respect to $r$ (with radial notation), obtaining $u'(r)f''(u(r))=\Psi'(r)$ in $(0,1]$. Since $u'<0$ and $\Psi'<0$ we obtain $f''(u(r))>0$ in $(0,1]$, which gives the convexity of $f$ in $[0,+\infty)$.
Finally, we show that $u$ is a stable solution of $(P_\lambda)$ for $\lambda=1$. Since $N\geq 10$ then $2(N-2)\leq (N-2)^2/4$, hence
$$f'(u(r))=\Psi(r)\leq \frac{2(N-2)}{r^2}\leq \frac{(N-2)^2}{4r^2}\ \ \mbox{ for every } 0<r\leq 1.$$
Thus, by Hardy's inequality, we conclude that $u$ is a stable solution of $(P_\lambda)$ for $\lambda=1$.
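Explicitly, stability amounts to the nonnegativity of the quadratic form
$$Q_u(\xi):=\int_{B_1}\vert\nabla\xi\vert^2\, dx-\int_{B_1} f'(u)\,\xi^2\, dx,\qquad \xi\in C_c^\infty(B_1),$$
\noindent and, by the previous bound and Hardy's inequality,
$$Q_u(\xi)\geq \int_{B_1}\vert\nabla\xi\vert^2\, dx-\frac{(N-2)^2}{4}\int_{B_1}\frac{\xi^2}{\vert x\vert^2}\, dx\geq 0.$$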
On the other hand, in \cite[Th. 3.1]{BV} it is proved that if $f$ satisfies (\ref{convexa}) and $u\in W_0^{1,2}(\Omega)$ is an unbounded stable weak solution of ($P_\lambda$) for some $\lambda>0$, then $u=u^\ast$ and $\lambda=\lambda^\ast$. Therefore we conclude that $\lambda^\ast=1$, $u^\ast=u$ and
$$\liminf_{r\to 0}\frac{f'(u^\ast(r))}{\varphi (r)}=\liminf_{r\to 0}\frac{\Psi(r)}{\varphi(r)}=0.$$ \qed
\noindent\textbf{Proof of Theorem \ref{oscillation}.}
Take $\varphi(r)=1/r^2$, $0<r\leq1$, and consider the function $\Psi$ of Proposition \ref{peasofuncion}. Define
$$\Phi(r):=\frac{C_2-C_1}{2(N-2)}\Psi(r)+\frac{C_1}{r^2},$$
\noindent for every $0<r\leq 1$. Then it follows easily that $\Phi\in C^\infty (\overline{B_1}\setminus \{ 0\})$ is an unbounded radially symmetric decreasing function satisfying
\begin{enumerate}
\item[i)] $\displaystyle{\Psi(r)\leq\Phi(r)\leq \frac{(N-2)^2}{4r^2}}$ and $\Phi'(r)<0$ for every $0<r\leq 1$.
\item[ii)] $\displaystyle{\liminf_{r\to 0} r^2 \Phi(r)=C_1}$, $\displaystyle{\limsup_{r\to 0} r^2 \Phi (r)=C_2}$.
\item[iii)] $\displaystyle{\int_0^1 \varpi(r)dr=+\infty}$, where $\varpi$ is the radial solution of (P$_\Phi$).
\end{enumerate}
Note that iii) follows from Proposition \ref{key}, Proposition \ref{peasofuncion} and the fact that $\varpi\geq\omega$, where $\omega$ is the radial solution of $(P_\Psi)$.
The rest of the proof is very similar to that of Theorem \ref{liminf}. Since $\Phi\in C^\infty(\overline{B_1}\setminus \{ 0\})$ we obtain $\varpi\in C^\infty (\overline{B_1}\setminus \{ 0\})\cap W^{1,2}(B_1)$. Define the radial function $u$ by
$$u(r):=\int_r^1 \varpi (t)dt, \ \ 0<r\leq 1.$$
Analysis similar to that in the proof of Theorem \ref{liminf} shows that $u\in W^{2,2}$ is a decreasing $C^\infty$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Defining again $f:=(-\Delta u)\circ u^{-1}$, we obtain $f\in C^\infty ([0,+\infty))$. Thus $u\in W_0^{1,2} (B_1)$ is an unbounded solution of $(P_\lambda )$ for $\lambda=1$. It remains to prove that $f$ satisfies (\ref{convexa}). At this point, the only difference with respect to the proof of Theorem \ref{liminf} is that $\Phi(r)\leq\Psi_2(r):=(N-2)^2/(4r^2)$ implies that $\varpi\leq\omega_2$, where $\omega_2(r)=r^{-N/2+\sqrt{N-1}+1}$ is the solution of the problem $(P_{\Psi_2})$. Hence $\varpi'(1)\geq\omega_2'(1)=-N/2+\sqrt{N-1}+1$. Therefore
$$f(0)=f(u(1))=-\Delta u(1)=-u''(1)-(N-1)u'(1)=\varpi'(1)+(N-1)\varpi(1)$$
$$\geq (-N/2+\sqrt{N-1}+1)+(N-1)>0.$$
The rest of the proof runs as before. \qed
\
\noindent\textbf{Proof of Theorem \ref{cualquiera}.}
Since $0<\Psi\leq (N-2)^2/(4r^2)$ we have that $\Psi$ satisfies the hypothesis of Proposition \ref{key}. Thus we can consider the solution $\omega$ of the problem $(P_\Psi)$. From $\Psi\in C(\overline{B_1}\setminus \{ 0\})$ it follows that $\omega\in C^2(\overline{B_1}\setminus \{ 0\})\cap W^{1,2} (B_1)$. On the other hand, since $\Psi(r)\geq \Psi_1(r):=2(N-2)/r^2$ for $0<r\leq 1$, we have that $\omega(r)\geq\omega_1(r):=r^{-1}$ for $0<r\leq 1$, where we have used that $\omega_1$ is the solution of $(P_{\Psi_1})$ and applied Proposition \ref{key}. Define the radial function $u$ by
$$u(r):=\int_r^1 \omega (t)dt, \ \ 0<r\leq 1.$$
Therefore $u(r)\geq\vert\log r\vert$ for $0<r\leq 1$. In particular, $u$ is unbounded. From what has been proved it follows that $u\in C^3(\overline{B_1}\setminus \{ 0\})\cap W^{2,2} (B_1)$. Hence (with radial notation) $u$ is a decreasing $C^3$ diffeomorphism between $(0,1]$ and $[0,+\infty)$. Thus we can define $f\in C^1 ([0,+\infty))$ by
$$f:=(-\Delta u)\circ u^{-1}.$$
Analysis similar to that in the proof of Theorems \ref{liminf} and \ref{oscillation} shows that $f$ satisfies (\ref{convexa}), $\lambda^\ast=1$ and $u=u^\ast$.
Finally, to prove that $f$ is unique up to a multiplicative constant, suppose that $g$ is a function satisfying (\ref{convexa}), $\lambda^\ast=1$ and $g'(v^\ast(x))=\Psi (x)$, for every $x\in \overline{B_1}\setminus \{ 0\}$, where $v^\ast$ is the extremal solution associated to $g$. From (\ref{ahiledao}) we see that
$$ -\Delta v_r^\ast=\left(g'(v^\ast) -\frac{N-1}{r^2}\right) v_r^\ast, \ \ \mbox{ for all }r\in (0,1].$$
It follows immediately that $v_r^\ast (r)/v_r^\ast (1)$ is the solution of the problem $(P_\Psi)$. Since this problem has a unique solution we deduce that $v_r^\ast (r)/v_r^\ast (1)=\omega(r)=-u_r^\ast(r)$, for every $r\in (0,1]$. Thus $v_r^\ast =\alpha u_r^\ast$ for some $\alpha >0$, which implies, since $v^\ast(1)=u^\ast(1)=0$, that $v^\ast =\alpha u^\ast$. The proof is completed by showing that
$$g(v^\ast (x))=-\Delta v^\ast (x)=\alpha (-\Delta u^\ast (x))=\alpha f(u^\ast(x))=\alpha f(v^\ast (x)/\alpha),$$
\noindent for every $x\in\overline{B_1}\setminus \{ 0\}$ and taking into account that $v^\ast\left(\overline{B_1}\setminus \{ 0\}\right)=[0,+\infty)$. \qed
\end{document}
\begin{document}
\title{A Proof Theory for Model Checking: An Extended Abstract}
\begin{abstract}
While model checking has often been considered as a practical
alternative to building formal proofs, we argue here that the theory
of sequent calculus proofs can be used to provide an appealing
foundation for model checking. Since the emphasis of model checking
is on establishing the truth of a property in a model, we rely on the
proof theoretic notion of additive inference rules, since such rules
allow provability to directly describe truth
conditions. Unfortunately, the additive treatment of quantifiers
requires inference rules to have infinite sets of premises and the
additive treatment of model descriptions provides no natural notion of
state exploration. By employing a focused proof system, it is
possible to construct large scale, synthetic rules that also qualify
as additive but contain elements of multiplicative inference. These
additive synthetic rules---essentially rules built from the
description of a model---allow a direct treatment of state
exploration. This proof theoretic framework provides a natural
treatment of reachability and non-reachability problems, as well as
tabled deduction, bisimulation, and winning strategies.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Model checking was introduced in the early 1980's as a way to
establish properties about (concurrent) computer programs that were
hard or impossible to establish using traditional, axiomatic proof
techniques such as those described by Floyd and Hoare \cite{emerson08bmc}.
In this extended abstract we show that model checking can be given a proof
theoretic foundation using the sequent
calculus of Gentzen \cite{gentzen35}, the linear logic
of Girard \cite{girard87tcs}, and a treatment of fixed points
\cite{baelde12tocl,baelde07lpar,mcdowell00tcs,tiu12jal}.
The main purpose of this extended abstract
is foundational and conceptual.
Our presentation will not shed any new light on the algorithmic
aspects of model checking but it will show how model checkers can be
seen as having a ``proof search'' foundation shared with logic
programming and (inductive) theorem proving.
Since the emphasis of model checking is on establishing the truth of a
property in a model, a natural connection with proof theory is via the
use of \emph{additive} connectives and their inference rules.
We illustrate in Section~\ref{sec:prop} how the proof theory of
additive connectives naturally leads to the usual notion of
truth-table evaluation for propositional connectives.
Relying only on additive connectives, however, fails to provide an
adequate inference-based approach to model checking since it only
rephrases truth-functional semantic conditions and requires rules with
potentially infinite sets of premises.
The proof theory of sequent calculus contains additional inference
rules, namely, the \emph{multiplicative} inference rules which can be
used to encode much of the algorithmic aspects of model checking such as, for
example, those related to determining reachability and simulation
(or winning strategies).
In order to maintain a close connection between model checking and
truth in model, we shall put additive inference rules back in the
center of our framework but this time these rules will be
additive \emph{synthetic} inference rules.
The synthesizing process will allow multiplicative connectives and
inference rules to appear \emph{inside} the construction of synthetic
rules but they will not appear \emph{outside} such synthetic rules.
The construction of synthetic inference rules will be governed by the
well established proof theoretic notions of \emph{polarization} and
\emph{focused proof systems} \cite{andreoli92jlc,girard91mscs}.
The connection between the proof theory based on such synthetic
inference rules and model checking steps is close enough that
certificates for both reachability and non-reachability as well as
bisimulation and non-bisimulation are representable as sequent calculus
proofs.
\section{The basics of the sequent calculus}
\label{sec:basics}
Let $\Delta$ and $\Gamma$ range over \emph{multisets}
of formulas.
A \emph{sequent} is either one-sided, written $\seq{}{\Delta}$, or
two-sided, written $\seq{\Gamma}{\Delta}$ (we first consider two-sided
sequents in Section~\ref{sec:hypotheticals}).
Inference rules have one sequent as their conclusion and zero or more
sequents as premises.
We divide inference rules into three groups: the \emph{identity}
rules, the \emph{structural} rules, and the \emph{introduction} rules.
The following are the two structural rules and two identity rules we consider.
\[
\infer[\hbox{weaken}]{\seq{}{B,\Delta}}{\seq{}{\Delta}}
\qquad
\infer[\hbox{contraction}]{\seq{}{\Delta,B}}{\seq{}{\Delta,B,B}}
\qquad
\infer[\hbox{initial}]{\seq{}{B,\neg B}}{}
\qquad
\infer[\hbox{cut}]{\seq{}{\Delta_1,\Delta_2}}
{\seq{}{\Delta_1,B}\quad\seq{}{\Delta_2,\neg B}}
\]
The negation symbol $\neg(\cdot)$ is used here not as a logical
connective but as a function that computes the negation normal form of
a formula.
The remaining rules of the sequent calculus are introduction
rules: for these rules, a logical connective has an occurrence in the
conclusion and does not have an occurrence in the premises.
(We shall see several different sets of introduction inference rules
shortly.)
When a sequent calculus inference rule has two (or more) premises,
there are two natural schemes for managing the side formulas (i.e.,
the formulas not being introduced) in that rule. The following rules
illustrate these two choices for conjunction.
\[
\infer{\seq{}{B\wedge C,\Delta}}
{\seq{}{B,\Delta}\quad \seq{}{C,\Delta}}
\qquad
\infer{\seq{}{B\wedge C,\Delta_1,\Delta_2}}
{\seq{}{B,\Delta_1}\quad \seq{}{C,\Delta_2}}
\]
The choice on the left is the \emph{additive} version of the rule:
here, the side formulas in the conclusion are the same in all the
premises.
The choice on the right is the \emph{multiplicative} version of the
rule: here, the various side formulas of the premises are accumulated
to be the side formulas of the conclusion.
Note that the cut rule above is an example of a multiplicative
inference rule.
A logical connective with an additive right introduction rule is also
classified as additive. In addition, the de Morgan dual and the unit
of an additive connective are also additive connectives.
Similarly, a logical connective with a multiplicative
right-introduction rule is called multiplicative; so are its de Morgan
dual and their units.
The multiplicative and additive versions of inference rules are, in
fact, inter-admissible if the proof system contains weakening and
contraction.
In linear logic, where these structural rules are not available, the
conjunction and disjunction have additive versions $\mathbin{\&}$ and
$\mathbin{\oplus}$ and multiplicative versions $\mathbin{\otimes}$ and $\mathbin{\wp}$,
respectively, and these different versions of conjunction and
disjunction are not provably equivalent.
Linear logic provides two \emph{exponentials}, namely the $\mathop{!}$ and
$\mathop{?}$, that permit limited forms of the structural rules for suitable
formulas.
The familiar exponential law $x^{n+m}=x^n x^m$ extends to the logical
additive and multiplicative connectives since $\mathop{!}(B\mathbin{\&}
C)\equiv\mathop{!} B\mathbin{\otimes} \mathop{!} C$ and $\mathop{?}(B\mathbin{\oplus}
C)\equiv\mathop{?} B\mathbin{\wp} \mathop{?} C$.
While we are interested in model checking as it is practiced, we shall
be interested in only performing inference in classical logic.
One of the surprising things to observe about our proof theoretical
treatment of model checking is that almost all of it can be seen as
taking place within the proof theory of linear logic, a logic that
sits behind classical (and intuitionistic) logic.
As a result, the distinction between additive and multiplicative
connectives remains an important distinction for our framework.
Also, weakening and contraction will not be eliminated completely but
will be available for only certain formulas and in certain inference
steps (echoing the fact that in linear logic, these structural rules
can be applied to formulas annotated with exponentials).
\section{Additive propositional connectives}
\label{sec:prop}
Let ${\cal A}$ be the set of formulas built from the propositional
connectives $\{\wedge,t,\vee,f\}$ (no propositional
atoms are included).
Consider the following small proof system involving one-sided sequents.
\[
\infer{\seq{}{B_1\wedge B_2,\Delta}}
{\seq{}{B_1,\Delta}\quad \seq{}{B_2,\Delta}}
\qquad
\infer{\seq{}{t,\Delta}}{}
\qquad
\infer{\seq{}{B_1\vee B_2,\Delta}}
{\seq{}{B_1,\Delta}}
\qquad
\infer{\seq{}{B_1\vee B_2,\Delta}}
{\seq{}{B_2,\Delta}}
\]
Here, $t$ is the unit of $\wedge$, and $f$ is the unit of $\vee$.
Notice that $\vee$ has two introduction rules while $f$ has none.
Also, $t$ and $\wedge$ are de Morgan duals of $f$ and $\vee$,
respectively.
We say that the
multiset $\Delta$ is provable if and only if there is a proof of
$\seq{}{\Delta}$ using these inference rules.
Also, we shall consider no additional inference rules (that is, no
contraction, weakening, initial, or cut rules): this
inference system is composed only of introduction rules and all of
these introduction rules are for \emph{additive} logical connectives.
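For example, the sequent $\seq{}{(t\vee f)\wedge (f\vee t)}$ has the following proof, in which each $\vee$ introduction selects the provable disjunct:
\[
\infer{\seq{}{(t\vee f)\wedge (f\vee t)}}
      {\infer{\seq{}{t\vee f}}{\infer{\seq{}{t}}{}}
       \qquad
       \infer{\seq{}{f\vee t}}{\infer{\seq{}{t}}{}}}
\]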
The following theorem identifies an important property of this purely
additive setting.
This theorem is proved by a straightforward induction on the structure
of proofs.
\begin{theorem}[Strengthening]
\label{thm:strength}
If $\Delta$ is a multiset of ${\cal A}$-formulas and $\seq{}{\Delta}$
then $\exists\; B\in\Delta$ such that $\seq{}{B}$.
\end{theorem}
This theorem shows that provability of purely additive
formulas is independent of their context.
It also establishs that the proof system is consistent, since the
empty sequent $\seq{}{\cdot}$ is not provable.
The following three theorems state that the missing inference rules of
weakening, contraction, initial, and cut are all admissible in this
proof system. The first theorem is an immediate consequence of
Theorem~\ref{thm:strength}. The following two theorems are proved,
respectively, by induction on the structure of formulas and by
induction on the structure of proofs.
\begin{theorem}[Weakening \& contraction admissibility]
\label{thm:wc}
Let $\Delta_1$ and $\Delta_2$ be multisets of ${\cal A}$-formulas
such that $\Delta_1$ is a subset of $\Delta_2$ (when viewed as sets).
If $\seq{}{\Delta_1}$ is provable then $\seq{}{\Delta_2}$ is
provable.
\end{theorem}
\begin{theorem}[Initial admissibility]
\label{thm:init}
Let $B$ be an ${\cal A}$-formula. Then $\seq{}{B,\neg B}$ is provable.
\end{theorem}
\begin{theorem}[Cut admissibility]
\label{thm:cut}
Let $B$ be an ${\cal A}$-formula and let $\Delta_1$ and $\Delta_2$ be
multisets of ${\cal A}$-formulas.
If both $\seq{}{B,\Delta_1}$ and $\seq{}{\neg B,\Delta_2}$ are
provable, then there is a proof of $\seq{}{\Delta_1,\Delta_2}$.
\end{theorem}
These theorems lead to the following truth-functional semantics for
${\cal A}$ formulas:
define $\valuation{\cdot}$ as a mapping from ${\cal A}$
formulas to booleans such that $\valuation{B}$ is $t$ if $\seq{}{B}$ is
provable and is $f$ if $\seq{}{\neg B}$ is provable.
Theorem~\ref{thm:init} implies that
$\valuation{\cdot}$ is always defined and Theorem~\ref{thm:cut}
implies that $\valuation{\cdot}$ is functional
(does not map a formula to two different booleans).
The introduction rules describe this function
\emph{denotationally}: e.g., $\valuation{A\wedge B}$ is the
truth-functional conjunction of $\valuation{A}$ and $\valuation{B}$
(similarly for $\vee$).
While this logic of ${\cal A}$-formulas is essentially trivial, we will
soon introduce much more powerful additive inference rules: their
connection to truth functional interpretations (a la model checking
principles) will arise from the fact that their provability is not
dependent on other formulas in a sequent.
\section{Additive first-order structures}
\label{sec:fos}
We move to first-order logic by adding terms, equality on terms, and
quantification.
We shall assume that some \emph{ranked signature} $\Sigma$ of term
constructors is given: such a signature associates to every
constructor a natural number indicating that constructor's arity.
Term constants are identified with signature items given rank 0. A
$\Sigma$\emph{-term} is a (closed) term built from only constructors
in $\Sigma$ and obeying the rank restrictions. For example, if
$\Sigma$ is $\{a/0, b/0, f/1, g/2\}$, then $a$, $(f~a)$, and
$(g~(f~a)~b)$ are all $\Sigma$-terms.
We shall consider only signatures for which there exist
$\Sigma$-terms: for example, the set $\{f/1, g/2\}$ is not a valid
signature.
The usual symbols $\forall$ and $\exists$ will be used for the
universal and existential quantification over terms.
We assume that these quantifiers range over $\Sigma$-terms for some
fixed signature.
The arities of ranked signatures will often not be listed explicitly.
The equality and inequality of terms will be treated as (de Morgan
dual) logical connectives in the sense that their meaning is given by
the following introduction rules.
\[
\infer{\seq{}{t=t,\Delta}}{}
\qquad\qquad
\infer[\hbox{$t$ and $s$ differ}]{\seq{}{t\not= s,\Delta}}{}
\]
Here, $t$ and $s$ are $\Sigma$-terms for some ranked
signature $\Sigma$.
Consider (only for the scope of this section) the following two
inference rules for quantification. In these introduction rules,
$[t/x]$ denotes the capture-avoiding substitution.
\[
\infer[\exists]{\seq{}{\exists x.B,\Delta}}{\seq{}{B[t/x],\Delta}}
\qquad
\infer[\hbox{$\forall$-ext}]
{\seq{}{\forall x.B,\Delta}}
{ \{~\seq{}{B[t/x],\Delta}~~|~~\Sigma\hbox{-term}~t~\} }
\]
Although $\forall$ and $\exists$ form a de Morgan dual pair, the rule
for introducing the universal quantifier is not the standard one used
in the sequent calculus (we will introduce the standard one later).
This rule, which is similar to the $\omega$-rule \cite{schwichtenberg77hml},
is an extensional approach to modeling quantification:
a universally quantified formula is true if all instances of it are true.
Consider now the logic built with the (additive) propositional
constants of the previous section and with equality, inequality, and
quantifiers.
The corresponding versions of all four theorems in
Section~\ref{sec:prop} hold for this logic.
Similarly, we can extend the evaluation function for
${\cal A}$-formulas to work for the quantifiers: in particular,
$\valuation{\forall x. B x}=\bigwedge_t \valuation{B t}$ and
$\valuation{\exists x. B x}=\bigvee_t \valuation{B t}$.
Such a result is not surprising, of course, since we have repeated
within inference rules the usual semantic conditions.
The fact that these theorems hold indicates that the proof theory we
have presented so far offers nothing new over truth functional
semantics.
Similarly, this bit of proof theory offers nothing appealing to model
checking, as illustrated by the following example.
\begin{example}
\label{ex:subset}
Let $\Sigma$ contain the ranked symbols $z/0$ and $s/1$ and let us
abbreviate the terms $z$, $(s~z)$, $(s~(s~z))$, $(s~(s~(s~z)))$, etc.\
by {\bf 0}, {\bf 1}, {\bf 2}, {\bf 3}, etc.
Let $A$ and $B$ be the set of terms $\{{\bf 0}, {\bf 1}\}$ and
$\{{\bf 0}, {\bf 1}, {\bf 2}\}$, respectively.
These sets can be encoded as the predicate expressions
$\lambda x. x={\bf 0}\vee x={\bf 1}$ and $\lambda x. x={\bf 0}\vee
x={\bf 1}\vee x={\bf 2}$.
The fact that $A$ is a subset of $B$ can be denoted by
the formula $\forall x. \neg(A\,x)\vee B\,x$ or, equivalently, as
\[
\forall x. (x\not={\bf 0}\wedge x\not={\bf 1})\vee x={\bf 0}\vee x={\bf 1}\vee x={\bf 2}
\]
Proving this formula requires an infinite number of premises of the
form $(t\not={\bf 0}\wedge t\not={\bf 1})\vee t={\bf 0}\vee t={\bf
1}\vee t={\bf 2}$. Since each of these premises can, of course, be
proved, the original formula is provable, albeit with an ``infinite
proof''.
\end{example}
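The infinitary flavor of this example disappears the moment the domain itself is finite; a hypothetical Python rendering (our encoding, not the paper's) of the truth-functional valuation of the quantifiers over a finite stand-in for the term domain:

```python
# Hypothetical sketch: the extensional reading of the quantifiers over a
# finite domain, where forall is a big conjunction (all) and exists a big
# disjunction (any), as in the valuation function of the text.
TERMS = [0, 1, 2, 3]                    # stand-ins for the numerals 0..3

def forall(B): return all(B(t) for t in TERMS)
def exists(B): return any(B(t) for t in TERMS)

A = lambda x: x in (0, 1)               # the set A = {0, 1}
B = lambda x: x in (0, 1, 2)            # the set B = {0, 1, 2}

# A subset of B, read as  forall x. (not A x) or (B x)
assert forall(lambda x: (not A(x)) or B(x))
# B is not a subset of A
assert not forall(lambda x: (not B(x)) or A(x))
```

Over an infinite term domain the same `all(...)` would never terminate, which is exactly the ``infinite proof'' problem the text goes on to address.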
While determining the subset relation between two finite sets is a
typical example of a model checking problem, one would not use the
above-mentioned inference rule for $\forall$ except in the extreme
cases where there is a finite and small set of $\Sigma$-terms.
As we can see, the additive inference rule for
$\forall$-quantification generally leads to ``infinitary proofs''
(an oxymoron that we now avoid at all costs).
\section{Multiplicative connectives}
\label{sec:hypotheticals}
Our departure from purely additive inference rules now seems forced
and we continue by adding multiplicative inference rules.
Our first multiplicative connective is the intuitionistic implication:
since the most natural
treatment of this connective uses two-sided sequents, we make the move
away from the one-sided sequents that we have presented so far (see
Figure~\ref{fig:new}).
Note that taking the two multiplicative rules of implication right
introduction and initial yields a proof system that violates the
strengthening theorem (Section~\ref{sec:prop}):
the sequent $\seq{}{p\mathbin{\supset} q,p}$ is provable while neither
$\seq{}{p\mathbin{\supset} q}$ nor $\seq{}{p}$ are provable.
A common observation in proof theory is that the curry/uncurry
equivalence between $A\mathbin{\supset} B\mathbin{\supset} C$ and $(A\wedge B)\mathbin{\supset} C$ can be
mimicked precisely by the proof system: in this case, such precision
does not occur with the additive rules for conjunction but
rather with the multiplicative version of conjunction.
To this end, we add the multiplicative conjunction
$\mathbin{\wedge\kern-1.5pt^+}$ and its unit $\true^+$ and, for the sake of symmetry, we rename
$\wedge$ as $\mathbin{\wedge\kern-1.5pt^-}$ and $t$ to $\true^-$.
(The plus and minus symbols are related to the polarization of logical
connectives that is behind the construction of synthetic connectives.)
These two conjunctions and two truth symbols are logically equivalent
in classical and intuitionistic logic although they are different in
linear logic where it is more traditional to write $\mathbin{\&}$, $\top$,
$\mathbin{\otimes}$, $\mathbf{1}$ for $\mathbin{\wedge\kern-1.5pt^-}$, $\true^-$, $\mathbin{\wedge\kern-1.5pt^+}$, $\true^+$,
respectively.
The ``multiplicative false'' $\false^-$ (written as $\perp$ in linear
logic) can be defined as $t\not=t$ (assuming that there is a
first-order term $t$).
Eigenvariables are binders at the sequent level that
align with binders within formulas (i.e., quantifiers).
Binders are an intimate and low-level feature of logic: the addition
of eigenvariables requires redefining the notions of term and sequent.
\begin{figure}
\caption{Introduction rules for propositional constants,
    quantifiers, and equality. The $\exists$ rule is
    restricted so that $t$ is a $\Sigma({\cal X})$-term.}
\label{fig:new}
\end{figure}
Let the set ${\cal X}$ denote \emph{first-order variables} and let
$\Sigma({\cal X})$ denote all terms built from constructors in $\Sigma$
and from the variables ${\cal X}$: in the construction of
$\Sigma({\cal X})$-terms, variables act as constructors of arity 0.
(We assume that $\Sigma$ and ${\cal X}$ are disjoint.)
A $\Sigma({\cal X})$\emph{-formula} is one where all term constructors
are taken from $\Sigma$ and all free variables are contained in
${\cal X}$.
Sequents are now written as $\seqx{\Gamma}{\Delta}$: the
intended meaning of such a sequent is that the variables in the set
${\cal X}$ are bound over the formulas in $\Gamma$ and $\Delta$.
We shall also assume that formulas in $\Gamma$ and $\Delta$ are all
$\Sigma({\cal X})$-formulas.
All inference rules are modified to account for this additional
binding: see Figure~\ref{fig:new}.
The variable $y$ used in the $\forall$ introduction rule is called, of
course, an eigenvariable.
The left introduction rules for equality in Figure~\ref{fig:new}
significantly generalize the version involving only closed terms by
making reference to unifiability and to most general unifiers.
In the latter case, the domain of the substitution $\theta$ is a
subset of ${\cal X}$, and the set of variables $\theta{\cal X}$ is the result of
removing from ${\cal X}$ all the variables in the domain of $\theta$ and
then adding in all those variables free in the range of $\theta$.
This treatment of equality was developed independently by
Schroeder-Heister \cite{schroeder-heister93lics} and Girard
\cite{girard92mail} and has been extended to include simply typed
$\lambda$-terms \cite{mcdowell00tcs}.
While the use of eigenvariables in proofs allows us to deal with
quantifiers with finite proofs, that treatment is not directly
related to model theoretic semantics.
In particular, the strengthening theorem does not hold for this proof
system.
As a result, obtaining a soundness and completeness theorem for
this logic is no longer trivial.
The inference rules in Figure~\ref{fig:new} provide a
proper proof of the theorem considered in Example~\ref{ex:subset}.
\begin{example}
\label{ex:subset again}
Let $\Sigma$ and the sets $A$ and $B$ be as in
Example~\ref{ex:subset}. Showing that $A$ is a subset of $B$ requires
showing that the formula $\forall x (A x\mathbin{\supset} B x)$ is provable. That is,
we need to find a proof of the sequent
$
\seq{}{\forall x.(x={\bf 0}\vee x={\bf 1})
\mathbin{\supset} (x={\bf 0}\vee x={\bf 1}\vee x={\bf 2})}.
$
The following proof of this sequent uses the rules from
Figure~\ref{fig:new}: a double line means that two or more inference
rules might be chained together.
\begin{footnotesize}
\[
\infer={\seqx[\cdot]{\cdot}{\forall x.(x={\bf 0}\vee x={\bf 1})
\mathbin{\supset} (x={\bf 0}\vee x={\bf 1}\vee x={\bf 2})}}{
\infer{\seqx[x]{x={\bf 0}\vee x={\bf 1}}{x={\bf 0}\vee x={\bf 1}\vee x={\bf 2}}}{
\infer{\seqx[x\kern-1pt]{x={\bf 0}}{x={\bf 0}\vee x={\bf 1}\vee x={\bf 2}}}{
\infer={\seqx[\cdot]{\cdot}{{\bf 0}={\bf 0}\vee {\bf 0}={\bf 1}\vee {\bf 0}={\bf 2}}}{
\infer{\seqx[\cdot]{\cdot}{{\bf 0}={\bf 0}}}{}}}
&
\infer{\seqx[x\kern-1pt]{x={\bf 1}}{x={\bf 0}\vee x={\bf 1}\vee x={\bf 2}}}{
\infer={\seqx[\cdot]{\cdot}{{\bf 1}={\bf 0}\vee {\bf 1}={\bf 1}\vee {\bf 1}={\bf 2}}}{
\infer{\seqx[\cdot]{\cdot}{{\bf 1}={\bf 1}}}{}}}}}
\]
\end{footnotesize}
The proof in this example is able to account for
a simple version of ``reachability'' in the sense that we only need to
consider checking membership in set $B$ for just those elements
``reached'' in $A$.
\end{example}
\section{Fixed points}
\label{sec:fixed}
A final step in building a logic that can start to provide a
foundation for model checking is the addition of least and greatest
fixed points and their associated rules for induction and coinduction.
Given that processes generally exhibit potentially infinite behaviors
and that term structures are not generally bounded in their size, it
is important for a logical foundation of model checking to allow for
some treatment of infinity.
The logic described by the proof system in
Figure~\ref{fig:new} is a two-sided version of MALL$^=$ (multiplicative
additive linear logic extended with first-order quantifiers and
equality) \cite{baelde07lpar}.
The decidability of this logic is easy to show: as one moves from
conclusion to premise in every inference rule, the number of
occurrences of logical connectives decreases.
As a result, it is a simple matter to write an exhaustive search
procedure that must necessarily terminate (such a search procedure can
also make use of the decision procedure for first-order unification).
In order to extend the expressiveness of MALL, Girard added the
exponentials $\mathop{!}$, $\mathop{?}$ to MALL to get full linear logic
\cite{girard87tcs}. The standard inference rules for exponentials
allow for some forms of the contraction rule
(Section~\ref{sec:basics}) to appear in proofs and, as a result,
provability is no longer decidable. A different approach to extending
MALL with the possibility of having unbounded behavior was proposed in
\cite{baelde07lpar}: add to MALL$^=$ the least and greatest fixed point
operators, written as $\mu$ and $\nu$, respectively. The proof
theory of the resulting logic, called \ensuremath{\mu\text{MALL}^=}, has been developed in
\cite{baelde12tocl} and exploited in a prototype model checker
\cite{baelde07cade}.
Fixed point expressions are written as $\mu{}B\,\bar{t}$ or
$\nu{}B\,\bar{t}$, where $B$ is an expression representing a monotonic
higher-order abstraction, and $\bar{t}$ is a list of terms;
by monotonic, we mean that the higher-order argument of $B$ can only
occur in $B$ under even numbers of negations.
The unfolding of the fixed
point expressions $\mu B\,\bar{t}$ and $\nu B\,\bar{t}$ are $B(\mu{}B)\,\bar{t}$ and
$B(\nu{}B)\,\bar{t}$, respectively.
\begin{example}
\label{ex:graph}
Horn clauses (in the sense of Prolog) can be encoded as purely
positive fixed point expressions. For example, here is the Horn
clause logic program (using the $\lambda$Prolog syntax, the
\verb+sigma Y\+ construction encodes the quantifier $\exists{}Y$)
for specifying a (tiny) graph and its transitive closure:
\begin{verbatim}
step a b. step b c. step c b.
path X Y :- step X Y.
path X Z :- sigma Y\ step X Y, path Y Z.
\end{verbatim}
We can translate the \verb.step. relation into the binary predicate
$\one{\cdot}{}{\cdot}$ defined by
\begin{align*}
\mu(\lambda{}A\lambda{}x\lambda{}y.\,
(x=a\mathbin{\wedge\kern-1.5pt^+} y=b)\mathbin{\vee}&(x=b\mathbin{\wedge\kern-1.5pt^+} y=c)\mathbin{\vee}(x=c\mathbin{\wedge\kern-1.5pt^+} y=b))
\end{align*}
which only uses positive connectives. Likewise, \verb.path. can be
encoded as the relation $\hbox{\sl path}(\cdot,\cdot)$:
\begin{align*}
\mu(\lambda{}A\lambda{}x\lambda{}z.\,
\one{x}{}{z}\mathbin{\vee}(\exists{}y.\,\one{x}{}{y}\mathbin{\wedge\kern-1.5pt^+}{}A\,y\,z)).
\end{align*}
To illustrate unfolding of the adjacency relation, note that unfolding
the expression $\one{a}{}{c}$ yields the formula
$ (a=a\mathbin{\wedge\kern-1.5pt^+} c=b)\mathbin{\vee}(a=b\mathbin{\wedge\kern-1.5pt^+} c=c)\mathbin{\vee}(a=c\mathbin{\wedge\kern-1.5pt^+} c=b)$
which is not provable. Unfolding the expression $\hbox{\sl path}(a,c)$ and
performing $\beta$-reductions yields the expression
$ \one{a}{}{c}\mathbin{\vee}(\exists{}y.\,\one{a}{}{y}\mathbin{\wedge\kern-1.5pt^+}{}\hbox{\sl path}\,y\,c).$
\end{example}
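For concreteness, the least fixed point semantics of this example can be computed directly (a hypothetical sketch in Python; the set-of-pairs encoding of the relations is ours, not the paper's):

```python
# Hypothetical sketch: the step/path Horn clauses as a least fixed point
# over the tiny graph, computed by iterating the unfolding
#   path  =  step  union  (step ; path)
# until it stabilizes.
STEP = {('a', 'b'), ('b', 'c'), ('c', 'b')}

def path_lfp(step):
    path = set()
    while True:
        new = step | {(x, z) for (x, y) in step
                             for (y2, z) in path if y == y2}
        if new == path:
            return path
        path = new

PATH = path_lfp(STEP)
assert ('a', 'c') in PATH       # path(a, c) holds, as in the example
assert ('a', 'c') not in STEP   # while the unfolding of step(a, c) fails
```

The `while` loop plays the role of repeated unfolding: each iteration replaces the recursive occurrence of the predicate by the relation computed so far.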
\begin{figure}
\caption{Introduction rules for least ($\mu$) and greatest ($\nu$) fixed points}
\label{fig:fixed}
\end{figure}
In \ensuremath{\mu\text{MALL}^=}, both $\mu$ and $\nu$ are treated as logical connectives
in the sense that they will have introduction rules. They are also de
Morgan duals of each other. The inference rules for treating fixed
points are given in Figure~\ref{fig:fixed}. The rules for induction
and coinduction ($\mu L$ and $\nu R$, respectively) use a higher-order
variable $S$ which represents the invariant and coinvariant in these
rules. As a result, it will not be the case that cut-free proofs
will necessarily have the
sub-formula properties: the invariant and coinvariant are not
generally subformulas of the rule that they conclude.
The following unfolding rules are also admissible since they
can be derived using induction and coinduction.
\[
\begin{array}{c}
\infer{\seqx{\Gamma,\mu{}B\bar{t}}{\Delta}}
{\seqx{\Gamma,B(\mu{}B)\bar{t}}{\Delta}}
\qquad
\infer{\seqx{\Gamma}{\nu{}B\bar{t},\Delta}}
{\seqx{\Gamma}{B(\nu{}B)\bar{t},\Delta}}
\end{array}
\]
The introduction rules in Figures~\ref{fig:new}
and~\ref{fig:fixed} are exactly the introduction rules of \ensuremath{\mu\text{MALL}^=},
except for two shallow differences. The first difference is that the
usual presentation of \ensuremath{\mu\text{MALL}^=} is via one-sided sequents (here, we
use two-sided sequents). The second difference is that we have written
many of the connectives differently (hoping that our set of
connectives will feel more comfortable to those not familiar with
linear logic). To be precise, to uncover the linear logic
presentation of formulas, one must translate
$\mathbin{\wedge\kern-1.5pt^-}$, $\true^-$, $\mathbin{\wedge\kern-1.5pt^+}$, $\true^+$, $\vee$, and $\supset$ to
$\mathbin{\&}$, $\top$, $\mathbin{\otimes}$, $\mathbf{1}$, $\oplus$, and $\mathbin{-\hspace{-0.70mm}\circ}$
\cite{girard87tcs}. Note that the linear implication $B\mathbin{-\hspace{-0.70mm}\circ} C$ can
be taken as an abbreviation of $\neg B\mathbin{\wp} C$.
The following example shows that it is possible to prove some negations
using either unfolding (when there are no cycles in the resulting
state exploration) or induction.
\begin{example}
\label{ex:negation}
Below is a proof that the node $a$ is not adjacent to $c$: the first
step of this proof involves unfolding the definition of the adjacency
predicate into its description.
\vskip-8pt
\begin{footnotesize}
\[
\infer{\seq{\one{a}{}{c}}{\cdot}}{
\infer{\seq{(a=a\mathbin{\wedge\kern-1.5pt^+} c=b)\mathbin{\vee}(a=b\mathbin{\wedge\kern-1.5pt^+} c=c)\mathbin{\vee}(a=c\mathbin{\wedge\kern-1.5pt^+} c=b)}{\cdot}}{
\infer{\seq{ a=a\mathbin{\wedge\kern-1.5pt^+} c=b}{\cdot}}{\infer{\seq{a=a,c=b}{\cdot}}{}}
\quad
\infer{\seq{ a=b\mathbin{\wedge\kern-1.5pt^+} c=c}{\cdot}}{\infer{\seq{a=b,c=c}{\cdot}}{}}
\quad
\infer{\seq{ a=c\mathbin{\wedge\kern-1.5pt^+} c=b}{\cdot}}{\infer{\seq{a=c,c=b}{\cdot}}{}}}}
\]
\vskip -5pt
\end{footnotesize}
\noindent A simple proof exists for $\hbox{\sl path}(a,c)$: one simply unfolds the fixed
point expression for $\hbox{\sl path}(\cdot,\cdot)$ and chooses correctly when
presented with a disjunction and existential on the right of the
sequent arrow.
Given the definition of the path predicate, the following rules are
clearly admissible. We write $\tup{t,s}\in\hbox{\sl Adj}$ whenever
$\seq{\cdot}{\one{t}{}{s}}$ is provable.
\[
\infer[\tup{t,s}\in\hbox{\sl Adj}]{\seqx{\Gamma,\hbox{\sl path}(t,s)}{\Delta}}{\seqx{\Gamma}{\Delta}}
\qquad
\infer{\seqx{\Gamma,\hbox{\sl path}(t,y)}{\Delta}}{\{\seqx{\Gamma,\hbox{\sl path}(s,y)}{\Delta}\ |
\ \tup{t,s}\in\hbox{\sl Adj}\}}
\]
The second rule has a premise for every pair $\tup{t,s}$ of adjacent
nodes: if $t$ is adjacent to no nodes, then this rule has no premises
and the conclusion is immediately proved.
A naive attempt to prove that there is no path from $c$ to $a$ gets
into a loop (using these admissible rules): attempt to prove
$\seq{\hbox{\sl path}(c,a)}{\cdot}$
leads to an attempt to prove
$\seq{\hbox{\sl path}(b,a)}{\cdot}$
and again attempting to prove
$\seq{\hbox{\sl path}(c,a)}{\cdot}$.
Such a cycle can be examined to yield an invariant that makes it
possible to prove the end-sequent. In particular, the set of nodes
reachable from $c$ is $\{b,c\}$, a subset of $N=\{a,b,c\}$. The
invariant $S$ can be
described as the set which is the complement (with respect to $N\times
N$) of the set $\{b,c\}\times\{a\}$, or equivalently as the predicate
$\lambda x\lambda y. \bigvee_{\tup{u,v}\in S} (x=u\mathbin{\wedge\kern-1.5pt^+} y=v)$.
With this invariant, the induction rule
($\mu L$) yields two premises. The left premise simply needs to
confirm that the pair $\tup{c,a}$ is not a member of $S$.
The right premise sequent $\seqx[\bar{x}]{BS\bar{x}}{S\bar{x}}$ establishes that $S$
is an invariant for the $\mu{}B$ predicate.
In the present case, the argument list $\bar{x}$ is just a pair of
variables, say, $x,z$, and $B$ is the body of the $\hbox{\sl path}$ predicate:
the right premise is the sequent
$
\ x,z\; ;\; \seq{\one{x}{}{z}\mathbin{\vee}(\exists{}y.\,\one{x}{}{y}\mathbin{\wedge\kern-1.5pt^+}{}S\,y\,z)}{S\,x\,z}.
$
A formal proof of this follows easily by blindly applying applicable
inference rules.
\end{example}
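The reachable-set computation that breaks the cycle in this example is itself a small fixed point calculation; a hypothetical sketch (our encoding):

```python
# Hypothetical sketch: the loop met when searching for a proof of
# "no path from c to a" disappears once we compute the set of nodes
# reachable from c; the invariant is then read off from this set.
STEP = {('a', 'b'), ('b', 'c'), ('c', 'b')}

def reachable(step, start):
    """Nodes reachable from `start` in one or more steps."""
    seen = set()
    frontier = {t for (s, t) in step if s == start}
    while frontier:
        seen |= frontier
        frontier = {t for (s, t) in step if s in frontier} - seen
    return seen

assert reachable(STEP, 'c') == {'b', 'c'}   # the cycle {b, c}
assert 'a' not in reachable(STEP, 'c')      # hence no path from c to a
```

Since $a$ is not in the reachable set, every pair in $\{b,c\}\times\{a\}$ can safely be excluded from the invariant, exactly as the text describes.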
While the rules for fixed points (via induction and coinduction) are
strong enough to transform cyclic behaviors into, for example,
non-reachability or (bi)simulation assertions,
these rules are not strong enough to prove
other simple facts about fixed points. For
example, consider the following two named fixed point expressions used
for identifying natural numbers and the ternary relation of addition.
\begin{align*}
\hbox{\sl nat}\xspace= &\mu\lambda N\lambda n(n=z\vee\exists n'(n=s n'\mathbin{\wedge\kern-1.5pt^+} N~n'))\\
\hbox{\sl plus}\xspace = &\mu\lambda P\lambda n\lambda m\lambda p
((n=z\mathbin{\wedge\kern-1.5pt^+} m=p)\vee
\exists n'\exists p'(n=s n'\mathbin{\wedge\kern-1.5pt^+} p=s p'\mathbin{\wedge\kern-1.5pt^+} P~n'~m~p'))
\end{align*}
The following formula (stating that the addition of two numbers is commutative)
\[
\forall n\forall m\forall p.
\nat~n \supset \nat~m \supset \hbox{\sl plus}\xspace~n~m~p\supset\hbox{\sl plus}\xspace~m~n~p
\]
is not provable using the inference rules we have described. The
reason that this formula does not have a proof is not because the
induction rule ($\mu L$ in Figure~\ref{fig:fixed}) is not strong
enough or that we are actually sitting inside linear logic: it is
because an essential feature of inductive arguments is missing.
Consider attempting a proof by induction that the property $P$ holds
for all natural numbers. Besides needing to prove that $P$ holds of
zero, we must also introduce an arbitrary integer $j$ (corresponding
to the eigenvariables of the right premise in $\mu L$) and show that
the statement $P(j+1)$ reduces to the statement $P(j)$. That is,
after manipulating the formulas describing $P(j+1)$ we must be able to
find in the resulting argument, formulas describing $P(j)$. Up until
now, we have only ``performed'' formulas (by applying introduction
rules) instead of checking them for equality. More specifically, while
we do have a logical primitive for checking equality of terms, the
proof system described so far does not have an equality for comparing
formulas. As a result, some of the most basic theorems are not
provable in this system. For example, there is no proof of
$\forall n.(\nat~n \supset \nat~n)$.
Model checking is not the place where we should
be attempting proofs involving arbitrary infinite domains: inductive theorem
provers are used for that. If we restrict to finite domains,
however, proofs appear. For example, consider the less-than binary
relation defined as
\begin{align*}
\hbox{\sl lt}\xspace = \mu \lambda L\lambda x\lambda y
((x = z \mathbin{\wedge\kern-1.5pt^+} \exists y'. y = s y') \vee
(\exists x'\exists y'. x = s x' \mathbin{\wedge\kern-1.5pt^+} y = s y'\mathbin{\wedge\kern-1.5pt^+} L~x'~y'))
\end{align*}
The formula $(\forall n. \hbox{\sl lt}\xspace~n~{\bf 10}\supset \hbox{\sl lt}\xspace~n~{\bf 10})$
has a proof that involves generating all numbers
less than 10 and then showing that they are, in fact, all less than 10.
Similarly, a proof of the formula
$
\forall n\forall m\forall p(
\hbox{\sl lt}\xspace~n~{\bf 10}\supset \hbox{\sl lt}\xspace~m~{\bf 10}\supset
\hbox{\sl plus}\xspace~n~m~p\supset \hbox{\sl plus}\xspace~m~n~p)
$
exists and consists of enumerating 100 pairs of numbers $\tup{n,m}$
and checking that the result of adding $n+m$ yields the same value as
adding $m+n$.
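That finite-domain proof is literally an enumeration; as a hypothetical sketch (ours), with the unary numerals of the fixed point definition collapsed onto Python integers:

```python
# Hypothetical sketch: the proof of commutativity over n, m < 10 is an
# enumeration of the 100 pairs, with plus computed by unfolding its
# fixed point definition on unary numerals.
def plus(n, m):
    # (n = z and m = p)  or  (n = s n' and p = s p' and plus n' m p')
    return m if n == 0 else 1 + plus(n - 1, m)

assert all(plus(n, m) == plus(m, n)
           for n in range(10) for m in range(10))
```

Each recursive call of `plus` corresponds to one unfolding of the $\mu$ expression, so the enumeration terminates because every argument is bounded by 10.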
The full proof system for \ensuremath{\mu\text{MALL}^=} contains the cut rule and the
following two initial rules.
\[
\infer[\mu\,init]{\seqx{\mu{}B\bar{t}}{\mu{}B\bar{t}}}{}
\qquad
\infer[\nu\,init]{\seqx{\nu{}B\bar{t}}{\nu{}B\bar{t}}}{}
\]
The more general instance of the initial rule can be eliminated in
favor of these two specific instances.
\section{Conclusions}
Linear logic is usually understood as being an intensional logic whose
semantic treatments are quite remote from the simple model theory
consideration of first-order logic and arithmetic.
Thus, we draw the possibly surprising conclusion that the proof
theory of linear logic provides a suitable framework for model checking.
Many of the salient features of linear logic---lack of structural
rules, two conjunctions and two disjunctions, polarization---play
important roles in this framework.
The role of linear logic here seems completely different and removed
from, say, the use of linear logic to model multiset rewriting and
Petri nets \cite{kanovich95apal}:
we use it instead as \emph{the logic behind logic}.
In order to capture model checking, we need to deal with possibly
unbounded behaviors in specifications.
Instead of using the rule of contraction (which states, for example,
that the hypothesis $B$ can be repeated as the two hypotheses $B, B$)
we have used the theory of fixed points: there, unfolding replaces
$\mu B \bar{t}$ with $(B (\mu B) \bar{t})$, thus copying the definition of $B$.
The use of fixed points also allows for the direct and natural
applications of the induction and coinduction principles.
In the full version of this paper, we show how a focused proof system
for \ensuremath{\mu\text{MALL}^=} can be used to describe large scale (synthetic) additive
inference rules that are built from smaller scale inference rules that
may be multiplicative.
There can be several benefits for establishing and developing model
checking within proof theory.
One way to integrate theorem provers and model checkers would be to
allow them to exchange proof certificates in a common language of
formulas and proofs. The logic of \ensuremath{\mu\text{MALL}^=} is close to the logic and
proofs used in some inductive theorem provers.
Also, linear logic is rich in duality. Certain techniques used in
model checking topics should be expected to dualize well. For
example, what is the dual notion for least fixed points of the
notion of bisimulation-up-to? What does predicate abstraction look
like when applied to greatest fixed points?
Proof theory is a framework that supports rich abstractions, including
term-level abstractions, such as bindings in terms. Thus, moving from
model checking using first-order terms to using simply typed
$\lambda$-terms is natural in proof theory: such proof
theoretic investigations of model checking over linguistic structures
including binders have been studied in \cite{miller05tocl} and have
been implemented in the Bedwyr system \cite{baelde07cade} which has
been applied to various model checking problems related to the
$\pi$-calculus \cite{tiu05fguc,tiu10tocl}.
\noindent{\bf Acknowledgments.} We thank the reviewers of an earlier
draft of this abstract for their comments. This work has been funded
by the ERC Advanced Grant Proof\kern 0.8pt Cert.
\end{document}
Inspired by the colors of Miami’s Wynwood art district and the flavors of the world, Marine Harvest created Rebel Fish – not fish as you know it.
Rebel Fish is a new “Superfish” company that delivers salmon so fresh, it is sold in the fish section. By pairing the freshest fish with bold flavors from global cuisine, Rebel Fish wanted to change the perception of how millennials think about fish. This is where Jastor comes in.
Fish done differently. Rebel Fish is crazy-easy to prepare and off-the-hook delicious. In an ever more competitive market, our branding sets us apart.
Our team identified shifting trends in food and grocery: consumers are leaning more towards meal prep and quick, nutritious meals for the everyday rush.
To establish Rebel Fish’s straightforward uniqueness, we developed a brand that features colorful explosions of bold flavors within a modern and revealing packaging system. Each splash and texture was inspired by the nonstop hustle of our generation and our wanderlust to explore new flavors and destinations (including those found right here in our Wynwood home).
The thin lines and curves of our Logotype and typography, along with revamping the fish icon featured throughout our design, reinforce the friendliness and approachability of the brand.
'use strict';
import BaseStore from './BaseStore';
/**
* Snackbar notifies messages to screen
*/
class SnackbarStore extends BaseStore {
constructor() {
super();
}
notifyError(err) {
this.setState({ error: err });
}
notifyInfo(message) {
this.setState({ info: message });
}
setAuthFail(err) {
  // Surface authentication failures as error notifications
  // (previously called the nonexistent this.error, which would throw).
  this.notifyError(err);
}
}
export default new SnackbarStore();
\begin{document}
\title{Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees}
\doparttoc
\faketableofcontents
\begin{abstract}
We study the problem of representation learning in stochastic contextual linear bandits. While the primary concern in this domain is usually to find \textit{realizable} representations (i.e., those that allow predicting the reward function at any context-action pair exactly), it has been recently shown that representations with certain spectral properties (called \textit{HLS}) may be more effective for the exploration-exploitation task, enabling \textit{LinUCB} to achieve constant (i.e., horizon-independent) regret. In this paper, we propose \textsc{BanditSRL}, a representation learning algorithm that combines a novel constrained optimization problem to learn a realizable representation with good spectral properties with a generalized likelihood ratio test to exploit the recovered representation and avoid excessive exploration. We prove that \textsc{BanditSRL} can be paired with any no-regret algorithm and achieve constant regret whenever an \textit{HLS} representation is available. Furthermore, \textsc{BanditSRL} can be easily combined with deep neural networks and we show how regularizing towards \textit{HLS} representations is beneficial in standard benchmarks.
\end{abstract}
\section{Introduction}\label{sec:introduction}
The contextual bandit is a general framework to formalize the exploration-exploitation dilemma arising in sequential decision-making problems such as recommendation systems, online advertising, and clinical trials~\citep[e.g.,][]{bouneffouf2019survey}. When solving real-world problems, where contexts and actions are complex and high-dimensional (e.g., users' social graph, items' visual description), it is crucial to provide the bandit algorithm with a suitable representation of the context-action space. While several representation learning algorithms have been proposed in supervised learning and obtained impressive empirical results~\citep[e.g.,][]{oord2018representation,EricssonGLH22}, how to \textit{efficiently} learn representations that are effective for the exploration-exploitation problem is still a relatively open question.
The primary objective in representation learning is to find features that map the context-action space into a lower-dimensional embedding that allows fitting the reward function accurately, i.e., \textit{realizable} representations~\citep[e.g.,][]{AgarwalDKLS12,Agarwal2014taming,RiquelmeTS18,Foster2020beyond,foster2019nested,Lattimore2020good,SimchiLevi2020falcon}. Within the space of realizable representations, bandit algorithms leveraging features of smaller dimension are expected to learn faster and thus have smaller regret. Nonetheless, Papini et al.~\citep{PapiniTRLP21hlscontextual} have recently shown that, even among realizable features, certain representations are naturally better suited to solve the exploration-exploitation problem. In particular, they proved that \textsc{LinUCB}\xspace{}~\citep{ChuLRS11,Abbasi-YadkoriPS11} can achieve constant regret when provided with a ``good'' representation. Interestingly, this property is not related to ``global'' characteristics of the feature map (e.g., dimension, norms), but rather on a spectral property of the representation (the space associated to optimal actions should cover the context-action space, see \textsc{HLS}\xspace{} property in Def.~\ref{ref:hls}). This naturally raises the question whether it is possible to learn such representation at the same time as solving the contextual bandit problem. Papini et al.~\citep{PapiniTRLP21hlscontextual} provided a first positive answer with the \textsc{Leader}\xspace algorithm, which is proved to perform as well as the best realizable representation in a given set up to a logarithmic factor in the number of representations. 
While this allows constant regret when a realizable \textsc{HLS}\xspace representation is available, the algorithm suffers from two main limitations: \textbf{1)} it is entangled with \textsc{LinUCB}\xspace and it can hardly be generalized to other bandit algorithms; \textbf{2)} it learns a different representation for each context-action pair, thus making it hard to extend beyond finite representations to arbitrary functional space (e.g., deep neural networks).
In this paper, we address those limitations through \textsc{BanditSRL}\xspace{}, a novel algorithm that decouples representation learning and exploration-exploitation so as to work with any no-regret contextual bandit algorithm and to be easily extended to general representation spaces. \textsc{BanditSRL}\xspace{} combines two components: 1) a representation learning mechanism based on a constrained optimization problem that promotes ``good'' representations while preserving realizability; and 2) a generalized likelihood ratio test (GLRT) to avoid over-exploration and fully exploit the properties of ``good'' representations. The main contributions of the paper can be summarized as follows:
\begin{enumerate}[leftmargin=20pt]
\item We show that adding a GLRT on the top of any no-regret algorithm enables it to exploit the properties of a \textsc{HLS}\xspace representation and achieve constant regret. This generalizes the constant regret result for \textsc{LinUCB}\xspace in~\citep{PapiniTRLP21hlscontextual} to any no-regret algorithm.
\item Similarly, we show that \textsc{BanditSRL}\xspace{} can be paired with any no-regret algorithm and perform effective representation selection, including achieving constant regret whenever a \textsc{HLS}\xspace representation is available in a given set. This generalizes the result of \textsc{Leader}\xspace beyond \textsc{LinUCB}\xspace. In doing this we also improve the analysis of the misspecified case and prove a tighter bound on the time to converge to realizable representations. Furthermore, numerical simulations in synthetic problems confirm that \textsc{BanditSRL}\xspace{} is empirically competitive with \textsc{Leader}\xspace.
\item Finally, in contrast to \textsc{Leader}\xspace, \textsc{BanditSRL}\xspace{} can be easily scaled to complex problems where representations are encoded through deep neural networks. In particular, we show that the Lagrangian relaxation of the constrained optimization problem for representation learning becomes a regression problem with an auxiliary representation loss promoting \textsc{HLS}\xspace-like representations. We test different variants of the resulting \depalgo{} algorithm, showing how the auxiliary representation loss improves performance in a number of dataset-based benchmarks.
\end{enumerate}
\section{Preliminaries}\label{sec:preliminaries}
We consider a stochastic contextual bandit problem with context space $\mathcal{X}$ and finite action set $\mathcal{A}$. At each round $t\geq1$, the learner observes a context $x_t$ sampled i.i.d.\ from a distribution $\rho$ over $\mathcal{X}$, selects an action $a_t \in \mathcal{A}$, and receives a reward $y_t = \mu(x_t,a_t) + \eta_t$ where $\eta_t$ is a zero-mean noise and $\mu:\mathcal{X}\times\mathcal{A}\rightarrow \mathbb{R}$ is the expected reward.
The objective of a learner $\mathfrak{A}$ is to minimize its pseudo-regret $R_T := \sum_{t=1}^T \big (\mu^\star(x_t) -\mu(x_t,a_t) \big)$ for any $T \geq 1$, where $\mu^\star(x_t) := \max_{a\in\mathcal{A}} \mu(x_t,a)$. We assume that for any $x \in \mathcal{X}$ the optimal action $a^\star_x := \operatornamewithlimits{argmax}_{a\in\mathcal{A}} \mu(x,a)$ is unique and we define the gap $\Delta(x,a) := \mu^\star(x) - \mu(x,a)$. We say that $\mathfrak{A}$ is a no-regret algorithm if, for any instance of $\mu$, it achieves sublinear regret, i.e., $R_T = o(T)$.
We consider the problem of representation learning given a candidate function space $\Phi \subseteq \big\{ \phi : \mathcal{X} \times \mathcal{A} \to \mathbb{R}^{d_{\phi}}\big\}$, where the dimensionality $d_\phi$ may depend on the feature $\phi$. Let $\theta_\phi^\star = \operatornamewithlimits{argmin}_{\theta \in \mathbb{R}^{d_\phi}} \mathbb{E}_{x \sim \rho}\big[ \sum_a (\phi(x,a)^\mathsf{T} \theta - \mu(x,a))^2 \big]$ be the best linear fit of $\mu$ for representation $\phi$.
We assume that $\Phi$ contains a linearly realizable representation.
\begin{assumption}[Realizability]\label{asm:set.contains.realizable.phi}
There exists an (unknown) subset $\Phi^\star\subseteq\Phi$ such that, for each $\phi\in\Phi^\star$, $\mu(x,a) = \phi(x,a)^\mathsf{T} \theta^\star_\phi, \forall x\in\mathcal{X},a\in\mathcal{A}$.
\end{assumption}
\begin{assumption}[Regularity]\label{asm:boundedness}
Let $\mathcal{B}_\phi := \{\theta\in\mathbb{R}^{d_\phi} : \|\theta\|_2 \leq B_\phi\}$ be a ball in $\mathbb{R}^{d_\phi}$. We assume that, for each $\phi\in\Phi$, $\sup_{x,a}\|\phi(x,a)\|_2 \leq L_\phi$, $\|\theta_\phi^\star\|_2 \leq B_\phi$, $\sup_{x,a}|\phi(x,a)^\mathsf{T}\theta| \leq 1$ for any $\theta \in \mathcal{B}_\phi$ and $|y_t| \leq 1$ almost surely for all $t$. We assume parameters $L_\phi$ and $B_\phi$ are known. We also assume the minimum gap $\Delta = \inf_{x\in \mathcal{X}: \rho(x) > 0, a \in \mathcal{A}, \Delta(x,a)>0} \{\Delta(x,a)\} > 0$ and that
$\lambda_{\min} \Big(\frac{1}{|\mathcal{A}|} \sum_{a} \mathbb{E}_{x \sim \rho} [\phi(x,a)\phi(x,a)^\mathsf{T}] \Big) >0$ for any $\phi \in \Phi^\star$, i.e., all realizable representations are non-redundant.
\end{assumption}
Under Asm.~\ref{asm:set.contains.realizable.phi}, when $|\Phi|=1$, the problem reduces to a stochastic linear contextual bandit and can be solved using standard algorithms, such as \textsc{LinUCB}\xspace/\textsc{OFUL}\xspace{}~\citep{ChuLRS11,Abbasi-YadkoriPS11}, LinTS~\citep{AbeilleL17}, and $\epsilon$-greedy~\citep{lattimore2020bandit}, which enjoy sublinear regret and, in some cases, logarithmic problem-dependent regret.
Recently, Papini et al.~\citep{PapiniTRLP21hlscontextual} showed that \textsc{LinUCB}\xspace suffers only constant regret when a \emph{realizable} representation is \textsc{HLS}\xspace, i.e., when the features of optimal actions span the entire $d_\phi$-dimensional space.
\begin{definition}[\textsc{HLS}\xspace{} Representation]\label{ref:hls}
A representation $\phi$ is \textsc{HLS}\xspace{} (the acronym refers to the last names of the authors of~\citep{hao2020adaptive}) if
\begin{equation*}
\lambda^\star(\phi) := \lambda_{\min}\left( \mathbb{E}_{x \sim \rho} \left[ \phi(x, a^\star_x) \phi(x, a^\star_x)^\mathsf{T} \right] \right) > 0
\end{equation*}
where $\lambda_{\min}(A)$ denotes the minimum eigenvalue of a matrix $A$.
\end{definition}
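As a concrete numerical illustration of Def.~\ref{ref:hls} (a toy sketch with hypothetical features, not part of the paper's formal development), $\lambda^\star(\phi)$ can be estimated by averaging the outer products of optimal-action features over sampled contexts:

```python
import numpy as np

def hls_lambda_star(phi, contexts, optimal_actions):
    """Empirical estimate of lambda*(phi): the minimum eigenvalue of the
    Monte Carlo average of phi(x, a*_x) phi(x, a*_x)^T over contexts.

    phi: callable (x, a) -> feature vector of shape (d,)
    contexts: contexts sampled i.i.d. from rho
    optimal_actions: mapping context -> optimal action a*_x
    """
    feats = np.stack([phi(x, optimal_actions[x]) for x in contexts])
    design = feats.T @ feats / len(contexts)     # (d, d) design matrix estimate
    return float(np.linalg.eigvalsh(design)[0])  # eigvalsh sorts ascending

# Toy example: 2 contexts, 2 actions, d = 2; optimal-action features
# span R^2, so this representation is HLS.
phi = lambda x, a: np.eye(2)[a] * (1.0 + x)
opt = {0: 0, 1: 1}
lam = hls_lambda_star(phi, [0, 1], opt)
assert lam > 0  # HLS: lambda*(phi) is strictly positive
```

A non-\textsc{HLS}\xspace representation would instead yield a rank-deficient design matrix and an estimate of zero.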
Papini et al. showed that \textsc{HLS}\xspace, together with realizability, is a necessary and sufficient property for achieving constant regret in contextual stochastic linear bandits for non-redundant representations.
In order to deal with the general case where $\Phi$ may contain non-realizable representations, we rely on the following misspecification assumption from~\citep{PapiniTRLP21hlscontextual}.
\begin{assumption}[Misspecification]\label{asm:icml.misspecification}
For each $\phi \notin \Phi^\star$, there exists $\epsilon_\phi > 0$ such that
\begin{align*}
\min_{\theta \in \mathcal{B}_\phi} \min_{\pi : \mathcal{X} \to \mathcal{A}} \mathbb{E}_{x\sim\rho}\left[\left(\phi(x,\pi(x))^\mathsf{T} \theta - \mu(x,\pi(x))\right)^2 \right] \geq \epsilon_\phi.
\end{align*}
\end{assumption}
This assumption states that any non-realizable representation has a minimum level of misspecification on average over contexts and for any context-action policy. In the finite-context case, a sufficient condition for Asm.~\ref{asm:icml.misspecification} is that, for each $\phi\notin\Phi^\star$, there exists a context $x\in\mathcal{X}$ with $\rho(x)>0$ such that $\phi(x,a)^\mathsf{T}\theta \neq \mu(x,a)$ for all $a\in\mathcal{A}$ and $\theta\in\mathcal{B}_\phi$.
\textbf{Related work.}
Several papers have focused on contextual bandits with an arbitrary function space to estimate the reward function under realizability assumptions~\citep[e.g.,][]{AgarwalDKLS12,Agarwal2014taming,Foster2020beyond}. While these works consider a similar setting to ours, they do not aim to learn ``good'' representations, but rather focus on the exploration-exploitation problem to obtain sublinear regret guarantees. This often corresponds to recovering the maximum likelihood representation, which may not lead to the best regret.
After the work in~\citep{PapiniTRLP21hlscontextual}, the problem of representation learning with constant regret guarantees has also been studied in reinforcement learning~\citep{PapiniTPRLP21unisoftmdp,zhang2021lowrankunisoft}. As these approaches build on the ideas in~\citep{PapiniTRLP21hlscontextual}, they inherit the same limitations as~\citep{PapiniTRLP21hlscontextual}.
Another related literature is the one of expert learning and model selection in bandits~\citep[e.g.,][]{auer2002nonstochastic,maillardM11,agarwal2017corral,abbasiyadkori2020regret,pacchiano2020stochcorral,lee2020online,CutkoskyDDGPP21}, where the objective is to select the best candidate among a set of base learning algorithms or experts.
While these algorithms are general and can be applied to different settings, including representation learning with a finite set of candidates, they may not be able to effectively leverage the specific structure of the problem. Furthermore, to the best of our knowledge, these algorithms suffer a polynomial dependence on the number of base algorithms ($|\Phi|$ in our setting) and are limited to worst-case regret guarantees. Whether the $\sqrt{T}$ or $\mathrm{poly}(|\Phi|)$ dependency can be improved in general is an open question (see \citep{CutkoskyDDGPP21} and \citep[][App. A]{PapiniTRLP21hlscontextual}). Finally, \citep{foster2019nested,ghosh2021problem} studied the specific problem of model selection with nested linear representations, where the best representation is the one with the smallest dimension for which the reward is realizable.
Several works have recently focused on theoretical and practical investigation of contextual bandits with neural networks (NNs)~\citep{Zhou2020neural,xu2020neuralcb,Deshmukh2020vision}. While their focus was on leveraging the representation power of NNs to correctly predict the rewards, here we focus on learning representations with good spectral properties through a novel auxiliary loss. A related approach to ours is~\citep{Deshmukh2020vision}, where the authors leverage self-supervised auxiliary losses for representation learning in image-based bandit problems.
\section{A General Framework for Representation Learning}\label{sec:replearn.algo}
\begin{algorithm}[t]
\caption{\textsc{BanditSRL}\xspace}\label{alg:replearnin.icml.asm}
\begin{algorithmic}[1]
\STATE \textbf{Input:} representations $\Phi$, no-regret algorithm $\mathfrak{A}$, confidence $\delta \in (0,1)$, update schedule $\gamma > 1$
\STATE Initialize $j=0$, $\phi_j, \theta_{\phi_j,0}$ arbitrarily, $V_0(\phi_j) = \lambda I_{d_{\phi_j}}$, $t_j = 1$, let $\delta_j := \delta / (2(j+1)^2)$
\FOR{$t = 1, \ldots$}
\STATE Observe context $x_t$
\IF{$\mathrm{GLR}_{t-1}(x_t;\phi_j) > \beta_{t-1,\delta/|\Phi|}(\phi_{j})$}
\STATE Play $a_t = \operatornamewithlimits{argmax}_{a\in\mathcal{A}} \big\{ \phi_{j}(x_t,a)^\mathsf{T} \theta_{\phi_j,t-1} \big\}$ and observe reward $y_t$
\ELSE
\STATE Play $a_t = \mathfrak{A}_t\big(x_t;\phi_{j}, \delta_j/|\Phi|\big)$, observe reward $y_t$, and feed it into $\mathfrak{A}$
\ENDIF
\IF{$t = \lceil \gamma t_j \rceil$ \textbf{and} $|\Phi|>1$}
\STATE Set $j = j +1$ and $t_j = t$
\STATE Compute $\phi_{j} = \operatornamewithlimits{argmin}_{\phi\in\Phi_t} \big\{ \mathcal{L}_t(\phi) \big\}$ and reset $\mathfrak{A}$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
We introduce \textsc{BanditSRL}\xspace{} (\emph{Bandit Spectral Representation Learner}), an algorithm for stochastic contextual linear bandits that efficiently decouples representation learning from exploration-exploitation. As illustrated in Alg.~\ref{alg:replearnin.icml.asm}, \textsc{BanditSRL}\xspace{} has access to a fixed-representation contextual bandit algorithm $\mathfrak{A}$, the \textit{base algorithm}, and it is built around two key mechanisms: \ding{182} a constrained optimization problem where the objective is to minimize a representation loss $\mathcal{L}$ to favor representations with \textsc{HLS}\xspace{} properties, whereas the constraint ensures realizability; \ding{183} a generalized likelihood ratio test (GLRT) to ensure that, if a \textsc{HLS}\xspace{} representation is learned, the base algorithm $\mathfrak{A}$ does not over-explore and the ``good'' representation is exploited to obtain constant regret.
\textbf{Mechanism \ding{182} (line 12).} The first challenge when provided with a generic set $\Phi$ is to ensure that the algorithm does not converge to selecting misspecified representations, which may lead to linear regret. This is achieved by introducing a hard constraint in the representation optimization, so that \textsc{BanditSRL}\xspace{} only selects representations in the set (see also~\citep[][App. F]{PapiniTRLP21hlscontextual}),
\begin{equation}
\Phi_t := \left\{ \phi\in\Phi : \min_{\theta\in\mathcal{B}_\phi}E_{t}(\phi, \theta) \leq \min_{\phi'\in\Phi}\min_{\theta\in\mathcal{B}_{\phi'}} \big\{ E_{t}(\phi',\theta) + \alpha_{t,\delta}(\phi') \big\} \right\}
\end{equation}
where $E_t(\phi,\theta) := \frac{1}{t} \sum_{s=1}^t \left(\phi(x_s,a_s)^\mathsf{T} \theta - y_s\right)^2$ is the empirical mean-square error (MSE) of model $(\phi,\theta)$ and $\alpha_{t,\delta}(\phi) := \frac{40}{t}\log \left(\frac{8|\Phi|^2(12L_\phi B_\phi t)^{d_\phi}t^3}{\delta} \right)+ \frac{2}{t}$.
This condition leverages the existence of a realizable representation in $\Phi_t$ to eliminate representations whose MSE is not compatible with that of the realizable representation, once the statistical uncertainty (i.e., $\alpha_{t,\delta}(\phi)$) is accounted for.
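A minimal sketch of this elimination rule, with hypothetical MSE values and a hypothetical confidence width standing in for $\alpha_{t,\delta}(\phi)$:

```python
def eliminate(mse, alpha):
    """Keep only representations whose best-fit MSE is below the smallest
    uncertainty-padded MSE in the set -- the constraint defining Phi_t.

    mse: dict name -> empirical MSE  min_theta E_t(phi, theta)
    alpha: dict name -> confidence width alpha_{t,delta}(phi)
    """
    threshold = min(e + alpha[name] for name, e in mse.items())
    return {name for name, e in mse.items() if e <= threshold}

# Hypothetical numbers: phi1 is realizable (low MSE), phi2 is slightly
# worse but within the confidence band, phi3 is clearly misspecified.
mse = {"phi1": 0.01, "phi2": 0.04, "phi3": 0.50}
alpha = {"phi1": 0.05, "phi2": 0.05, "phi3": 0.05}
kept = eliminate(mse, alpha)
assert kept == {"phi1", "phi2"}  # the misspecified phi3 is eliminated
```

As $t$ grows, $\alpha_{t,\delta}(\phi)$ shrinks, so representations with any fixed misspecification gap $\epsilon_\phi$ are eventually filtered out.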
Subject to the realizability constraint, the representation loss $\mathcal{L}_t(\phi)$ favors learning a \textsc{HLS}\xspace{} representation (if possible).
As illustrated in Def.~\ref{ref:hls}, a \textsc{HLS}\xspace{} representation is such that the expected design matrix associated with the optimal actions has a positive minimum eigenvalue. Unfortunately, it is not possible to directly optimize for this condition, since we have access to neither the context distribution $\rho$ nor the optimal action in each context. Nonetheless, we can design a loss that works as a proxy for the \textsc{HLS}\xspace{} property whenever $\mathfrak{A}$ is a no-regret algorithm. Let $V_{t}(\phi) = \lambda I_{d_\phi} + \sum_{s=1}^{t} \phi(x_s,a_s)\phi(x_s,a_s)^\mathsf{T}$ be the empirical design matrix built on the context-action pairs observed up to time $t$; then we define $\mathcal{L}_{\mathrm{eig},t}(\phi):= -\lambda_{\min} \big( V_{t}(\phi) -\lambda I_{d_\phi} \big) / L_\phi^2$, where the normalization factor ensures invariance w.r.t.\ the feature norm.
Intuitively, the empirical distribution of contexts $(x_t)_{t\ge 1}$ converges to $\rho$ and the frequency of optimal actions selected by a no-regret algorithm increases over time, thus ensuring that $V_{t}(\phi) / t$ tends to behave as the design matrix under optimal arms $\mathbb{E}_{x\sim \rho} [\phi(x,a^\star_x)\phi(x,a^\star_x)^\mathsf{T}]$.
As discussed in Sect.~\ref{sec:exp.and.practical.algo}, alternative losses can be used to favor learning \textsc{HLS}\xspace{} representations.
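The proxy loss $\mathcal{L}_{\mathrm{eig},t}$ can be sketched numerically as follows (toy feature matrices; the second example mimics the design matrix a no-regret algorithm tends toward when the representation is \textsc{HLS}\xspace):

```python
import numpy as np

def eig_loss(features, lam=1.0, L=1.0):
    """Proxy loss L_eig,t(phi) = -lambda_min(V_t(phi) - lam*I) / L^2,
    where V_t is the regularized empirical design matrix of the
    context-action features observed so far.

    features: (t, d) matrix whose rows are phi(x_s, a_s)
    """
    d = features.shape[1]
    V = lam * np.eye(d) + features.T @ features
    return -np.linalg.eigvalsh(V - lam * np.eye(d))[0] / L**2

# Features concentrated on one direction vs. spread over the whole space:
flat = np.array([[1.0, 0.0]] * 4)                # rank-deficient design
spread = np.array([[1.0, 0.0], [0.0, 1.0]] * 2)  # full-rank design
assert eig_loss(flat) == 0.0   # lambda_min = 0 -> no reward for this phi
assert eig_loss(spread) < 0.0  # positive lambda_min -> lower (better) loss
```

Minimizing this loss thus pushes the learner toward representations whose observed features excite all $d_\phi$ directions, the empirical counterpart of the \textsc{HLS}\xspace condition.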
\textbf{Mechanism \ding{183} (line 5).}
While Papini et al.~\citep{PapiniTRLP21hlscontextual} proved that \textsc{LinUCB}\xspace is able to exploit \textsc{HLS}\xspace{} representations, other algorithms such as $\epsilon$-greedy may keep forcing exploration and do not fully take advantage of \textsc{HLS}\xspace{} properties, thus failing to achieve constant regret. In order to prevent this, we introduce a \emph{generalized likelihood ratio} test (GLRT). At each round $t$, let $\phi_{t-1}$ be the representation used at time $t$, then \textsc{BanditSRL}\xspace{} decides whether to act according to the base algorithm $\mathfrak{A}$ with representation $\phi_{t-1}$ or fully exploit the learned representation and play greedily w.r.t.\ it. Denote by $\theta_{\phi,t-1} = V_{t-1}(\phi)^{-1} \sum_{s=1}^{t-1} \phi(x_s,a_s) y_s$ the regularized least-squares parameter at time $t$ for representation $\phi$ and by $\pi^\star_{t-1}(x;\phi) = \operatornamewithlimits{argmax}_{a \in\mathcal{A}} \big\{ \phi(x,a)^\mathsf{T} \theta_{\phi,t-1} \big\}$ the associated greedy policy. Then, \textsc{BanditSRL}\xspace{} selects the greedy action $\pi^\star_{t-1}(x_t;\phi_{t-1})$ when the GLR test is active, otherwise it selects the action proposed by the base algorithm $\mathfrak{A}$.
Formally, for any $\phi\in\Phi$ and $x\in\mathcal{X}$, we define the generalized likelihood ratio as
\begin{align}\label{eq:glrt.main.paper}
\mathrm{GLR}_{t-1}(x;\phi) := \min_{a\neq \pi^\star_{t-1}(x;\phi)} \frac{\big(\phi(x,\pi^\star_{t-1}(x;\phi)) - \phi(x,a)\big)^\mathsf{T}\theta_{\phi,t-1}}{\|\phi(x, \pi^\star_{t-1}(x;\phi)) - \phi(x,a)\|_{V_{t-1}(\phi)^{-1}}}
\end{align}
and, given $\beta_{t-1,\delta}(\phi)=\sigma\sqrt{2\log(1/\delta)+d_{\phi}\log(1+(t-1)L_{\phi}^2/(\lambda d_{\phi}))} + \sqrt{\lambda}B_{\phi}$, the GLR test is $\mathrm{GLR}_{t-1}(x;\phi) > \beta_{t-1,\delta/|\Phi|}(\phi)$ \citep{hao2020adaptive,tirinzoni2020asymptotically,degenne2020gamification}.
If this happens at time $t$ and $\phi_{t-1}$ is realizable, then we have enough confidence to conclude that the greedy action is optimal, i.e., $\pi^\star_{t-1}(x_t;\phi_{t-1})=a^\star_{x_t}$.
An important aspect of this test is that it is run on the current context $x_t$ and it does not require evaluating global properties of the representation. While at any time $t$ it is possible that a non-HLS non-realizable representation may pass the test, the GLRT is sound as \textbf{1)} exploration through $\mathfrak{A}$ and the representation learning mechanism work in synergy to guarantee that \textit{eventually} a realizable representation is always provided to the GLRT; \textbf{2)} only \textsc{HLS}\xspace{} representations are guaranteed to consistently trigger the test at any context $x$.
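In code, the GLR statistic of Eq.~\eqref{eq:glrt.main.paper} amounts to the following sketch (hypothetical toy numbers for the features, estimate, and design matrix):

```python
import numpy as np

def glr_statistic(Phi_x, theta, V_inv):
    """GLR_{t-1}(x; phi): the margin of the empirical greedy action over
    every other action, normalized by the V^{-1}-norm of the feature gap.

    Phi_x: (A, d) feature matrix for context x (one row per action)
    theta: (d,) regularized least-squares estimate
    V_inv: (d, d) inverse of the design matrix V_{t-1}(phi)
    """
    values = Phi_x @ theta
    star = int(np.argmax(values))  # empirical greedy action
    stats = []
    for a in range(Phi_x.shape[0]):
        if a == star:
            continue
        diff = Phi_x[star] - Phi_x[a]
        stats.append((diff @ theta) / np.sqrt(diff @ V_inv @ diff))
    return min(stats)

# Well-separated actions and a well-conditioned design (as if ~100
# well-spread samples had been observed):
Phi_x = np.array([[1.0, 0.0], [0.0, 1.0]])
theta = np.array([1.0, 0.2])
V_inv = np.eye(2) / 100.0
glr = glr_statistic(Phi_x, theta, V_inv)
beta = 1.0  # stand-in for the threshold beta_{t-1, delta/|Phi|}
assert glr > beta  # test triggers -> play greedily at this context
```

When the statistic falls below the threshold, the margin is not statistically significant and the action is delegated to the base algorithm $\mathfrak{A}$.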
In practice, \textsc{BanditSRL}\xspace{} does not update the representation at each step but in phases.
This is necessary to avoid too frequent representation changes and control the regret, but also to make the algorithm more computationally efficient and practical.
Indeed, updating the representation may be computationally expensive in practice (e.g., retraining a NN), and a phased scheme with parameter $\gamma$ reduces the number of representation learning steps to $J \approx \lceil \log_{\gamma}(T) \rceil$.
The algorithm $\mathfrak{A}$ is reset at the beginning of a phase $j$ when the representation is selected and it is run on the samples collected during the current phase when the base algorithm is selected. If $\mathfrak{A}$ is able to leverage off-policy data, at the beginning of a phase $j$, we can warm-start it by providing $\phi_j$ and all the past data $(x_s,a_s,y_s)_{s \leq t_j}$. While the reset is necessary for dealing with \textit{any} no-regret algorithm, it can be removed for algorithms such as \textsc{LinUCB}\xspace and $\epsilon$-greedy without affecting the theoretical guarantees.
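The lazy update schedule $t_{j+1} = \lceil \gamma t_j \rceil$ described above can be sketched as:

```python
import math

def phase_starts(T, gamma=2.0):
    """Phase start times t_{j+1} = ceil(gamma * t_j) with t_0 = 1:
    the representation is re-learned only at these rounds, giving
    roughly log_gamma(T) updates over a horizon of T rounds."""
    t, starts = 1, [1]
    while math.ceil(gamma * t) <= T:
        t = math.ceil(gamma * t)
        starts.append(t)
    return starts

starts = phase_starts(10_000, gamma=2.0)
# With gamma = 2 the phases double in length: 1, 2, 4, 8, ...
assert starts[:4] == [1, 2, 4, 8]
# The number of updates grows only logarithmically in the horizon.
assert len(starts) <= math.ceil(math.log(10_000, 2)) + 1
```

The geometric growth is what introduces the extra $\log_\gamma$ factor in the regret bounds of Sect.~\ref{sec:theory.res} while keeping retraining costs low.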
\textbf{Comparison to \textsc{Leader}\xspace.} We first recall the basic structure of \textsc{Leader}\xspace. Denote by $\mathrm{UCB}_t(x, a, \phi)$ the upper-confidence bound computed by \textsc{LinUCB}\xspace{} for the context-action pair $(x,a)$ and representation $\phi$ after $t$ steps. Then \textsc{Leader}\xspace{} selects the action
$a_t \in \operatornamewithlimits{argmax}_{a\in\mathcal{A}} \min_{\phi \in \Phi_t} \mathrm{UCB}_t(x_t, a, \phi)$.
Unlike the constrained optimization problem in \textsc{BanditSRL}\xspace{}, this mechanism couples representation learning and exploration-exploitation and it requires optimizing a representation for the current $x_t$ and for each action $a$. Indeed, \textsc{Leader}\xspace does not output a single representation and possibly chooses different representations for each context-action pair.
While this enables \textsc{Leader}\xspace to mix representations and achieve constant regret in some cases even when $\Phi$ does not include any \textsc{HLS}\xspace{} representation, it leads to two major drawbacks: \textbf{1)} the representation selection is directly entangled with the \textsc{LinUCB}\xspace exploration-exploitation strategy, \textbf{2)} it is impractical in problems where $\Phi$ is an infinite functional space (e.g., a deep neural network). The mechanisms \ding{182} and \ding{183} successfully address these limitations and enable \textsc{BanditSRL}\xspace{} to be paired with any no-regret algorithm and to be scaled to any representation class as illustrated in the next section.
\subsection{Extension to Neural Networks}
We now consider a representation space $\Phi$ defined by the last layer of a NN. We denote by $\phi : \mathcal{X} \times \mathcal{A} \to \mathbb{R}^d$ the last layer and by $f(x,a) = \phi(x,a)^\mathsf{T} \theta$ the full NN, where $\theta$ are the last-layer weights.
We show how \textsc{BanditSRL}\xspace{} can be easily adapted to work with deep NNs.
\textit{First}, the GLRT requires only to have access to the current context $x_t$ and representation $\phi_{j}$, i.e., the features defined by the last layer of the current network, and its cost is linear in the number of actions. \textit{Second}, the phased scheme allows lazy updates, where we retrain the network only $\log_{\gamma}(T)$ times. \textit{Third}, we can run any bandit algorithm with a representation provided by the NN, including \textsc{LinUCB}\xspace, LinTS, and $\epsilon$-greedy. \textit{Fourth}, the representation learning step can be adapted to allow efficient optimization of a NN. We consider a regularized problem obtained through an approximation of the constrained problem:
\begin{align}\label{eq:reg.problem}
& \operatornamewithlimits{argmin}_{\phi} \left\{ \mathcal{L}_t(\phi) - c_{\mathrm{reg}} \left( \min_{\phi',\theta'} \big\{ E_{t}(\phi',\theta') + \alpha_{t,\delta}(\phi') \big\}- \min_{\theta}E_{t}(\phi, \theta) \right) \right\}\nonumber\\
& = \operatornamewithlimits{argmin}_{\phi} \min_{\theta} \left\{ \mathcal{L}_t(\phi) + c_{\mathrm{reg}}\, E_{t}(\phi, \theta) \right\}.
\end{align}
where $c_{\mathrm{reg}} \geq 0$ is a tunable parameter. Treating $c_{\mathrm{reg}}$ as a constant allows us to drop the terms that depend on neither $\phi$ nor $\theta$.
This leads to a convenient regularized loss that aims to minimize the MSE (second term) while enforcing some spectral property on the last layer of the NN (first term). In practice, we can optimize this loss by stochastic gradient descent over a \textit{replay buffer} containing the samples observed over time. The resulting algorithm, called \depalgo{}, is a direct and elegant generalization of the theoretically grounded algorithm.
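A numpy sketch of this regularized objective on toy data (the deep variant minimizes the same quantity by SGD over the NN weights; numbers are hypothetical):

```python
import numpy as np

def srl_loss(features, targets, theta, c_reg=1.0, L=1.0):
    """Lagrangian-relaxed objective of Eq. (reg.problem):
    L_eig(phi) + c_reg * E_t(phi, theta), i.e., the eigenvalue auxiliary
    loss on the last-layer features plus the regression MSE.

    features: (t, d) last-layer features phi(x_s, a_s)
    targets: (t,) observed rewards y_s
    theta: (d,) last-layer weights
    """
    eig = -np.linalg.eigvalsh(features.T @ features)[0] / L**2
    mse = float(np.mean((features @ theta - targets) ** 2))
    return eig + c_reg * mse

# Toy last-layer features and a perfect linear fit:
features = np.eye(2)
targets = np.array([1.0, 1.0])
theta = np.array([1.0, 1.0])
loss = srl_loss(features, targets, theta)
assert abs(loss + 1.0) < 1e-9  # zero MSE, lambda_min = 1 -> loss = -1
```

Larger $c_{\mathrm{reg}}$ weighs realizability (the MSE) more heavily, recovering plain regression as $c_{\mathrm{reg}} \to \infty$.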
While in theory we can optimize the regularized loss~\eqref{eq:reg.problem} with all the samples, in practice it is important to better control the sample distribution. As the algorithm progresses, we expect the replay buffer to contain an increasing number of samples obtained by optimal actions, which may lead the representation to solely fit optimal actions while increasing misspecification on suboptimal actions. This may compromise the behavior of the algorithm and ultimately lead to high regret. This is an instance of \emph{catastrophic forgetting} induced by a biased/shifting sample distribution~\citep[e.g.,][]{Goodfellow2013forgetting}.
To prevent this phenomenon, we store two replay buffers: \textit{i)} an explorative buffer $\mathcal{D}_{\mathfrak{A},t}$ with samples obtained when $\mathfrak{A}$ was selected; \textit{ii)} an exploitative buffer $\mathcal{D}_{\mathrm{glrt},t}$ with samples obtained when GLRT triggered and greedy actions were selected.
The explorative buffer $\mathcal{D}_{\mathfrak{A},t}$ is used to compute the MSE $E_t(\phi,\theta)$. While this reduces the number of samples, it improves the robustness of the algorithm by promoting realizability. On the other hand, we use all the samples $\mathcal{D}_t = \mathcal{D}_{\mathfrak{A},t} \cup \mathcal{D}_{\mathrm{glrt},t}$ for the representation loss $\mathcal{L}(\phi)$. This is consistent with the intuition that mechanism \ding{182} works when the design matrix $V_t$ drifts towards the design matrix of optimal actions, which is at the core of the \textsc{HLS}\xspace property.
Refer to App.~\ref{app:algo.variations} for a more detailed description of \depalgo{}.
\section{Theoretical Guarantees}\label{sec:theory.res}
In this section, we provide a complete characterization of the theoretical guarantees of \textsc{BanditSRL}\xspace when $\Phi$ is a finite set of representations, i.e., $|\Phi|<\infty$. We consider the update scheme with $\gamma=2$.
\subsection{Constant Regret Bound for \textsc{HLS}\xspace Representations}
We first study the case where a realizable \textsc{HLS}\xspace{} representation is available. For the characterization of the behavior of the algorithm, we need to introduce the following times:
\begin{itemize}[leftmargin=20pt]
\item $\tau_{\mathrm{elim}}$: an upper-bound to the time at which all non-realizable representations are eliminated, i.e., for all $t \geq \tau_{\mathrm{elim}}$, $\Phi_t = \Phi^\star$;
\item $\tau_{\textsc{HLS}\xspace}$: an upper-bound to the time (if it exists) after which the \textsc{HLS}\xspace{} representation is selected, i.e., $\phi_t = \phi^\star$ for all $t\geq \tau_{\textsc{HLS}\xspace}$, where $\phi^\star \in \Phi^\star$ is the unique \textsc{HLS}\xspace{} realizable representation;
\item $\tau_{\mathrm{glrt}}$: an upper-bound to the time (if it exists) such that the GLR test triggers for the \textsc{HLS}\xspace{} representation $\phi^\star$ for all $t \geq \tau_{\mathrm{glrt}}$.
\end{itemize}
We begin by deriving a constant problem-dependent regret bound for \textsc{BanditSRL}\xspace{} with \textsc{HLS}\xspace{} representations. The proof and explicit values of the constants are reported in App.~\ref{app:analysis}.\footnote{While Thm.~\ref{th:icmlams.regret.lambda_min.hls} provides high-probability guarantees, we can easily derive a constant expected-regret bound by running \textsc{BanditSRL}\xspace{} with a decreasing schedule for $\delta$ and with a slightly different proof.}
\begin{theorem}\label{th:icmlams.regret.lambda_min.hls}
Let $\mathfrak{A}$ be any no-regret algorithm for stochastic contextual linear bandits, $\Phi$ satisfy Asm.~\ref{asm:set.contains.realizable.phi}--\ref{asm:icml.misspecification}, $|\Phi| < \infty$, $\gamma=2$, and $\mathcal{L}_t(\phi)= \mathcal{L}_{\mathrm{eig},t}(\phi) :=-\lambda_{\min}(V_t(\phi) - \lambda I_{d_{\phi}})/L_{\phi}^2$. Moreover, let $\Phi^\star$ contain a unique \textsc{HLS}\xspace{} representation $\phi^\star$. Then, for any $\delta \in (0,1)$ and $T\in\mathbb{N}$, the regret of \textsc{BanditSRL}\xspace{} is bounded, with probability at least $1-4\delta$, as\footnote{We denote by $a \wedge b$ (resp. $a\vee b$) the minimum (resp. the maximum) between $a$ and $b$.}
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}\big( (\tau_{\mathrm{opt}} - \tau_{\mathrm{elim}}) \wedge T, \phi, \delta_{\log_2(\tau_{\mathrm{opt}} \wedge T)}/|\Phi|\big) \log_{2}(\tau_{\mathrm{opt}} \wedge T),
\end{align*}
where $\delta_j := \delta / (2(j+1)^2)$ and
\begin{align}\label{eq:tau.opt.main}
\tau_{\mathrm{opt}} = \tau_{\mathrm{glrt}} \vee \tau_{\textsc{HLS}\xspace} \vee \tau_{\mathrm{elim}} \lesssim \tau_{\mathrm{alg}} + \frac{L_{\phi^\star}^2\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)} \left( \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)} + \frac{d_{\phi^\star}}{\Delta^2} + \frac{d}{(\min_{\phi\notin\Phi^\star}\epsilon_\phi)\Delta} \right),
\end{align}
with $\tau_{\mathrm{alg}}$ a \emph{finite} constant (independent of the horizon $T$) depending on algorithm $\mathfrak{A}$ (see Tab.~\ref{tab:meta.regret.bounds}) and $\wb{R}_{\mathfrak{A}}(\tau, \phi, \delta)$ an anytime bound (non-decreasing in $\tau$ and $1/\delta$) on the regret accumulated over $\tau$ steps by $\mathfrak{A}$ using representation $\phi$ and confidence level $\delta$.
\end{theorem}
The key finding of the previous result is that \textsc{BanditSRL}\xspace{} achieves constant regret whenever a realizable \textsc{HLS}\xspace{} representation is available in the set $\Phi$, which may contain non-realizable as well as realizable non-\textsc{HLS}\xspace{} representations. The regret bound above also illustrates the ``dynamics'' of the algorithm and three main regimes. In the early stages, non-realizable representations may be included in $\Phi_t$, which may lead to suffering linear regret until time $\tau_{\mathrm{elim}}$ when the constraint in the representation learning step filters out all non-realizable representations (first term in the regret bound). At this point, \textsc{BanditSRL}\xspace{} leverages the loss $\mathcal{L}$ to favor \textsc{HLS}\xspace representations and the base algorithm $\mathfrak{A}$ to perform effective exploration-exploitation. This leads to the second term in the bound, which corresponds to an upper-bound to the sum of the regrets of $\mathfrak{A}$ in each phase in between $\tau_{\mathrm{elim}}$ and $\tau_{\mathrm{glrt}} \vee \tau_{\textsc{HLS}\xspace}$, which is roughly $\sum_{j_{\tau_{\mathrm{elim}}} < j < j_{\tau_\mathrm{opt}}} \wb{R}_{\mathfrak{A}}(t_{j+1}-t_j, \phi_j) \leq \max_{\phi\in\Phi^\star}\wb{R}_{\mathfrak{A}}(\tau_{\mathrm{opt}} - \tau_{\mathrm{elim}}, \phi) \log_2(\tau_{\mathrm{opt}})$.
In this second regime, in some phases the algorithm may still select non-\textsc{HLS}\xspace{} representations, which leads to a worst-case bound over all realizable representations in $\Phi^\star$. Finally, after $\tau_{\mathrm{glrt}} \vee \tau_{\textsc{HLS}\xspace}$ the GLRT consistently triggers over time. During this last regime, \textsc{BanditSRL}\xspace{} has reached enough accuracy and confidence so that the greedy policy of the \textsc{HLS}\xspace representation is indeed optimal and no additional regret is incurred.
We notice that the only dependence on the number of representations $|\Phi|$ in Thm.~\ref{th:icmlams.regret.lambda_min.hls} is due to the rescaling of the confidence level $\delta \mapsto \delta/|\Phi|$. Since standard algorithms have a logarithmic dependence on $1/\delta$, this only leads to a logarithmic dependence on $|\Phi|$.
On the other hand, due to the resets, \textsc{BanditSRL}\xspace{} has an extra logarithmic factor in the effective regret horizon $\tau_{\mathrm{opt}}$.
\textbf{Single \textsc{HLS}\xspace{} representation.} A noteworthy consequence of Thm.~\ref{th:icmlams.regret.lambda_min.hls} is that any no-regret algorithm equipped with GLRT achieves constant regret when provided with a realizable \textsc{HLS}\xspace{} representation.
\begin{corollary}\label{cor:single-repr}
Let $\Phi = \Phi^\star = \{\phi^\star\}$ with $\phi^\star$ \textsc{HLS}\xspace. Then, $\tau_{\mathrm{elim}} = \tau_{\textsc{HLS}\xspace} =0$ and, with probability at least $1-4\delta$, \textsc{BanditSRL}\xspace{} suffers constant regret:
$R_T \leq \wb{R}_{\mathfrak{A}}(\tau_{\mathrm{glrt}} \wedge T, \phi^\star, \delta)$.
\end{corollary}
This corollary also illustrates that the performance of $\mathfrak{A}$ is not affected when $\phi^\star$ is non-\textsc{HLS}\xspace (i.e., $\tau_{\mathrm{glrt}} = \infty$), as \textsc{BanditSRL}\xspace{} achieves the same regret as the base algorithm. Note that there is no additional logarithmic factor in this case, since we do not need any reset for representation learning.
\subsection{Additional Results}
\textbf{No \textsc{HLS}\xspace representation.}
A consequence of Thm.~\ref{th:icmlams.regret.lambda_min.hls} is that when $|\Phi|>1$ but no realizable \textsc{HLS}\xspace{} exists ($\tau_{\mathrm{glrt}}=\infty$), \textsc{BanditSRL}\xspace{} still enjoys a sublinear regret.
\begin{corollary}[Regret bound without \textsc{HLS}\xspace{} representation]\label{th:icmlams.regret.lambda_min.nohls}
Consider the same setting as in Thm.~\ref{th:icmlams.regret.lambda_min.hls} and assume that $\Phi^\star$ does not contain any \textsc{HLS}\xspace{} representation. Then, for any $\delta \in (0,1)$ and $T\in\mathbb{N}$, the regret of \textsc{BanditSRL}\xspace{} is bounded, with probability at least $1-4\delta$, as follows:
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta_{\log_2(T)}/|\Phi|) \log_{2}(T).
\end{align*}
\end{corollary}
This shows that the regret of \textsc{BanditSRL}\xspace{} is of the same order as the base no-regret algorithm $\mathfrak{A}$ when running with the worst realizable representation.
While such worst-case dependency is undesirable, it is common to many representation learning algorithms, both in bandits and reinforcement learning~\citep[e.g.][]{AgarwalDKLS12,zhang2022repblockmdp}.\footnote{Notice that the worst-representation dependency is often hidden in the definition of $\Phi$, which is assumed to contain features with fixed dimension and bounded norm, i.e., $\Phi = \{\phi:\mathcal{X} \times \mathcal{A} \to \mathbb{R}^d, \sup_{x,a}\|\phi(x,a)\|_2 \leq L\}$. As $d$ and $B$ are often the only representation-dependent terms in the regret bound $\wb{R}_{\mathfrak{A}}$, no worst-representation dependency is reported.} In App.~\ref{app:algo.variations}, we show that an alternative representation loss could address this problem and lead to a bound scaling with the regret of the \textit{best} realizable representation ($R_T \leq 2\tau_{\mathrm{elim}} + \min_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta/|\Phi|) \log_{2}(T)$), while preserving the guarantees for the HLS case. Since the representation loss requires an upper-bound on the number of suboptimal actions and a carefully tuned schedule for guessing the gap $\Delta$, it is less practical than the smallest eigenvalue, which we use as the basis for our practical version of \textsc{BanditSRL}\xspace{}.
\begin{table}
\centering \small
\begin{tabular}{ccc}
\hline
Algorithm & $\wb{R}_{\mathfrak{A}}(T,\phi, \delta/|\Phi|)$ & $\tau_{\mathrm{alg}}$ \\
\hline
\textsc{LinUCB}\xspace & $d_\phi^2\log(|\Phi|T/\delta)^2/\Delta$ & $\frac{L_{\phi^\star}^2 d^2 \log(|\Phi|/\delta)^2}{\lambda^\star(\phi^\star)\Delta^2}$ \\
$\epsilon$-greedy with $\epsilon_t = t^{-1/3}$ & $\sqrt{d_\phi |\mathcal{A}|} \log(|\Phi|/\delta) T^{2/3}$ & $\frac{L_{\phi^\star}^6 (d|\mathcal{A}|)^{3/2} L^3 \log(|\Phi|/\delta)^3}{\lambda^\star(\phi^\star)^3\Delta^3}$\\
\hline
\end{tabular}
\caption{\small Specific regret bounds when using \textsc{LinUCB}\xspace or $\epsilon$-greedy as base algorithms. We omit numerical constants and logarithmic factors.}
\label{tab:meta.regret.bounds}
\end{table}
\textbf{Algorithm-dependent instances and comparison to \textsc{Leader}\xspace.}
Table~\ref{tab:meta.regret.bounds} reports the regret bound of \textsc{BanditSRL}\xspace{} for different base algorithms. These results make explicit the dependence on the number of representations $|\Phi|$ and show that the cost of representation learning is only logarithmic.
In the specific case of \textsc{LinUCB}\xspace for \textsc{HLS}\xspace representations, we highlight that the upper bound on the time $\tau_{\mathrm{opt}}$ in Thm.~\ref{th:icmlams.regret.lambda_min.hls} improves over the result of \textsc{Leader}\xspace. While \textsc{Leader}\xspace has no explicit concept of $\tau_{\mathrm{alg}}$, a term with the same dependence as $\tau_{\mathrm{alg}}$ in Tab.~\ref{tab:meta.regret.bounds} also appears in the \textsc{Leader}\xspace analysis. This term encodes an upper bound on the number of pulls of suboptimal actions and depends on the \textsc{LinUCB}\xspace strategy. As a result, the first three terms in Eq.~\ref{eq:tau.opt.main} are equivalent to those of \textsc{Leader}\xspace.
The improvement comes from the last term ($\tau_{\mathrm{elim}}$), where, thanks to a refined analysis of the elimination condition, we are able to improve the dependence on the inverse minimum misspecification ($1/\min_{\phi\notin\Phi^\star} \epsilon_{\phi}$) from quadratic to linear (see App.~\ref{app:analysis} for a detailed comparison).
On the other hand, \textsc{BanditSRL}\xspace{} suffers the worst regret among realizable representations, whereas \textsc{Leader}\xspace scales with the \textit{best} representation. As discussed above, this mismatch can be mitigated by a different choice of representation loss. In the case of $\epsilon$-greedy, the $T^{2/3}$ regret upper bound induces a worse $\tau_{\mathrm{alg}}$ due to a larger number of suboptimal pulls. This in turn translates into higher regret before reaching the constant-regret regime.
Finally, \textsc{Leader}\xspace is still guaranteed to achieve constant regret by selecting different representations at different context-action pairs whenever non-\textsc{HLS}\xspace representations satisfy a certain mixing condition~\citep[cf.][Sec. 5.2]{PapiniTRLP21hlscontextual}. This result is not possible with \textsc{BanditSRL}\xspace{}, where one representation is selected in each phase. At the same time, it is the single-representation structure of \textsc{BanditSRL}\xspace{} that allows us to accommodate different base algorithms and scale it to any representation space.
\section{Experiments}\label{sec:exp.and.practical.algo}
We provide an empirical validation of \textsc{BanditSRL}\xspace{} both in synthetic contextual linear bandit problems and in non-linear contextual problems~\citep[see e.g.,][]{RiquelmeTS18,Zhou2020neural}.
\textbf{Linear Benchmarks.}
We first evaluate \textsc{BanditSRL}\xspace{} on synthetic linear problems to empirically validate our theoretical findings. In particular, we test \textsc{BanditSRL}\xspace{} with different base algorithms and representation learning losses and compare it with \textsc{Leader}\xspace.\footnote{We do not report the performance of model selection algorithms. An extensive analysis can be found in~\citep{PapiniTRLP21hlscontextual}, where the authors show that \textsc{Leader}\xspace outperforms all the baselines.}
We consider the ``varying dimension'' problem introduced in~\citep{PapiniTRLP21hlscontextual}, which consists of six realizable representations with dimension from $2$ to $6$. Of the two representations of dimension $d = 6$, one is \textsc{HLS}\xspace. In addition, seven misspecified representations are available. Details are provided in App.~\ref{app:experiments}. We consider \textsc{LinUCB}\xspace and $\epsilon$-greedy as base algorithms with their theoretical parameters, but we warm-start with all past data whenever a new representation is selected. Similarly, for \textsc{BanditSRL}\xspace{} we use the theoretical parameters ($\gamma=2$) and $\mathcal{L}_t(\phi) := \mathcal{L}_{\mathrm{eig},t}(\phi)$.
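As a minimal sketch of how a min-eigenvalue criterion can drive representation selection (here we assume, purely as an illustration, that $\mathcal{L}_{\mathrm{eig},t}(\phi)$ is the negative smallest eigenvalue of the regularized, time-normalized design matrix; the helper names `eig_loss` and `select_representation` are our own):

```python
import numpy as np

def eig_loss(phi_feats, lam=1.0):
    """Illustrative min-eigenvalue loss: the negative smallest eigenvalue
    of the (regularized, time-normalized) design matrix built from the
    observed features phi(x_s, a_s). Minimizing it favours representations
    whose observed features span all directions, as the HLS property requires."""
    t, d = phi_feats.shape
    V = phi_feats.T @ phi_feats / t + (lam / t) * np.eye(d)
    return -np.linalg.eigvalsh(V)[0]  # eigvalsh returns eigenvalues in ascending order

def select_representation(reps):
    """Among candidate representations (name -> (t, d) feature matrix
    of the pulled arms), pick the one minimizing the loss."""
    return min(reps, key=lambda name: eig_loss(reps[name]))
```

A rank-deficient (non-\textsc{HLS}\xspace) representation has a near-zero smallest eigenvalue, so its loss is close to zero and it is never selected over a well-conditioned one.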
Fig.~\ref{fig:vardim} shows that, as expected, \textsc{BanditSRL}\xspace{} with both base algorithms is able to achieve constant regret when a \textsc{HLS}\xspace{} representation exists. As expected from the theoretical analysis, $\epsilon$-greedy leads to a higher regret than \textsc{LinUCB}\xspace.
Furthermore, \textsc{BanditSRL}\xspace{} with \textsc{LinUCB}\xspace{} empirically obtains a performance comparable to that of \textsc{Leader}\xspace{}, both with and without a realizable \textsc{HLS}\xspace{} representation. Note that when no \textsc{HLS}\xspace{} representation exists, the regret of \textsc{BanditSRL}\xspace{} with $\epsilon$-greedy is $T^{2/3}$, while \textsc{LinUCB}\xspace-based algorithms are able to achieve $\log(T)$ regret. When $\Phi$ contains misspecified representations (Fig.~\ref{fig:vardim}(center-left)), we observe that in the first regime $[1,\tau_{\mathrm{elim}}]$ the algorithm suffers linear regret; afterwards, it enters the regime of the base algorithm ($[\tau_{\mathrm{elim}},\tau_{\mathrm{glrt}}\vee \tau_{\mathrm{\textsc{HLS}\xspace}}]$), up to the point where the GLRT triggers and only optimal actions are selected.
\textit{Weak HLS.} Papini et al.~\citep{PapiniTRLP21hlscontextual} showed that when realizable representations are redundant (i.e., $\lambda^\star(\phi^\star) = 0$), it is still possible to achieve constant regret if the representation is ``weakly''-\textsc{HLS}\xspace, i.e., the features of the optimal actions span the features $\phi(x,a)$ associated to any context-action pair, but not necessarily $\mathbb{R}^{d_\phi}$. To test this case, we pad a 5-dimensional vector of ones to all the features of the six realizable representations in the previous experiment. To deal with the weak-\textsc{HLS}\xspace{} condition, we introduce the alternative representation loss $\mathcal{L}_{\mathrm{weak},t}(\phi) = -\min_{s\leq t} \big\{\phi(x_s,a_s)^\mathsf{T} (V_t(\phi) - \lambda I_{d_{\phi}}) \phi(x_s,a_s) / L_{\phi}^2 \big\}$. Since $V_t(\phi) - \lambda I_{d_{\phi}}$ tends to behave as $\mathbb{E}_{x}[\phi^\star(x)\phi^\star(x)^\mathsf{T}]$, this loss encourages representations where all the observed features are spanned by the optimal arms, thus promoting weak-\textsc{HLS}\xspace{} representations (see App.~\ref{app:algo.variations} for more details). As expected, Fig.~\ref{fig:vardim}(right) shows that the min-eigenvalue loss $\mathcal{L}_{\mathrm{eig},t}$ fails to identify the correct representation in this domain. On the other hand, \textsc{BanditSRL}\xspace{} with the novel loss is able to achieve constant regret (we cut the figure for readability) and behaves like \textsc{Leader}\xspace{} when using \textsc{LinUCB}\xspace.
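The weak-\textsc{HLS}\xspace{} loss above can be computed directly from the observed features; a minimal numpy sketch (the function name and array layout are our own choices):

```python
import numpy as np

def weak_hls_loss(phi_feats, V_t, lam, L_phi):
    """Weak-HLS representation loss
        L_weak,t(phi) = -min_{s<=t} phi_s^T (V_t(phi) - lam*I) phi_s / L_phi^2,
    where phi_feats is the (t, d) array of observed features phi(x_s, a_s)
    and V_t the regularized design matrix of the representation."""
    d = V_t.shape[0]
    M = V_t - lam * np.eye(d)
    # quadratic form phi_s^T M phi_s for every observed feature vector
    quad = np.einsum('td,de,te->t', phi_feats, M, phi_feats)
    return -quad.min() / L_phi**2
```

A representation minimizing this loss keeps every observed feature well inside the span of the directions accumulated in the design matrix, rather than requiring a large minimum eigenvalue over all of $\mathbb{R}^{d_\phi}$.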
\begin{figure}
\caption{\small
Varying dimension experiment with all realizable representations (left), misspecified representations (center-left), realizable non-\textsc{HLS}\xspace{} representations (center-right), and the weak-\textsc{HLS}\xspace{} experiment (right).}
\label{fig:vardim}
\end{figure}
\begin{figure}
\caption{\small Average cumulative regret (over $20$ runs) in non-linear domains.}
\label{fig:dataset}
\end{figure}
\textbf{Non-Linear Benchmarks.}
We study the performance of \textsc{BanditSRL}\xspace{} in classical benchmarks where non-linear representations are required.
We only consider the weak-\textsc{HLS}\xspace{} loss $\mathcal{L}_{\mathrm{weak},t}(\phi)$ as it is more general than full \textsc{HLS}\xspace{}. As base algorithms we consider $\epsilon$-greedy and inverse gap weighting (IGW) with $\epsilon_t = t^{-1/3}$, and \textsc{LinUCB}\xspace and \textsc{LinTS} with theoretical parameters. These algorithms are run on the representation $\phi_j$ provided by the NN at each phase $j$.
We compare \textsc{BanditSRL}\xspace{} against the base algorithms using the maximum-likelihood representation (i.e., Neural-($\epsilon$-greedy, \textsc{LinTS})~\citep{RiquelmeTS18} and Neural-\textsc{LinUCB}\xspace~\citep{xu2020neuralcb}), supervised learning with the IGW strategy~\citep[e.g.,][]{Foster2020beyond,SimchiLevi2020falcon}, and NeuralUCB~\citep{Zhou2020neural}.\footnote{For ease of comparison, all the algorithms use the same phased schema for fitting the reward and recomputing the parameters. NeuralUCB uses a diagonal approximation of the design matrix.}
See App.~\ref{app:algo.variations}-\ref{app:experiments} for details.
In all the problems\footnote{The dataset-based problems --statlog, magic, covertype, mushroom~\citep{Blackard1998cover,Bock2004telescope,schlimmer1987concept,Dua:2019}-- are obtained from the standard multiclass-to-bandit conversion~\citep{RiquelmeTS18,Zhou2020neural}. See App.~\ref{app:experiments} for details.} the reward function is highly non-linear w.r.t.\ contexts and actions, and we use a network composed of layers of dimension $[50,50,50,50,10]$ with ReLU activations to learn the representation (i.e., $d=10$).
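For concreteness, the standard multiclass-to-bandit conversion mentioned in the footnote can be sketched as follows (the class and method names are our own):

```python
import numpy as np

class MulticlassBandit:
    """Standard multiclass-to-bandit conversion: at each round the context
    is the feature vector of a randomly drawn sample, the arms are the
    classes, and the reward of pulling arm a when the true label is y
    is 1 if a == y and 0 otherwise (only bandit feedback is revealed)."""
    def __init__(self, X, y, seed=0):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.rng = np.random.default_rng(seed)
        self._i = None
    def observe(self):
        self._i = self.rng.integers(len(self.X))
        return self.X[self._i]                   # context for this round
    def pull(self, action):
        return float(action == self.y[self._i])  # reward of the pulled arm
```

The learner never observes the true label, only the binary reward of the arm it pulled, which is exactly the contextual bandit feedback model.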
Fig.~\ref{fig:dataset} shows that representation learning improves the performance of all the base algorithms ($\epsilon$-\textsc{greedy}, \textsc{IGW}, \textsc{LinUCB}\xspace, \textsc{LinTS}), which outperform their maximum-likelihood counterparts.
This provides evidence that \textsc{BanditSRL}\xspace{} is effective even beyond the theoretical scenario.
For the baseline algorithms (\textsc{NeuralUCB}, \textsc{IGW}) we report the regret of the best configuration on each individual dataset, while for \textsc{BanditSRL}\xspace{} we fix the parameters across datasets (i.e., $\alpha_{\mathrm{GLRT}}=5$). While this comparison clearly favours the baselines, it also shows that \textsc{BanditSRL}\xspace{} is a robust algorithm that performs on par with or better than the state-of-the-art algorithms. In particular, \textsc{BanditSRL}\xspace{} uses theoretical parameters while the baselines use tuned configurations. Optimizing the parameters of \textsc{BanditSRL}\xspace{} is outside the scope of these experiments.
\section{Conclusion}
We proposed a novel algorithm, \textsc{BanditSRL}\xspace{}, for representation selection in stochastic contextual linear bandits. \textsc{BanditSRL}\xspace{} combines a mechanism for representation learning that aims to recover representations with good spectral properties, with a generalized likelihood ratio test to exploit the recovered representation. We proved that, thanks to these mechanisms, \textsc{BanditSRL}\xspace{} is not only able to achieve sublinear regret with any no-regret algorithm $\mathfrak{A}$ but, when a \textsc{HLS}\xspace{} representation exists, it is able to achieve constant regret. We demonstrated that \textsc{BanditSRL}\xspace{} can be implemented using NNs and showed its effectiveness in standard benchmarks.
A direction for future investigation is to extend the approach to a weaker misspecification assumption than Asm.~\ref{asm:icml.misspecification}. Another direction is to leverage the technical and algorithmic tools introduced in this paper for representation learning in reinforcement learning, e.g., in low-rank problems~\citep[e.g.][]{Agarwal2020flambe}.
\end{ack}
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{}
\item Did you include complete proofs of all theoretical results?
\answerYes{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNo{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerNo{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNo{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\appendix
\part{Appendix}
\parttoc
\section{Notation}
\begin{table*}[h]
\centering
\begin{small}
\begin{tabular}{@{}ll@{}}
\toprule
Symbol & Meaning \\
\cmidrule{1-2}
$\mathcal{X}$ & Set of contexts\\
$\mathcal{A}$ & Finite set of arms\\
$\rho$ & Context distribution\\
$\mu : \mathcal{X} \times \mathcal{A} \rightarrow \mathbb{R}$ & Mean-reward function\\
$\Phi$ & Set of representations\\
$\Phi^\star$ & Subset of realizable representations\\
$\pi: \mathcal{X} \rightarrow \mathcal{A}$ & A policy\\
$\mathcal{F}_t$ & $\sigma$-algebra generated by $(x_1,a_1,y_1,\dots, x_{t},a_{t},y_{t})$ \\
$\mathfrak{A}_t : \mathcal{X} \rightarrow \mathcal{A}$ & Bandit algorithm (measurable mappings w.r.t. $\mathcal{F}_{t-1}$) \\
$ V_t(\phi) := \sum_{k=1}^{t} \phi(x_k,a_k) \phi(x_k,a_k)^\mathsf{T} + \lambda I_{d_{\phi}}$ & Design matrix for representation $\phi$\\
$\theta_{\phi,t} = V_t(\phi)^{-1} \sum_{k=1}^{t} \phi(x_k,a_k) r_k$ & Regularized least-square estimate for representation $\phi$\\
$\pi^\star_t(x;\phi) := \operatornamewithlimits{argmax}_{a\in\mathcal{A}} \phi(x,a)^\mathsf{T}\theta_{\phi,t}$ & Empirical optimal arm for context $x$ and representation $\phi$\\
$\Delta(x,a) = \max_{a'\in\mathcal{A}}\mu(x,a') - \mu(x,a)$ & Sub-optimality gap of arm $a$ in context $x$\\
$a^\star_x$ & Optimal arm for context $x$\\
$\pi^\star(x) = \operatornamewithlimits{argmax}_{a\in\mathcal{A}}\mu(x,a)$ & Optimal policy\\
$\lambda^\star(\phi) := \EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}]$ & Minimum eigenvalue on optimal arms\\
$E_t(\phi,\theta) := \frac{1}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} \theta - y_k\right)^2$ & Mean square error of model $(\phi,\theta)$ at time $t$\\
$\mathbb{E}_t$ and $\mathbb{V}_t$ & Expectation and variance conditioned on $\mathcal{F}_{t-1}$\\
$P_t(\phi,\theta) := \sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta - \mu(x_k,a_k)\right)^2\right]$ & Sum of mean prediction errors of model $(\phi,\theta)$\\
$\alpha_{t,\delta}(\phi) := \frac{40}{t}\log\frac{8|\Phi|^2(12L_\phi B_\phi t)^{d_\phi}t^3}{\delta} + \frac{2}{t}$ & Threshold for MSE elimination\\
$D_t(\phi) := 160 d_\phi \log (12L_\phi B_\phi t)$ & Dimension factor for representation $\phi$\\
$R_T := \sum_{t=1}^T \Delta(x_t,a_t)$ & Pseudo-regret\\
$t_j := 2^j$ & Time at which the $(j+1)$-th phase ends (with $t_0 := 0$)\\
$N_{j}(T) := \sum_{t=t_j+1}^{T} \indi{G_t}$ & Number of calls to $\mathfrak{A}$ in phase $j$ up to time $T\leq t_{j+1}$\\
$G_t := \{ \mathrm{GLR_{t-1}(x_t;\phi_{t-1})} \leq \beta_{t-1,\delta/|\Phi|}(\phi_{t-1}) \}$ & Event under which the GLRT does not trigger at time $t$\\
$S_T := \sum_{t=1}^T \indi{a_t \neq \pi^\star(x_t)}$ & Total number of sub-optimal pulls at time $T$\\
$\wb{R}_{\mathfrak{A}}(T, \phi, \delta)$ & Regret bound of algorithm $\mathfrak{A}$ over $T$ steps when using $\phi$\\
$g_T(\Phi,\Delta, \delta)$ & Bound on the sub-optimal pulls of $\mathfrak{A}$ (see Th. \ref{lem:suboptimal-pulls-strong-missp})\\
$\delta_j := \delta / (2 (j+1)^2)$ & Confidence level for the base algorithm\\
\bottomrule
\end{tabular}
\end{small}
\caption{The notation adopted in this paper.}
\label{tab:notation}
\end{table*}
\section{Analysis of \textsc{BanditSRL}\xspace}\label{app:analysis}
\subsection{Assumptions}
The analysis works under the assumptions stated in Section \ref{sec:preliminaries} and for any no-regret base algorithm $\mathfrak{A}$. Here we formally state the conditions required on the base algorithm.
\begin{assumption}[No-regret algorithm]\label{asm:no-regret-algo}
For any $\phi\in\Phi^\star$ and $\delta\in(0,1)$, if we run algorithm $\mathfrak{A}$ with representation $\phi$ and confidence $\delta$, with probability at least $1-\delta$ we have, for any $T\in\mathbb{N}$,
\begin{align*}
\sum_{t=1}^T \Delta(x_t,\mathfrak{A}_t(x_t; \phi,\delta)) \leq \wb{R}_{\mathfrak{A}}(T, \phi, \delta),
\end{align*}
where $\mathfrak{A}_t(x; \phi,\delta)$ denotes the policy played by $\mathfrak{A}$ at time $t$ when instantiated with representation $\phi$ and confidence $\delta$, while the function $\wb{R}_{\mathfrak{A}}(T, \phi, \delta)$ is sub-linear and non-decreasing in $T$ and logarithmic and non-decreasing in $1/\delta$.
\end{assumption}
\subsection{Controlling the MSE}
The following is an extension of Lemma 4.1 in \citep{AgarwalDKLS12} and Lemma 20 in \citep{PapiniTRLP21hlscontextual}. Differently from their results, which relate the empirical MSE of any model $(\phi,\theta)$ with that of a realizable model, we also include the sum of conditional mean prediction errors $P_t(\phi,\theta) := \sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta - \mu(x_k,a_k)\right)^2\right]$, which roughly quantifies the misspecification of model $(\phi,\theta)$. This shall be crucial for improving the elimination times of misspecified representations later.
\begin{lemma}\label{lemma:mse-single}
Let $\phi\in\Phi,\theta\in\mathbb{R}^{d_\phi}$. Take any realizable representation $\phi^\star\in\Phi^\star$ and let $\theta^\star := \theta^\star_{\phi^\star}$. Then, for each $t\geq 1$ and $\delta \in (0,1)$,
\begin{align}
\mathbb{P}\left( E_t(\phi^\star, \theta^\star) > E_t(\phi,\theta) + \frac{40}{t}\log\frac{4t}{\delta} - \frac{P_t(\phi,\theta)}{2t}\right) \leq \delta.
\end{align}
\end{lemma}
\begin{proof}
Define $Z_k := (\phi(x_k,a_k)^\mathsf{T}\theta - y_k)^2 - (\phi^\star(x_k,a_k)^\mathsf{T}\theta^\star - y_k)^2$. Note that, since $|\phi(x_k,a_k)^\mathsf{T}\theta| \leq 1$, $|\phi^\star(x_k,a_k)^\mathsf{T}\theta^\star| \leq 1$, and $|y_k|\leq 1$, we have $|Z_k|\leq 4$. Thus, $(\mathbb{E}_k[Z_k] - Z_k)_{k\geq 1}$ is a martingale difference sequence bounded by $8$ in absolute value. Then, using Freedman's inequality (Lemma \ref{lemma:freedman}), with probability at least $1-\delta$, for any $t$,
\begin{align*}
\sum_{k=1}^t \mathbb{E}_k[Z_k] - \sum_{k=1}^t Z_k \leq 2\sqrt{\sum_{k=1}^t \mathbb{V}_k[Z_k]\log\frac{4t}{\delta}} + 32\log \frac{4t}{\delta}.
\end{align*}
Using Lemma 4.2 in \citep{AgarwalDKLS12}, we have that $\mathbb{V}_k[Z_k] \leq 4 \mathbb{E}_k[Z_k]$. Solving the resulting quadratic inequality for $\sum_{k=1}^t \mathbb{E}_k[Z_k]$ and using $(x+y)^2 \leq 2x^2+2y^2$,
\begin{align*}
\sum_{k=1}^t \mathbb{E}_k[Z_k] \leq \left( 2\sqrt{\log\frac{4t}{\delta}} + \sqrt{36\log\frac{4t}{\delta} + \sum_{k=1}^t Z_k} \right)^2 \leq 80\log\frac{4t}{\delta} + 2\sum_{k=1}^t Z_k.
\end{align*}
The proof is concluded by using $\sum_{k=1}^t Z_k = t(E_t(\phi,\theta) - E_t(\phi^\star, \theta^\star))$ and $\sum_{k=1}^t \mathbb{E}_k[Z_k] = P_t(\phi,\theta)$.
\end{proof}
\begin{lemma}\label{lemma:mse-multi}
For each $\delta \in (0,1)$,
\begin{align*}
\mathbb{P}\left(\exists t\geq 1,\phi\in\Phi, \phi^\star\in\Phi^\star, \theta\in\mathcal{B}_\phi : E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,\theta) - \frac{P_t(\phi,\theta)}{4t} + \alpha_{t,\delta}(\phi) \right) \leq \delta.
\end{align*}
\end{lemma}
\begin{proof}
We shall use a covering argument for each representation $\phi\in\Phi$. First note that, for any $\xi >0$, there always exists a finite set $\mathcal{C}_\phi \subset \mathbb{R}^{d_\phi}$ of size at most $(3B_\phi/\xi)^{d_\phi}$ such that, for each $\theta \in \mathcal{B}_\phi$, there exists ${\theta'}\in\mathcal{C}_\phi$ with $\|\theta-{\theta'}\|_2 \leq \xi$ (see e.g. Lemma 20.1 in \citep{lattimore2020bandit}). Moreover, suppose that all vectors in $\mathcal{C}_\phi$ have $\ell_2$-norm bounded by $B_\phi$ (otherwise we can always remove vectors with large norm). Now take any two vectors $\theta,{\theta'}\in\mathcal{B}_\phi$ with $\|\theta-{\theta'}\|_2 \leq \xi$. We have
\begin{align*}
E_t(\phi,\theta) &= \frac{1}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} \theta \pm \phi(x_k,a_k)^\mathsf{T} {\theta}' - y_k\right)^2
\\ &= \frac{1}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} (\theta - {\theta}')\right)^2 + \frac{1}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} {\theta}' - y_k\right)^2
\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad + \frac{2}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} (\theta - {\theta}')\right)\left(\phi(x_k,a_k)^\mathsf{T} {\theta}' - y_k\right)
\\ &\geq E_t(\phi,{\theta}') + \frac{2}{t} \sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T} (\theta - {\theta}')\right)\underbrace{\left(\phi(x_k,a_k)^\mathsf{T} {\theta}' - y_k\right)}_{|\cdot|\leq 2}
\\ &\geq E_t(\phi,{\theta}') - \frac{4}{t} \sum_{k=1}^t \|\phi(x_k,a_k)\|_{2} \|\theta - {\theta}'\|_2 \geq E_t(\phi,{\theta}') - 4L_\phi\xi.
\end{align*}
Similarly, one can prove that
\begin{align*}
P_t(\phi,\theta) &= \sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta - \mu(x_k,a_k)\right)^2\right]
\\ &\leq 2\sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta - \phi(x_k,a_k)^\mathsf{T} \theta'\right)^2\right] + 2\sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta' - \mu(x_k,a_k)\right)^2\right]
\\ &\leq 2P_t(\phi,\theta') + 2\sum_{k=1}^t \mathbb{E}_k\left[\|\phi(x_k,a_k)\|_{2}^2\right] \|\theta - {\theta}'\|_2^2 \leq 2P_t(\phi,\theta') + 2L_\phi^2 \xi^2t.
\end{align*}
Let us define a sequence of deterministic covers $(\mathcal{C}_{\phi,t})_{t\geq 1}$ such that $\mathcal{C}_{\phi,t}$ is a $\xi_t$-cover with $\xi_t = \frac{1}{4L_\phi t}$. Let $\delta'_t = \frac{\delta}{2|\Phi|^2(12L_\phi B_\phi t)^{d_\phi}}$ and note that $\alpha_{t,\delta}(\phi) := \frac{40}{t}\log\frac{4t^3}{\delta_t'} + \frac{2}{t}$. Then,
\begin{align*}
&\mathbb{P}\left(\exists t\geq 1, \phi\in\Phi, \phi^\star\in\Phi^\star, \theta\in\mathcal{B}_\phi : E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,\theta) - \frac{P_t(\phi,\theta)}{4t} + \frac{40}{t}\log\frac{4t^3}{\delta'_t} + \frac{2}{t}\right)
\\ &\leq \sum_{t=1}^\infty\sum_{\phi\in\Phi}\sum_{\phi^\star\in\Phi^\star}\mathbb{P}\left(\exists \theta\in\mathcal{B}_\phi : E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,\theta) - \frac{P_t(\phi,\theta)}{4t} + \frac{40}{t}\log\frac{4t^3}{\delta'_t} + \frac{2}{t}\right)
\\ &\leq \sum_{t=1}^\infty\sum_{\phi\in\Phi}\sum_{\phi^\star\in\Phi^\star}\mathbb{P}\left(\exists {\theta}'\in\mathcal{C}_{\phi,t} : E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,{\theta'}) - \frac{1}{t} - \frac{2P_t(\phi,\theta') + 1 / (8t)}{4t} + \frac{40}{t}\log\frac{4t^3}{\delta'_t} + \frac{2}{t}\right)
\\ &\leq \sum_{t=1}^\infty\sum_{\phi\in\Phi}\sum_{\phi^\star\in\Phi^\star}\sum_{{\theta}'\in\mathcal{C}_{\phi,t}}\mathbb{P}\left( E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,{\theta}') - \frac{P_t(\phi,\theta')}{2t} + \frac{40}{t}\log\frac{4t^3}{\delta'_t}\right)
\\ &\leq \sum_{t=1}^\infty\sum_{\phi\in\Phi}\sum_{\phi^\star\in\Phi^\star}\sum_{{\theta}'\in\mathcal{C}_{\phi,t}} \frac{\delta'_t}{t^2}
\leq |\Phi|^2\sum_{t=1}^\infty \frac{\delta'_t}{t^2}(12L_\phi B_\phi t)^{d_\phi} \leq \delta.
\end{align*}
Here the first inequality is from the union bound, the second one follows by relating $\theta$ with its closest vector in the cover as above, the third one is from another union bound, the fourth one uses Lemma \ref{lemma:mse-single}, the fifth one is from the maximum size of the cover, and the last one uses the definition of $\mathrm{d}lta'_t$.
\end{proof}
\begin{corollary}\label{cor:mse-multi}
For each $\delta \in (0,1)$,
\begin{align*}
\mathbb{P}\left(\exists t\geq 1,\phi\in\Phi, \phi^\star\in\Phi^\star, \theta\in\mathcal{B}_\phi : E_t(\phi^\star, \theta^\star_{\phi^\star}) > E_t(\phi,\theta) + \alpha_{t,\delta}(\phi)\right) \leq \delta.
\end{align*}
\end{corollary}
\begin{proof}
This is immediate from Lemma \ref{lemma:mse-multi} since $P_t(\phi,\theta) \geq 0$.
\end{proof}
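To make the elimination constraint underlying Corollary~\ref{cor:mse-multi} concrete, here is a simplified sketch in which $\theta$ is fit by unconstrained least squares rather than restricted to the ball $\mathcal{B}_\phi$ (the function name and this simplification are our own):

```python
import numpy as np

def mse_elimination(reps, rewards, alphas):
    """Keep representation phi active iff its best empirical MSE,
        min_theta E_t(phi, theta),
    is below the smallest penalized MSE over all representations,
        min_{phi'} ( min_theta E_t(phi', theta) + alpha_t(phi') ).
    reps: name -> (t, d_phi) feature matrix of the pulled arms;
    rewards: length-t array of observed rewards;
    alphas: name -> elimination threshold alpha_t(phi).
    Simplification: theta is unconstrained instead of lying in B_phi."""
    mse = {}
    for name, F in reps.items():
        theta, *_ = np.linalg.lstsq(F, rewards, rcond=None)
        mse[name] = float(np.mean((F @ theta - rewards) ** 2))
    threshold = min(mse[n] + alphas[n] for n in reps)
    return {n for n in reps if mse[n] <= threshold}
```

By Lemma~\ref{lemma:mse-correct}, realizable representations always pass this test under the good event, while a sufficiently misspecified representation eventually exceeds the threshold and is eliminated.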
\subsection{Decomposition into phases}\label{app:phase-decomp}
For $j \geq 1$, let $t_j = 2^j$ be the time at which the $(j+1)$-th phase ends (i.e., when the algorithm selects a new representation for the $(j+1)$-th time). Let $t_0 = 0$. Note that, on the interval $[t_j+1, t_{j+1}]$, the algorithm uses a fixed representation $\phi_{j}$ selected at time $t_j$. In the remainder, we shall overload the notation used in the main paper and denote all quantities with a time subscript. Therefore, for $t\in [t_j+1, t_{j+1}]$, $\phi_{t-1} = \phi_{t_j}$ denotes the representation used at time $t$, i.e., $\phi_j$.
Recall that $G_t$ denotes the event under which the GLRT does not trigger at round $t$ (i.e., the base algorithm is called). Then, for each $j \geq 0$, the quantity
\begin{align*}
\sum_{t=t_j+1}^{t_{j+1}} \indi{G_t} \Delta(x_t,a_t)
\end{align*}
denotes the regret suffered by the base algorithm in phase $j$.
\subsection{Good events}
We define the following events
\begin{align*}
\mathcal{E}_1 &= \left\{\forall t\in\mathbb{N},\phi\in\Phi^\star : \| {\theta}_{\phi,t} - \theta^\star_\phi \|_{V_{t}(\phi)} \leq \beta_{t,\delta/|\Phi|}(\phi) \right\}, \\
\mathcal{E}_2 &= \Big\{ \forall t\in\mathbb{N},\phi\in\Phi : V_{t}(\phi) \succeq t\EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}]
\\ & \qquad\qquad\qquad\qquad\qquad\qquad + \left( \lambda - L_\phi^2 S_t - 8L_\phi^2\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \right) I_{d_\phi}\Big\},\\
\mathcal{E}_3 &= \Big\{ \forall t\in\mathbb{N},\phi\in\Phi : V_{t}(\phi) \preceq t\EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}]
\\ & \qquad\qquad\qquad\qquad\qquad\qquad + \left( \lambda + L_\phi^2 S_t + 8L_\phi^2\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \right) I_{d_\phi}\Big\},\\
\mathcal{E}_4 &= \left\{\forall t\in\mathbb{N},\phi\in\Phi, \phi^\star\in\Phi^\star, \theta\in\mathcal{B}_\phi : E_t(\phi^\star, \theta^\star_{\phi^\star}) \leq E_t(\phi,\theta) - \frac{P_t(\phi,\theta)}{4t} + \alpha_{t,\delta}(\phi) \right\},\\
\mathcal{E}_5 &= \left\{\forall j\in\mathbb{N}, T\leq t_{j+1} : \sum_{t=t_j+1}^{T} \indi{G_t} \Delta(x_t,a_t) \leq \wb{R}_{\mathfrak{A}}\big( N_j(T), \phi_{t_j}, \delta_j/|\Phi| \big)\right\}.
\end{align*}
We define the good event $\mathcal{E} := \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4 \cap \mathcal{E}_5$.
\begin{lemma}[Good event]\label{lem:good-event-proba}
We have $\mathbb{P}(\mathcal{E}) \geq 1 - 4\delta$.
\end{lemma}
\begin{proof}
By using Theorem 2 in \citep{Abbasi-YadkoriPS11} together with a union bound over $\Phi$, $\mathbb{P}(\mathcal{E}_1) \geq 1-\delta$. Similarly, by Lemma \ref{lem:bound-design}, $\mathbb{P}(\mathcal{E}_2 \cap \mathcal{E}_3) \geq 1-\delta$. Event $\mathcal{E}_4$ holds with probability at least $1-\delta$ by Lemma \ref{lemma:mse-multi}.
We finally bound the probability of $\mathcal{E}_5$ failing. We have
\begin{align*}
\mathbb{P}(\neg \mathcal{E}_5) \leq \sum_{j\in\mathbb{N}} \mathbb{P}\left\{ \exists T\leq t_{j+1} : \sum_{t=t_j+1}^{T} \indi{G_t} \Delta(x_t,a_t) > \wb{R}_{\mathfrak{A}}\big( N_j(T), \phi_{t_j}, \delta_j/|\Phi| \big)\right\} \leq \sum_{j\in\mathbb{N}} \delta_j \leq \delta,
\end{align*}
where the first inequality is from a union bound over $j$, the second holds from the anytime no-regret assumption (Assumption \ref{asm:no-regret-algo}) together with a union bound over $\Phi$, while the last one holds by definition of $\delta_j$. A union bound over the 5 events proves the statement.
\end{proof}
\begin{lemma}[Correctness of MSE elimination]\label{lemma:mse-correct}
Under event $\mathcal{E}$, for each $t\geq 1$ any realizable representation $\phi^\star\in\Phi^\star$ satisfies the constraint, i.e., $\phi^\star\in\Phi_t$.
\end{lemma}
\begin{proof}
Under $\mathcal{E}_4$,
\begin{align*}
\min_{\theta\in\mathcal{B}_{\phi^\star}}E_t(\phi^\star, \theta) \leq E_t(\phi^\star, \theta^\star_{\phi^\star}) \leq \min_{\phi\in\Phi}\min_{\theta\in\mathcal{B}_\phi} \left(E_t(\phi,\theta) + \alpha_{t,\mathrm{d}lta}(\phi) \right).
\end{align*}
This implies the statement.
\end{proof}
\subsection{Generalized Likelihood Ratio Test}
For any $\phi\in\Phi$ and $x\in\mathcal{X}$, let us define the \emph{generalized likelihood ratio} as
\begin{align*}
\mathrm{GLR}_t(x;\phi) := \min_{a\neq \pi^\star_t(x;\phi)} \frac{\big(\phi(x,\pi^\star_t(x;\phi)) - \phi(x,a)\big)^\mathsf{T}\theta_{\phi,t}}{\|\phi(x, \pi^\star_t(x;\phi)) - \phi(x,a)\|_{V_{t}(\phi)^{-1}}}.
\end{align*}
It is known \citep[e.g.,][]{hao2020adaptive,tirinzoni2020asymptotically} that
\begin{align*}
\mathrm{GLR}_t(x;\phi) = \inf_{\theta \in \Lambda_t(x;\phi)} \| {\theta}_{\phi,t} - \theta \|_{V_{t}(\phi)},
\end{align*}
where $\Lambda_t(x;\phi) := \{\theta\in\mathbb{R}^{d_\phi} \mid \exists a \neq \pi^\star_t(x;\phi) : \phi(x,a)^\mathsf{T}\theta > \phi(x,\pi^\star_t(x;\phi))^\mathsf{T}\theta\}$ is the set of parameters for which the optimal arm in context $x$ is different from the one of $\theta_{\phi,t}$. In turn, the squared objective above is equivalent to
\begin{align*}
\frac{1}{2}\| {\theta}_{\phi,t} - \theta \|_{V_{t}(\phi)}^2 = \frac{1}{2}\sum_{k=1}^t \left(\phi(x_k,a_k)^\mathsf{T}\theta_{\phi,t} - \phi(x_k,a_k)^\mathsf{T}\theta\right)^2,
\end{align*}
which is equal to the expected (under the conditional reward distribution) log-likelihood ratio between the observations in the bandit model given by $(\phi,\theta_{\phi,t})$ and the one given by $(\phi,\theta)$ if these were Gaussians with unit variance. This is the reason why $\mathrm{GLR}_t(x;\phi)$ is called the generalized likelihood ratio between the bandit model $(\phi,\theta_{\phi,t})$ and \emph{any} other bandit model with a different optimal arm in context $x$. The generalized likelihood ratio test (GLRT) consists in checking whether
\begin{align*}
\mathrm{GLR_{t}(x;\phi)} > \beta_{t,\delta}(\phi).
\end{align*}
When this happens, we have enough confidence to conclude that $\theta^\star_\phi \notin \Lambda_t(x;\phi)$, i.e., that $\pi^\star(x) = \pi^\star_t(x;\phi)$.
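As an illustrative aside (not part of the analysis), the closed-form expression for $\mathrm{GLR}_t(x;\phi)$ can be checked numerically: the infimum of $\|\theta_{\phi,t}-\theta\|_{V_t(\phi)}$ over the halfspace where an arm $a$ overtakes the empirical best arm is attained at the $V$-norm projection of $\theta_{\phi,t}$ onto the hyperplane where the two arms are tied. The following Python sketch verifies this in two dimensions; the matrix $V$, the estimate $\theta_{\phi,t}$, and the feature difference are arbitrary made-up numbers.

```python
import math

# Illustrative 2-D check: the V-norm distance from theta_hat to the
# hyperplane {theta : d^T theta = 0} (the boundary of the region where a
# suboptimal arm overtakes the empirical best arm) equals
# (d^T theta_hat) / ||d||_{V^{-1}}.  All numbers below are made up.

V = [[2.0, 0.5], [0.5, 1.0]]     # design matrix V_t(phi), symmetric PSD
theta_hat = [1.0, 0.2]           # regularized least-squares estimate
d = [0.6, -0.3]                  # phi(x, a_star) - phi(x, a), with d^T theta_hat > 0

det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
V_inv = [[V[1][1] / det, -V[0][1] / det],
         [-V[1][0] / det, V[0][0] / det]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Closed form: (d^T theta_hat) / ||d||_{V^{-1}}
gap = dot(d, theta_hat)
Vinv_d = mat_vec(V_inv, d)
d_norm_Vinv_sq = dot(d, Vinv_d)
glr_closed_form = gap / math.sqrt(d_norm_Vinv_sq)

# V-norm projection of theta_hat onto {theta : d^T theta = 0}
scale = gap / d_norm_Vinv_sq
theta_proj = [theta_hat[0] - scale * Vinv_d[0], theta_hat[1] - scale * Vinv_d[1]]

diff = [theta_hat[0] - theta_proj[0], theta_hat[1] - theta_proj[1]]
dist_V = math.sqrt(dot(diff, mat_vec(V, diff)))

assert abs(dot(d, theta_proj)) < 1e-12        # projection lies on the boundary
assert abs(dist_V - glr_closed_form) < 1e-12  # both expressions agree
```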
\textsc{BanditSRL}\xspace computes, at each step, the GLRT for the currently selected representation. We can easily prove that the test is \emph{correct} under the good event $\mathcal{E}$ if the selected representation is realizable.
\begin{lemma}[Correctness of GLRT]\label{lem:glrt-correct-multi}
Under the good event $\mathcal{E}$, for any time $t$, if $\mathrm{GLR}_{t-1}(x_t;\phi_{t-1}) > \beta_{t-1,\delta/|\Phi|}(\phi_{t-1})$ and $\phi_{t-1}\in\Phi^\star$, then $\pi^\star(x_t) = \pi^\star_{t-1}(x_t;\phi_{t-1})$.
\end{lemma}
\begin{proof}
By contradiction, suppose that the statement does not hold. This means that there exists a time $t$, realizable feature $\phi\in\Phi^\star$, and context $x$ such that $\pi^\star(x) \neq \pi^\star_t(x;\phi)$ while the test triggers for context $x$ and feature $\phi$. By definition, this implies that $\theta^\star_\phi \in \Lambda_t(x;\phi)$ since $\pi^\star$ is the greedy policy for the (realizable) model $(\phi,\theta^\star_\phi)$. Thus,
\begin{align*}
\beta_{t,\delta/|\Phi|}(\phi) < \mathrm{GLR}_t(x;\phi) = \inf_{\theta \in \Lambda_t(x;\phi)} \| {\theta}_{\phi,t} - \theta \|_{V_{t}(\phi)} \leq \| {\theta}_{\phi,t} - \theta^\star_\phi \|_{V_{t}(\phi)} \leq \beta_{t,\delta/|\Phi|}(\phi),
\end{align*}
where the last inequality is from event $\mathcal{E}_1$. This is clearly a contradiction.
\end{proof}
\subsection{Eliminating misspecified representations}
\begin{lemma}\label{lem:active-missp}
Let $\phi\in\Phi$ be any misspecified representation (i.e., $\phi\notin\Phi^\star$). Under event $\mathcal{E}$, if $\phi\in\Phi_t$ for some $t$, then
\begin{align*}
\min_{\theta\in\mathcal{B}_\phi} P_t(\phi,\theta) \leq D_t(\phi) + \min_{\phi^\star\in\Phi^\star} D_t(\phi^\star) + 328\log\frac{8|\Phi|^2t^3}{\delta},
\end{align*}
where $D_t(\phi) := 160 d_\phi \log (12L_\phi B_\phi t)$.
\end{lemma}
\begin{proof}
Recall that, from Lemma \ref{lemma:mse-correct}, under $\mathcal{E}$, any $\phi^\star\in\Phi^\star$ is always in $\Phi_t$. Take any arbitrary $\phi^\star\in\Phi^\star$ and let $\theta^\star := \theta^\star_{\phi^\star}$. Then, by definition of $\Phi_t$,
\begin{align*}
\min_{\theta\in\mathcal{B}_\phi}E_{t}(\phi, \theta) \leq \min_{\phi'\in\Phi}\min_{\theta\in\mathcal{B}_{\phi'}} \big\{ E_{t}(\phi',\theta) + \alpha_{t,\delta}(\phi') \big\} \leq E_{t}(\phi^\star,\theta^\star) + \alpha_{t,\delta}(\phi^\star).
\end{align*}
Similarly, under $\mathcal{E}_4$ we have that
\begin{align*}
E_t(\phi^\star, \theta^\star) \leq \min_{\theta\in\mathcal{B}_\phi} \left( E_t(\phi,\theta) - \frac{P_t(\phi,\theta)}{4t} \right) + \alpha_{t,\delta}(\phi) \leq \min_{\theta\in\mathcal{B}_\phi} E_t(\phi,\theta) - \frac{\min_{\theta\in\mathcal{B}_\phi} P_t(\phi,\theta)}{4t} + \alpha_{t,\delta}(\phi).
\end{align*}
Combining these two inequalities, we find that
\begin{align*}
\frac{\min_{\theta\in\mathcal{B}_\phi} P_t(\phi,\theta)}{4t} \leq \alpha_{t,\delta}(\phi) + \alpha_{t,\delta}(\phi^\star).
\end{align*}
Expanding the definition of $\alpha$, rearranging, and optimizing over $\phi^\star$,
\begin{align*}
\min_{\theta\in\mathcal{B}_\phi} P_t(\phi,\theta) \leq D_t(\phi) + \min_{\phi^\star\in\Phi^\star} D_t(\phi^\star) + 320\log\frac{8|\Phi|^2t^3}{\delta} + 16.
\end{align*}
The proof is concluded by noting that $\log\frac{8|\Phi|^2t^3}{\delta} \geq 2$ and, thus, $16 \leq 8 \log\frac{8|\Phi|^2t^3}{\delta}$.
\end{proof}
\begin{lemma}[Elimination]\label{lem:elim-strong-missp}
Under event $\mathcal{E}$, we have $\Phi_t = \Phi^\star$ for all $t\geq \tau_{\mathrm{elim}}$, where
\begin{align*}
\tau_{\mathrm{elim}} := \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \max_{\phi\notin\Phi^\star}\frac{1}{\epsilon_\phi}\left( D_t(\phi) + \min_{\phi^\star\in\Phi^\star} D_t(\phi^\star) + 328\log\frac{8|\Phi|^2t^3}{\delta} \right) \right\}.
\end{align*}
Let $\tau_{\mathrm{elim}} = 0$ when $\Phi = \Phi^\star$.
\end{lemma}
\begin{proof}
Let $\pi_k$ be the policy played by the algorithm at round $k$. First note that,
\begin{align*}
\min_{\theta\in\mathcal{B}_\phi} P_t(\phi,\theta)
&= \min_{\theta\in\mathcal{B}_\phi} \sum_{k=1}^t \mathbb{E}_k\left[\left(\phi(x_k,a_k)^\mathsf{T} \theta - \mu(x_k,a_k)\right)^2\right]
\\ &= \min_{\theta\in\mathcal{B}_\phi} \sum_{k=1}^t \mathbb{E}_{x\sim\rho}\left[\left(\phi(x,\pi_k(x))^\mathsf{T} \theta - \mu(x,\pi_k(x))\right)^2\right]
\\ &\geq \min_{\theta\in\mathcal{B}_\phi} \sum_{k=1}^t \min_\pi\mathbb{E}_{x\sim\rho}\left[\left(\phi(x,\pi(x))^\mathsf{T} \theta - \mu(x,\pi(x))\right)^2\right]
\\ &= t \min_{\theta\in\mathcal{B}_\phi}\min_\pi\mathbb{E}_{x\sim\rho}\left[\left(\phi(x,\pi(x))^\mathsf{T} \theta - \mu(x,\pi(x))\right)^2\right] \geq t \epsilon_\phi.
\end{align*}
Then, under $\mathcal{E}$, from Lemma \ref{lem:active-missp}, if $\phi\in\Phi_t$ and $\phi\notin\Phi^\star$,
\begin{align*}
t \leq \frac{1}{\epsilon_\phi}\left( D_t(\phi) + \min_{\phi^\star\in\Phi^\star} D_t(\phi^\star) + 328\log\frac{8|\Phi|^2t^3}{\delta} \right).
\end{align*}
The result follows by finding the first time $t$ at which a representation update is performed (i.e., $t=2^j$ for some $j$) and the condition above is violated for all $\phi\notin\Phi^\star$.
\end{proof}
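To make the definition of $\tau_{\mathrm{elim}}$ concrete, the following Python sketch scans doubling times $t=2^j$ until the elimination condition of Lemma \ref{lem:active-missp} is violated by every misspecified representation. All problem constants below ($|\Phi|$, $\delta$, the dimensions, the norm bounds, and the misspecification levels $\epsilon_\phi$) are made-up placeholders for illustration only.

```python
import math

# Sketch: first doubling time 2^j at which every misspecified representation
# violates the inequality of Lemma (active-missp), i.e., tau_elim.
# All problem constants below are made up for illustration.

PHI_SIZE = 10          # |Phi|
DELTA = 0.05           # confidence delta
D_STAR = 5             # dimension of a realizable representation phi_star
L, B = 1.0, 1.0        # feature/parameter norm bounds (shared for simplicity)
MISSPECIFIED = [       # (d_phi, epsilon_phi) for each phi not in Phi_star
    (8, 0.10),
    (12, 0.25),
]

def D(t, d):
    # D_t(phi) = 160 d_phi log(12 L_phi B_phi t)
    return 160 * d * math.log(12 * L * B * t)

def threshold(t, d, eps):
    # RHS of the elimination condition for a misspecified phi
    return (D(t, d) + D(t, D_STAR)
            + 328 * math.log(8 * PHI_SIZE**2 * t**3 / DELTA)) / eps

def tau_elim():
    j = 1
    while True:
        t = 2 ** j
        if all(t > threshold(t, d, eps) for d, eps in MISSPECIFIED):
            return t
        j += 1

tau = tau_elim()  # a power of two, as required by the phased schedule
```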
\subsection{Regret bound without HLS representations}
We first prove a general regret bound that holds for any realizable problem (in the sense of Assumption \ref{asm:set.contains.realizable.phi}) without requiring the presence of HLS representations.
\begin{theorem}\label{th:regret-strong-missp-nohls}
Under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), for any $T\in\mathbb{N}$, the regret of Algorithm \ref{alg:replearnin.icml.asm} with $\gamma=2$ and arbitrary loss $\mathcal{L}_t(\phi)$ can be bounded as
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T - \tau_{\mathrm{elim}}, \phi, \delta_{\log_2(T)}/|\Phi|) \log_2(T),
\end{align*}
where $\tau_{\mathrm{elim}}$ is defined in Lemma \ref{lem:elim-strong-missp}.
\end{theorem}
\begin{proof}
Let $\bar{j}$ be such that $\tau_{\mathrm{elim}} = 2^{\bar{j}}$ (which exists by definition). Using the decomposition into phases of Appendix \ref{app:phase-decomp},
\begin{align*}
R_T := \sum_{t=1}^T \Delta(x_t,a_t) &= \sum_{j=0}^{\bar{j}-1} \sum_{t=t_j+1}^{t_{j+1} \wedge T} \Delta(x_t,a_t) + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \sum_{t=t_j+1}^{t_{j+1} \wedge T} \Delta(x_t,a_t)
\\ &\leq 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \sum_{t=t_j+1}^{t_{j+1} \wedge T} \Delta(x_t,a_t),
\end{align*}
where the inequality holds by definition of $\bar{j}$ and because the rewards are bounded in $[-1,1]$ (so that each gap is at most $2$). It only remains to bound the regret on phases after $\bar{j}$. By Lemma \ref{lem:elim-strong-missp}, we have $\phi_t\in\Phi^\star$ at all times in such phases.
Let $G_t := \{ \mathrm{GLR}_{t-1}(x_t;\phi_{t-1}) \leq \beta_{t-1,\delta/|\Phi|}(\phi_{t-1}) \}$ be the event under which the GLRT does not trigger at time $t$. For any $j \geq \bar{j}$,
\begin{align*}
\sum_{t=t_j+1}^{t_{j+1} \wedge T} \Delta(x_t,a_t) = \sum_{t=t_j+1}^{t_{j+1} \wedge T} \indi{G_t} \Delta(x_t,a_t) + \sum_{t=t_j+1}^{t_{j+1} \wedge T} \indi{\neg G_t} \Delta(x_t,a_t) = \sum_{t=t_j+1}^{t_{j+1} \wedge T} \indi{G_t} \Delta(x_t,a_t),
\end{align*}
where the last equality holds since, under $\mathcal{E}$, if $G_t$ does not hold, then the GLRT triggers, $a_t = \pi^\star_{t-1}(x_t;\phi_{t-1})$, and $\pi^\star_{t-1}(x_t;\phi_{t-1}) = \pi^\star(x_t)$ by Lemma \ref{lem:glrt-correct-multi}. Let $N_j(t) := \sum_{k=t_j+1}^{t} \indi{G_k}$ be the total number of times the base algorithm $\mathfrak{A}$ is called in phase $j$ up to time $t$. By event $\mathcal{E}_5$, the regret of $\mathfrak{A}$ on such steps is bounded as
\begin{align*}
\sum_{t=t_j+1}^{t_{j+1} \wedge T} \indi{G_t} \Delta(x_t,a_t) \leq \wb{R}_{\mathfrak{A}}(N_j(t_{j+1} \wedge T), \phi_{t_j}, \delta_j/|\Phi|).
\end{align*}
Note that, for all $j\geq\bar{j}$, $N_j(t_{j+1} \wedge T) \leq t_{j+1} \wedge T - t_j \leq T - t_{\bar{j}} = T - \tau_{\mathrm{elim}}$. Moreover, the number of phases is at most $\log_2(T)$. Therefore, by the fact that $\wb{R}_{\mathfrak{A}}(\cdot, \phi, \cdot)$ is non-decreasing in the first and third argument,
\begin{align*}
R_T &\leq 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \wb{R}_{\mathfrak{A}}(T - \tau_{\mathrm{elim}}, \phi_{t_j}, \delta_{\log_2(T)}/|\Phi|)
\\ &\leq 2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star}\wb{R}_{\mathfrak{A}}(T - \tau_{\mathrm{elim}}, \phi, \delta_{\log_2(T)}/|\Phi|)\log_2(T).
\end{align*}
\end{proof}
\begin{lemma}[Bound on sub-optimal pulls]\label{lem:suboptimal-pulls-strong-missp}
Under the same conditions as Theorem \ref{th:regret-strong-missp-nohls}, under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), for any $T\in\mathbb{N}$,
\begin{align*}
S_T = \sum_{t=1}^T \indi{a_t \neq \pi^\star(x_t)} \leq \frac{2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta_{\log_2(T)}/|\Phi|) \log_2(T)}{\Delta} =: g_T(\Phi,\Delta, \delta).
\end{align*}
\end{lemma}
\begin{proof}
Note that, since the minimum gap is at least $\Delta$, the event $\{a_t \neq \pi^\star(x_t)\}$ implies that $\Delta(x_t,a_t) \geq \Delta$. Then,
\begin{align*}
S_T \leq \sum_{t=1}^T \indi{\Delta(x_t,a_t) \geq \Delta} &\leq \sum_{t=1}^T \frac{\Delta(x_t,a_t)}{\Delta}
\\ &\leq \frac{2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta_{\log_2(T)}/|\Phi|) \log_2(T)}{\Delta},
\end{align*}
where the last inequality holds by Theorem \ref{th:regret-strong-missp-nohls}.
\end{proof}
\subsection{Regret bound with HLS representations}
\begin{lemma}[Selecting the HLS representation]\label{lem:select-hls}
Suppose Algorithm \ref{alg:replearnin.icml.asm} is run with $\gamma=2$ and $\mathcal{L}_t(\phi) = -\lambda_{\min}(V_{t}(\phi) - \lambda I_{d_\phi})/L_\phi^2$. Suppose that there exists a unique $\phi^\star\in\Phi^\star$ such that $\phi^\star$ is HLS. Then, under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), $\phi_t = \phi^\star$ for all $t \geq \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$, where
\begin{align*}
\tau_{\mathrm{hls}} := \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \frac{2L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}\left(g_t(\Phi,\Delta, \delta) + 8\sqrt{t\log\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}}\right) \right\}.
\end{align*}
\end{lemma}
\begin{proof}
Take any time $t\geq \tau_{\mathrm{elim}}$. By Lemma \ref{lem:elim-strong-missp}, we have $\Phi_t=\Phi^\star$ and, thus, $\phi^\star$ is the only active HLS representation. By the min-max theorem, $A\preceq B$ implies $\lambda_k(A)\le\lambda_k(B)$ where $\lambda_k$ is the $k$-th largest eigenvalue of the matrix. Then, from event $\mathcal{E}$, we have that, for all $t$,
\begin{align*}
\lambda_{\min}(V_{t}(\phi^\star) - \lambda I_{d_{\phi^\star}}) &\geq t\lambda^\star(\phi^\star) - L_{\phi^\star}^2 S_t - 8L_{\phi^\star}^2\sqrt{t\log(4d_{\phi^\star} |\Phi|t/\delta)},\\
\lambda_{\min}(V_{t}(\phi) - \lambda I_{d_\phi}) &\leq L_\phi^2 S_t + 8L_\phi^2\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \quad \forall \phi\in\Phi^\star,\phi\neq\phi^\star.
\end{align*}
If $t=2^j$ for some $j\in\mathbb{N}$ (i.e., a time where representation selection is performed), $\phi^\star$ is selected if
\begin{align*}
\lambda_{\min}(V_{t}(\phi^\star) - \lambda I_{d_{\phi^\star}})/L_{\phi^\star}^2
> \max_{\phi\in\Phi^\star,\phi\neq\phi^\star} \lambda_{\min}(V_{t}(\phi) - \lambda I_{d_\phi})/L_\phi^2.
\end{align*}
A sufficient condition based on the bounds above is
\begin{align*}
t\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} > 2S_t + 8\sqrt{t\log(4d_{\phi^\star} |\Phi|t/\delta)} + \max_{\phi\in\Phi^\star,\phi\neq\phi^\star} \left(8\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \right).
\end{align*}
This, in turn, yields the simpler sufficient condition
\begin{align*}
t\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} > 2S_t + 16\sqrt{t\log\left(\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}\right)}.
\end{align*}
Finally, using Lemma \ref{lem:suboptimal-pulls-strong-missp} to bound $S_t$, it is sufficient that
\begin{align*}
t\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} > 2g_t(\Phi,\Delta, \delta) + 16\sqrt{t\log\left(\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}\right)}.
\end{align*}
The right-hand side is a sub-linear function of $t$. The proof is concluded by rearranging this inequality and defining $\tau_{\mathrm{hls}}$ as the first update time that satisfies it.
\end{proof}
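The eigenvalue-monotonicity fact invoked in the proof above ($A \preceq B$ implies $\lambda_k(A) \leq \lambda_k(B)$ for every $k$) can be illustrated numerically. The sketch below uses an arbitrary symmetric $2\times 2$ matrix and a rank-one PSD update (the situation arising when the design matrix accumulates outer products of features), with eigenvalues computed in closed form; the numbers are made up.

```python
import math

# Illustration of the eigenvalue-monotonicity fact: if A <= B in the Loewner
# order (B - A is PSD), then every ordered eigenvalue of A is at most the
# corresponding eigenvalue of B.  2x2 symmetric case, closed-form eigenvalues.

def eig_sym_2x2(M):
    """Sorted eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    mean = (a + c) / 2.0
    radius = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return [mean - radius, mean + radius]

A = [[2.0, 0.3], [0.3, 1.0]]     # arbitrary symmetric PSD matrix
v = [0.7, -0.4]                  # rank-one PSD update v v^T (a feature vector)
B = [[A[0][0] + v[0] * v[0], A[0][1] + v[0] * v[1]],
     [A[1][0] + v[1] * v[0], A[1][1] + v[1] * v[1]]]

eig_A, eig_B = eig_sym_2x2(A), eig_sym_2x2(B)
assert all(la <= lb + 1e-12 for la, lb in zip(eig_A, eig_B))
```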
\begin{lemma}[Triggering the GLRT]\label{lem:trigger-glrt}
Suppose Algorithm \ref{alg:replearnin.icml.asm} is run with $\gamma=2$ and $\mathcal{L}_t(\phi) = -\lambda_{\min}(V_{t}(\phi) - \lambda I_{d_\phi})/L_\phi^2$. Suppose that there exists a unique $\phi^\star\in\Phi^\star$ such that $\phi^\star$ is HLS. Then, under the good event $\mathcal{E}$, the GLRT triggers for all $t \geq \tau_{\mathrm{glrt}} \vee \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$, where
\begin{align*}
\tau_{\mathrm{glrt}} := \min_{t \in \mathbb{N}} \left\{ t \mid t \geq \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}
\left(\frac{16\beta_{t,\delta/|\Phi|}(\phi^\star)^2}{\Delta^2} + g_t(\Phi,\Delta, \delta) + 8\sqrt{t\log(4d_{\phi^\star} |\Phi|t/\delta)}\right) + 1 \right\}.
\end{align*}
\end{lemma}
\begin{proof}
From Lemma \ref{lem:select-hls}, we know that $\phi_t = \phi^\star$ for all $t \geq \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$. For simplicity, let us call $\phi := \phi^\star$.
Take any time step $t \geq \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$ (for which $\phi_t=\phi$), any $x\in\mathcal{X}$, and any $a\neq \pi^\star_t(x;\phi)$. Then, by the good event $\mathcal{E}$,
\begin{align*}
\|\phi(x, \pi^\star_t(x;\phi)) - \phi(x,a)\|_{V_{t}(\phi)^{-1}} &\leq \frac{2L_\phi}{\sqrt{\lambda_{\min}(V_{t}(\phi))}}
\\ &\leq \frac{2L_\phi}{\sqrt{t\lambda^\star(\phi) + \lambda - L_{\phi}^2 S_t - 8L_{\phi}^2\sqrt{t\log(4d_{\phi} |\Phi|t/\delta)}}}.
\end{align*}
Similarly,
\begin{align*}
\big(\phi(x,\pi^\star_t(x;\phi)) - \phi(x,a)\big)^\mathsf{T}\theta_{\phi,t} &\geq \big(\phi(x,\pi^\star(x)) - \phi(x,a)\big)^\mathsf{T}\theta_{\phi,t}
\\ &= \Delta(x,a) + \big(\phi(x,\pi^\star(x)) - \phi(x,a)\big)^\mathsf{T}(\theta_{\phi,t}-\theta^\star_\phi)
\\ &\geq \Delta(x,a) - \| \phi(x,\pi^\star(x)) - \phi(x,a) \|_{V_t(\phi)^{-1}}\|\theta_{\phi,t}-\theta^\star_\phi\|_{V_t(\phi)}
\\ & \geq \Delta(x,a) - \frac{2L_\phi\beta_{t,\delta/|\Phi|}(\phi)}{\sqrt{\lambda_{\min}(V_{t}(\phi))}}
\\ &\geq \Delta(x,a) - \frac{2L_\phi\beta_{t,\delta/|\Phi|}(\phi)}{\sqrt{t\lambda^\star(\phi) + \lambda - L_{\phi}^2 S_t - 8L_{\phi}^2\sqrt{t\log(4d_{\phi} |\Phi|t/\delta)}}}
\\ &\geq \Delta - \frac{2L_\phi\beta_{t,\delta/|\Phi|}(\phi)}{\sqrt{t\lambda^\star(\phi) + \lambda - L_{\phi}^2 S_t - 8L_{\phi}^2\sqrt{t\log(4d_{\phi} |\Phi|t/\delta)}}}.
\end{align*}
Now suppose $t$ is large enough so that the right-hand side is at least $\Delta/2$. Then, using the two inequalities above,
\begin{align*}
\mathrm{GLR}_t(x;\phi) &= \min_{a\neq \pi^\star_t(x;\phi)} \frac{\big(\phi(x,\pi^\star_t(x;\phi)) - \phi(x,a)\big)^\mathsf{T}\theta_{\phi,t}}{\|\phi(x, \pi^\star_t(x;\phi)) - \phi(x,a)\|_{V_{t}(\phi)^{-1}}}
\\ &\geq \frac{\Delta}{4L_\phi}\sqrt{t\lambda^\star(\phi) + \lambda - L_{\phi}^2 S_t - 8L_{\phi}^2\sqrt{t\log(4d_{\phi} |\Phi|t/\delta)}}.
\end{align*}
Thus, a sufficient condition for the test to trigger at time $t+1$ (recall that at time $t+1$ we perform the test with the statistics up to time $t$) is that the right-hand side above is larger than $\beta_{t,\delta/|\Phi|}(\phi)$. Therefore, for the test to trigger forever, we need simultaneously that
\begin{align*}
\frac{\Delta}{4L_\phi}\sqrt{t\lambda^\star(\phi) + \lambda - L_{\phi}^2 S_t - 8L_{\phi}^2\sqrt{t\log(4d_{\phi} |\Phi|t/\delta)}} \geq \beta_{t,\delta/|\Phi|}(\phi)
\end{align*}
and that $t \geq \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$. Note that this condition implies that the empirical gap is at least $\Delta/2$ as we required above. Using Lemma \ref{lem:suboptimal-pulls-strong-missp} to bound $S_t$ and rearranging concludes the proof.
\end{proof}
\begin{theorem}[Regret bound with HLS representation]\label{th:regret-strong-missp-hls}
Suppose Algorithm \ref{alg:replearnin.icml.asm} is run with $\gamma=2$ and $\mathcal{L}_t(\phi) = -\lambda_{\min}(V_{t}(\phi) - \lambda I_{d_\phi})/L_\phi^2$. Suppose $\phi^\star$ is the unique HLS representation in $\Phi^\star$. Under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), for any $T\in\mathbb{N}$,
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(\tau - \tau_{\mathrm{elim}}, \phi, \delta_{\log_2(\tau)}/|\Phi|) \log_2(\tau),
\end{align*}
where $\tau := \tau_{\mathrm{glrt}} \vee \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$.
\end{theorem}
\begin{proof}
Under $\mathcal{E}$, Lemma \ref{lem:trigger-glrt} ensures that the GLRT triggers for $t \geq \tau_{\mathrm{glrt}} \vee \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$ with a realizable representation and, thus, the regret is zero for those times. Then, the result follows by using Theorem \ref{th:regret-strong-missp-nohls} to bound the regret up to time $\tau_{\mathrm{glrt}} \vee \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$.
\end{proof}
\subsection{Finding explicit bounds}
\begin{lemma}\label{lem:ineq-log-sqrt}
For $x\in\mathbb{R}$ and $c_1,c_2,c_3,c_4 \geq 0$, consider the inequality $x \leq c_1 + c_2\sqrt{x} + c_3\sqrt{x\log(x)} + c_4\log(x)$. Then, $x \lesssim c_1 + c_2^2 + c_3^2 + c_4$, where the $\lesssim$ notation hides constant and logarithmic terms.
\end{lemma}
\begin{proof}
We can start by finding a crude bound on $x$ by using the inequality $\log(x) \leq x^{\alpha}/\alpha$, which holds for any $x, \alpha > 0$. Using it for $\alpha = 1/2$, we obtain
\begin{align*}
x \leq c_1 + c_2\sqrt{x} + \sqrt{2} c_3 x^{3/4} + 2c_4\sqrt{x}.
\end{align*}
Suppose that $x \geq 1$. Then, $x \leq (c_1 + c_2 + \sqrt{2} c_3 + 2c_4) x^{3/4}$, which implies that $x \leq (c_1 + c_2 + \sqrt{2} c_3 + 2c_4)^4$. Therefore, we have $x\leq C$ for $C := \max\{(c_1 + c_2 + \sqrt{2} c_3 + 2c_4)^4, 1\}$. Plugging this into the logarithms in our initial inequality,
\begin{align*}
x \leq c_1 + (c_2 + c_3\sqrt{\log(C)})\sqrt{x} + c_4\log(C).
\end{align*}
Solving this second-order inequality in $\sqrt{x}$ and using $(a+b)^2 \leq 2a^2 + 2b^2$, we obtain
\begin{align*}
x &\leq \left( \frac{c_2 + c_3\sqrt{\log(C)}}{2} + \sqrt{\frac{(c_2 + c_3\sqrt{\log(C)})^2}{4} + c_1 + c_4\log(C)} \right)^2
\\ &\leq (c_2 + c_3\sqrt{\log(C)})^2 + 2c_1 + 2c_4\log(C) \lesssim c_1 + c_2^2 + c_3^2 + c_4.
\end{align*}
\end{proof}
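The two steps of the proof above can be sanity-checked numerically. The sketch below uses the arbitrary choice $c_1=c_2=c_3=c_4=1$: it verifies the inequality $\log(x) \leq 2\sqrt{x}$ used in the first step, then locates the crossing point of the original inequality by bisection and checks that it lies below the crude bound $C$.

```python
import math

# Numerical sanity check of the proof steps, with the arbitrary choice
# c1 = c2 = c3 = c4 = 1 (for illustration only).

c1 = c2 = c3 = c4 = 1.0

# Step 1: log(x) <= x**0.5 / 0.5 = 2 sqrt(x) for x > 0 (used with alpha = 1/2).
assert all(math.log(x) <= 2.0 * math.sqrt(x) for x in [0.5, 1, 2, 10, 1e6])

# Step 2: the crossing point of x = c1 + c2 sqrt(x) + c3 sqrt(x log x)
# + c4 log(x), found by bisection on [1, 1e9], lies below the crude bound C.
def rhs(x):
    return (c1 + c2 * math.sqrt(x)
            + c3 * math.sqrt(x * math.log(x)) + c4 * math.log(x))

lo, hi = 1.0, 1e9          # f(lo) = lo - rhs(lo) < 0, f(hi) > 0
for _ in range(200):
    mid = (lo + hi) / 2.0
    if mid - rhs(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0

# Crude bound C = max((c1 + c2 + sqrt(2) c3 + 2 c4)^4, 1) from the proof.
C = max((c1 + c2 + math.sqrt(2.0) * c3 + 2.0 * c4) ** 4, 1.0)
assert root <= C
```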
\begin{lemma}\label{lem:scale-tau-elim}
The elimination time $\tau_{\mathrm{elim}}$ defined in Lemma \ref{lem:elim-strong-missp} satisfies
\begin{align*}
\tau_{\mathrm{elim}} \lesssim \frac{d\log(|\Phi|/\delta)}{\min_{\phi\notin\Phi^\star}\epsilon_\phi}.
\end{align*}
\end{lemma}
\begin{proof}
We know that $\tau_{\mathrm{elim}} = 2^j$ for some specific $j$. Let $t=2^{j-1}$ be the time at which the last update before $\tau_{\mathrm{elim}}$ was performed. By definition, we have that
\begin{align*}
t &\leq \max_{\phi\notin\Phi^\star}\frac{1}{\epsilon_\phi}\left( D_t(\phi) + \min_{\phi^\star\in\Phi^\star} D_t(\phi^\star) + 328\log\frac{8|\Phi|^2t^3}{\delta} \right)
\\ &\leq \frac{320d\log(12BL) + 320d\log(t) + 328d\log(8|\Phi|^2/\delta) + 984\log(t)}{\min_{\phi\notin\Phi^\star}\epsilon_\phi},
\end{align*}
where we used some simple crude bounds in the second inequality. Then, by Lemma \ref{lem:ineq-log-sqrt}, $t \lesssim \frac{d\log(|\Phi|/\delta)}{\min_{\phi\notin\Phi^\star}\epsilon_\phi}$ and the same holds for $\tau_{\mathrm{elim}}$ since $\tau_{\mathrm{elim}} = 2t$.
\end{proof}
\begin{lemma}\label{lem:scale-tau-hls}
The time $\tau_{\mathrm{hls}}$ defined in Lemma \ref{lem:select-hls} satisfies
\begin{align*}
\tau_{\mathrm{hls}} \lesssim \tau_{\mathrm{alg}} + \frac{L_{\phi^\star}^4\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)^2} + \frac{\tau_{\mathrm{elim}}L_{\phi^\star}^2}{\lambda^\star(\phi^\star)\Delta},
\end{align*}
where
\begin{align*}
\tau_{\mathrm{alg}} := \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \frac{8L_{\phi^\star}^2\log_2(t)}{\lambda^\star(\phi^\star)\Delta}\max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(t, \phi, \delta_{\log_2(t)}/|\Phi|)\right\}.
\end{align*}
\end{lemma}
\begin{proof}
By definition of $\tau_{\mathrm{hls}}$,
\begin{align*}
\tau_{\mathrm{hls}} \leq \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > 2\max\left(\frac{2L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}g_t(\Phi,\Delta, \delta), \frac{16L_{\phi^\star}^2}{\lambda^\star(\phi^\star)} \sqrt{t\log\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}}\right) \right\}.
\end{align*}
Thus, $\tau_{\mathrm{hls}} \lesssim \tau_{\mathrm{hls}}' + \tau_{\mathrm{hls}}''$, where
\begin{align*}
\tau_{\mathrm{hls}}' &:= \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \frac{4L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}g_t(\Phi,\Delta, \delta)\right\},\\
\tau_{\mathrm{hls}}'' &:= \min_{t \in \mathbb{N}} \left\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \frac{32L_{\phi^\star}^2}{\lambda^\star(\phi^\star)} \sqrt{t\log\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}}\right\}.
\end{align*}
We now bound $\tau_{\mathrm{hls}}''$. We know that $\tau_{\mathrm{hls}}'' = 2^j$ for some specific $j$. Let $t=2^{j-1}$ be the time at which the last update before $\tau_{\mathrm{hls}}''$ was performed. By definition, we have that
\begin{align*}
t &\leq \frac{32L_{\phi^\star}^2}{\lambda^\star(\phi^\star)} \sqrt{t\log\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}}
\leq \frac{32L_{\phi^\star}^2}{\lambda^\star(\phi^\star)} \left(\sqrt{t\log\frac{4 |\Phi| d}{\delta}} + \sqrt{t\log(t)}\right)
\lesssim \frac{L_{\phi^\star}^4\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)^2},
\end{align*}
where we used Lemma \ref{lem:ineq-log-sqrt}. The same holds for $\tau_{\mathrm{hls}}''$ since $\tau_{\mathrm{hls}}'' = 2t$. We can now apply the same trick to $\tau_{\mathrm{hls}}'$ by expanding the definition of $g_t(\Phi,\Delta, \delta)$. This yields
\begin{align*}
\tau_{\mathrm{hls}}' \lesssim \tau_{\mathrm{alg}} + \frac{\tau_{\mathrm{elim}}L_{\phi^\star}^2}{\lambda^\star(\phi^\star)\Delta}.
\end{align*}
\end{proof}
\begin{lemma}\label{lem:scale-tau-glrt}
The time $\tau_{\mathrm{glrt}}$ defined in Lemma \ref{lem:trigger-glrt} satisfies
\begin{align*}
\tau_{\mathrm{glrt}} \lesssim \tau_{\mathrm{alg}} + \frac{L_{\phi^\star}^4\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)^2} + \frac{\tau_{\mathrm{elim}}L_{\phi^\star}^2}{\lambda^\star(\phi^\star)\Delta} + \frac{L_{\phi^\star}^2 d_{\phi^\star}\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)\Delta^2},
\end{align*}
where $ \tau_{\mathrm{alg}}$ is defined in Lemma \ref{lem:scale-tau-hls}.
\end{lemma}
\begin{proof}
As we did in the proof of Lemma \ref{lem:scale-tau-hls}, we can bound $\tau_{\mathrm{glrt}} \lesssim \tau_{\mathrm{glrt}}' + \tau_{\mathrm{glrt}}'' + \tau_{\mathrm{glrt}}'''$, where
\begin{align*}
\tau_{\mathrm{glrt}}' &:= \min_{t \in \mathbb{N}} \left\{ t \mid t \geq \frac{L_{\phi^\star}^2\beta_{t,\delta/|\Phi|}(\phi^\star)^2}{\lambda^\star(\phi^\star)\Delta^2} \right\},
\\ \tau_{\mathrm{glrt}}'' &:= \min_{t \in \mathbb{N}} \left\{ t \mid t \geq \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}
g_t(\Phi,\Delta, \delta) \right\},
\\ \tau_{\mathrm{glrt}}''' &:= \min_{t \in \mathbb{N}} \left\{ t \mid t \geq \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}
\sqrt{t\log(4d_{\phi^\star} |\Phi|t/\delta)} \right\}.
\end{align*}
As before, we have
\begin{align*}
\tau_{\mathrm{glrt}}'' \lesssim \tau_{\mathrm{alg}} + \frac{\tau_{\mathrm{elim}}L_{\phi^\star}^2}{\lambda^\star(\phi^\star)\Delta} \quad \text{and} \quad \tau_{\mathrm{glrt}}''' \lesssim \frac{L_{\phi^\star}^4\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)^2}.
\end{align*}
Regarding the first term, since $\beta_{t,\delta/|\Phi|}(\phi^\star)^2$ is of order $d_{\phi^\star}\log(t|\Phi|/\delta)$, by Lemma \ref{lem:ineq-log-sqrt},
\begin{align*}
\tau_{\mathrm{glrt}}' \lesssim \frac{L_{\phi^\star}^2 d_{\phi^\star}\log(|\Phi|/\delta)}{\lambda^\star(\phi^\star)\Delta^2}.
\end{align*}
\end{proof}
\subsection{Proof of the main theorems}
The proof of Theorem \ref{th:icmlams.regret.lambda_min.hls} easily follows by using Lemmas \ref{lem:scale-tau-elim}, \ref{lem:scale-tau-hls}, and \ref{lem:scale-tau-glrt} to simplify the expressions of the constant times $\tau_{\mathrm{elim}}$, $\tau_{\mathrm{hls}}$, and $\tau_{\mathrm{glrt}}$ in Theorem \ref{th:regret-strong-missp-hls}.
Corollary \ref{cor:single-repr} can be proved analogously to Theorems \ref{th:regret-strong-missp-nohls} and \ref{th:regret-strong-missp-hls} while noting that, since $|\Phi|=1$, the base algorithm is never reset (hence we can simply use confidence $\delta$ and remove the extra $\log_2(T)$ term) and $\tau_{\mathrm{elim}} = \tau_{\mathrm{hls}} = 0$.
Corollary \ref{th:icmlams.regret.lambda_min.nohls} is simply a restatement of Theorem \ref{th:regret-strong-missp-nohls}.
\section{Variants of \textsc{BanditSRL}\xspace}\label{app:algo.variations}
\subsection{\textsc{BanditSRL}\xspace: alternative losses}
\subsubsection{Obtaining best-in-class regret}
Suppose that the upper bound $\wb{R}_{\mathfrak{A}}(T, \phi, \delta)$ to the regret of the base algorithm contains only known quantities (e.g., it could be a worst-case regret bound). Moreover, assume that the minimum gap $\Delta$ is known. This is only to simplify the notation: as we shall see at the end of this section, $\Delta$ can be estimated with a decreasing schedule without significantly altering the results. We consider the following alternative representation selection loss. For $j\in\mathbb{N}$,
\begin{align*}
\mathcal{L}_{\mathrm{bic},{t_j}}(\phi) = \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_j/|\Phi|) - \left[ \frac{\lambda_{\min}(V_{t_j}(\phi) - \lambda I_{d_\phi})}{L_\phi^2} - g_{t_j}(\Phi,\Delta, \delta) - 8\sqrt{t_j\log(4d_\phi |\Phi|t_j/\delta)} \right]_+,
\end{align*}
where $[x]_+ := \max(x, 0)$. We show that with this selection loss we can achieve the best-in-class regret bound when no HLS realizable representation exists while preserving the constant-regret result when such a representation does exist.
\begin{theorem}\label{th:regret-nohls-bic}
Suppose that $\Phi^\star$ does not contain any HLS representation. Under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), for any $T\in\mathbb{N}$, the regret of Algorithm \ref{alg:replearnin.icml.asm} with $\gamma=2$ and loss $\mathcal{L}_{\mathrm{bic},t}(\phi)$ can be bounded as
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \min_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta_{\log_2(T)}/|\Phi|) \log_2(T),
\end{align*}
where $\tau_{\mathrm{elim}}$ is defined in Lemma \ref{lem:elim-strong-missp}.
\end{theorem}
\begin{proof}
Using exactly the same steps as in the proof of Theorem \ref{th:regret-strong-missp-nohls}, we have
\begin{align*}
R_T \leq 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \wb{R}_{\mathfrak{A}}(N_j(t_{j+1} \wedge T), \phi_{t_j}, \delta_j/|\Phi|),
\end{align*}
where we recall that $\bar{j}$ is such that $\tau_{\mathrm{elim}} = 2^{\bar{j}}$. Note that $N_j(t_{j+1} \wedge T) \leq t_{j+1} - t_j = t_j$. Moreover, under $\mathcal{E}$, for all $j\geq\bar{j}$, we have that $\Phi_{t_j} = \Phi^\star$ and, since $\Phi^\star$ does not contain any HLS representation,
\begin{align*}
\frac{\lambda_{\min}(V_{t_j}(\phi) - \lambda I_{d_\phi})}{L_\phi^2} - g_{t_j}(\Phi,\Delta, \delta) - 8\sqrt{t_j\log(4d_\phi |\Phi|t_j/\delta)} \leq 0.
\end{align*}
This implies that $\mathcal{L}_{\mathrm{bic},{t_j}}(\phi) = \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_j/|\Phi|)$ in such phases. Therefore,
\begin{align*}
R_T &\leq 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \wb{R}_{\mathfrak{A}}(t_j, \phi_{t_j}, \delta_j/|\Phi|)
= 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \mathcal{L}_{\mathrm{bic},{t_j}}(\phi_{t_j})
\\ &= 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \min_{\phi\in\Phi^\star} \mathcal{L}_{\mathrm{bic},{t_j}}(\phi)
= 2\tau_{\mathrm{elim}} + \sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \min_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_j/|\Phi|).
\end{align*}
The proof is concluded by noting that $\delta_j \geq \delta_{\log_2(T)}$ and $t_j \leq T$, so that, by the properties of $\wb{R}_{\mathfrak{A}}$, $\sum_{j=\bar{j}}^{\lfloor\log_2(T)\rfloor} \min_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_j/|\Phi|) \leq \min_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(T, \phi, \delta_{\log_2(T)}/|\Phi|)\log_2(T)$.
\end{proof}
Let us now derive the constant regret bound when a HLS representation exists. Note that, since we only changed the selection loss, Theorem \ref{th:regret-strong-missp-nohls} and Lemma \ref{lem:suboptimal-pulls-strong-missp} still hold. The only change is in the time $\tau_{\mathrm{hls}}$ at which the HLS representation is selected. Theorem \ref{th:regret-strong-missp-hls} also continues to hold with the following redefinition of such time.
\begin{lemma}[Selecting the HLS representation with BIC loss]\label{lem:select-hls-bic}
Suppose Algorithm \ref{alg:replearnin.icml.asm} is run with $\gamma=2$ and $\mathcal{L}_t(\phi) = \mathcal{L}_{\mathrm{bic},t}(\phi)$. Suppose that there exists a unique $\phi^\star\in\Phi^\star$ such that $\phi^\star$ is HLS. Then, under event $\mathcal{E}$ (i.e., with probability at least $1-4\delta$), $\phi_t = \phi^\star$ for all $t \geq \tau_{\mathrm{hls}} \vee \tau_{\mathrm{elim}}$, where
\begin{align*}
\tau_{\mathrm{hls}} := \min_{t \in \mathbb{N}} \Bigg\{ t \mid \exists j\in\mathbb{N}_{>0} : t=2^j, t > \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)}&\Bigg( \wb{R}_{\mathfrak{A}}(t, \phi^\star, \delta_{\log_2(t)}/|\Phi|)
\\ & + g_t(\Phi,\Delta, \delta) + 8\sqrt{t\log\frac{4 |\Phi|t \max_{\phi\in\Phi^\star} d_{\phi}}{\delta}} \Bigg) \Bigg\}.
\end{align*}
\end{lemma}
\begin{proof}
Take any time $t_j\geq \tau_{\mathrm{elim}}$. By Lemma \ref{lem:elim-strong-missp}, we have $\Phi_{t_j}=\Phi^\star$ and, thus, $\phi^\star$ is the only active HLS representation. Using the good event $\mathcal{E}$, we can easily see that $\mathcal{L}_{\mathrm{bic},{t_j}}(\phi) \leq \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_j/|\Phi|)$ for all $\phi\in\Phi^\star, \phi\neq\phi^\star$. Moreover,
\begin{align*}
\frac{\lambda_{\min}(V_{t_j}(\phi^\star) - \lambda I_{d_{\phi^\star}})}{L_{\phi^\star}^2} \geq t_j\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} - g_{t_j}(\Phi,\Delta, \delta) - 8\sqrt{t_j\log(4d_{\phi^\star} |\Phi|t_j/\delta)}
\end{align*}
and, thus,
\begin{align*}
\mathcal{L}_{\mathrm{bic},{t_j}}(\phi^\star) \geq \wb{R}_{\mathfrak{A}}(t_j, \phi^\star, \delta_j/|\Phi|) - \left[ t_j\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} - 2g_{t_j}(\Phi,\Delta, \delta) - 16\sqrt{t_j\log \frac{4|\Phi|t_j \max_{\phi\in\Phi^\star}d_\phi}{\delta}} \right]_+.
\end{align*}
Therefore, a sufficient condition for selecting $\phi^\star$ is
\begin{align*}
t_j\frac{\lambda^\star(\phi^\star)}{L_{\phi^\star}^2} - 2g_{t_j}(\Phi,\Delta, \delta) - 16\sqrt{t_j\log \frac{4|\Phi|t_j \max_{\phi\in\Phi^\star}d_\phi}{\delta}} > \wb{R}_{\mathfrak{A}}(t_j, \phi^\star, \delta_j/|\Phi|).
\end{align*}
The proof is concluded by rearranging this inequality.
\end{proof}
\paragraph{Dealing with unknown $\Delta$}
If the minimum gap $\Delta$ is unknown, it can be easily guessed by a decreasing schedule $(1/t^\ell)_{t\geq 1}$. Then, we can replace the unknown term $g_{t_j}(\Phi, \Delta, \delta)$ in $\mathcal{L}_{\mathrm{bic},{t_j}}(\phi)$ with $g_{t_j}(\Phi,1/t_j^\ell, \delta)$. Since
\begin{align*}
g_{t_j}(\Phi,1/t_j^\ell, \delta) = 2t_j^\ell\tau_{\mathrm{elim}} + t_j^\ell \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_{\log_2(t_j)}/|\Phi|) \log_2(t_j),
\end{align*}
we only need $t_j^\ell \max_{\phi\in\Phi^\star} \wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_{\log_2(t_j)}/|\Phi|)$ to be sub-linear to derive our constant-regret result. For instance, if $\wb{R}_{\mathfrak{A}}(t_j, \phi, \delta_{\log_2(t_j)}/|\Phi|)$ is an $\widetilde{O}(\sqrt{t_j})$ regret bound, we can set $\ell = 1/4$. Then, the proofs of the two results above are the same except that we add a linear regret term $1/\Delta^{1/\ell}$ for the first time steps where $1/t^\ell > \Delta$.
\subsubsection{Weak-\textsc{HLS}\xspace Loss}\label{app:weak}
In Section~\ref{sec:exp.and.practical.algo}, we introduced an alternative loss $\mathcal{L}_{\mathrm{weak},t}(\phi) = -\min_{s\leq t} \big\{\phi(x_s,a_s)^\mathsf{T} (V_t(\phi) - \lambda I_{d_{\phi}}) \phi(x_s,a_s) / L_{\phi}^2 \big\}$, which is motivated by the notion of ``weak-\textsc{HLS}\xspace'' representations from~\citep{PapiniTRLP21hlscontextual} and appears to perform well in practice. In this section, \textbf{we will consider a slight variant}
\[
\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi) = -\min_{s\leq t} \big\{\phi(x_s,a_s)^\mathsf{T} (V_t(\phi) - \lambda I_{d_{\phi}}) \phi(x_s,a_s) / \norm{\phi(x_s,a_s)}^2 \big\}
\]
where the features are normalized to have norm equal to one. The loss used in the experiments is $\mathcal{L}_{\mathrm{weak},t}$ as defined in the main text.
We will show that $\overline{\mathcal{L}}_{\mathrm{weak},t}$ does indeed select weak-\textsc{HLS}\xspace representations. We will assume throughout this section that both $\mathcal{X}$ and $\mathcal{A}$ are finite and $\mathrm{supp}(\rho)=\mathcal{X}$. Let us first recall the definition of weak \textsc{HLS}\xspace. We abbreviate $\mathrm{span}(\phi) = \mathrm{span}\{\phi(x,a)\mid x\in\mathcal{X},a\in\mathcal{A}\}$ and $\mathrm{span}(\phi^\star) = \mathrm{span}\{\phi(x,a^\star_x)\mid x\in\mathcal{X}\}$.
\begin{definition}[Weak-\textsc{HLS}\xspace Representation]
A representation $\phi$ is weak-\textsc{HLS}\xspace if $\mathrm{span}(\phi^\star) = \mathrm{span}(\phi)$.
\end{definition}
The following characterization of the weak \textsc{HLS}\xspace property will be useful. We abbreviate $M_\phi^\star = \EV_{x\sim\rho}\left[\phi(x,a_x^\star)\phi(x,a_x^\star)^\mathsf{T}\right]$.
\begin{lemma}\label{lem:weak_hls_char}
A representation $\phi$ is weak-\textsc{HLS}\xspace if and only if
\begin{equation}\label{eq:weak_hls_char}
\min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\} > 0.
\end{equation}
\end{lemma}
\begin{proof}
We denote by $\mathrm{Im}(A)$ the column space of a symmetric matrix $A$, and by $\mathrm{ker}(A)$ its kernel. Under our assumption that $\rho$ is full-support, it is easy to see that $\mathrm{span}(\phi^\star)=\mathrm{Im}(M_\phi^\star)$.
If $\phi$ is weak-\textsc{HLS}\xspace, then
\begin{align}
\min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\} &\ge \min_{v\in\mathrm{span}(\phi),\norm{v}=1}\left\{v^\mathsf{T} M_\phi^\star v\right\} \\
&=\min_{v\in\mathrm{span}(\phi^\star),\norm{v}=1}\left\{v^\mathsf{T} M_\phi^\star v\right\} \\
&=\min_{v\in\mathrm{Im}(M_\phi^\star),\norm{v}=1}\left\{v^\mathsf{T} M_\phi^\star v\right\},
\end{align}
and the latter is positive since it is the definition of the minimum \emph{nonzero} eigenvalue of a positive semidefinite matrix.
Now assume~\eqref{eq:weak_hls_char} holds. We just need to show $\mathrm{span}(\phi)\subseteq\mathrm{span}(\phi^\star)$, since the other inclusion is trivial.
By diagonalization, it is easy to show that the solution space of $\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)=0$ is $\ker(M_\phi^\star)$. Hence,~\eqref{eq:weak_hls_char} implies $\phi(x,a)\in\mathrm{Im}(M_\phi^\star)=\mathrm{span}(\phi^\star)$ for all $x\in\mathcal{X}$ and $a\in\mathcal{A}$. In turn, this implies $\mathrm{span}(\phi)\subseteq\mathrm{span}(\phi^\star)$, concluding the proof.
\end{proof}
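As a numerical sanity check of Lemma~\ref{lem:weak_hls_char}, the following Python sketch computes the minimum normalized quadratic form of Eq.~\eqref{eq:weak_hls_char} on a toy two-context, two-action problem; the representations, optimal actions, and context distribution below are made up purely for illustration.

```python
import numpy as np

# Toy check of the weak-HLS characterization: the minimum of
# phi^T M* phi / ||phi||^2 over all (x, a) pairs is positive iff
# span{phi(x, a*_x)} = span{phi(x, a)}.
phi_weak = {  # (context, action) -> feature in R^2; weak-HLS
    (0, 0): np.array([1.0, 0.0]),  # optimal for context 0
    (0, 1): np.array([1.0, 1.0]),
    (1, 0): np.array([0.5, 0.5]),
    (1, 1): np.array([0.0, 1.0]),  # optimal for context 1
}
opt = {0: 0, 1: 1}      # optimal action per context
rho = {0: 0.5, 1: 0.5}  # full-support context distribution

def min_rayleigh(phi, opt, rho):
    # M* = E_{x ~ rho}[ phi(x, a*_x) phi(x, a*_x)^T ]
    M = sum(p * np.outer(phi[(x, opt[x])], phi[(x, opt[x])])
            for x, p in rho.items())
    return min(v @ M @ v / (v @ v) for v in phi.values())

assert min_rayleigh(phi_weak, opt, rho) > 0.0  # weak-HLS: strictly positive

# Break the property: now the optimal features only span e_1, while the
# suboptimal feature (1, 0) = e_2 lies entirely in ker(M*).
phi_bad = dict(phi_weak)
phi_bad[(1, 0)] = np.array([0.0, 1.0])
phi_bad[(1, 1)] = np.array([1.0, 0.0])
assert min_rayleigh(phi_bad, opt, rho) < 1e-9
```

In the first representation the optimal features span $\mathbb{R}^2$ and the minimum equals $1/2$; in the second, a suboptimal feature has a component in $\ker(M_\phi^\star)$ and the minimum drops to zero, exactly as the lemma predicts.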
We can now show that our alternative loss does indeed select weak-\textsc{HLS}\xspace representations.
\begin{lemma}
Assume $\rho_{\min}>0$ is the minimum probability $\rho$ assigns to any context, and $K=|\mathcal{A}|$.
For any representation $\phi$, $\epsilon$-greedy with $\epsilon_t=t^{-1/3}$ guarantees that the following hold simultaneously with probability $1-5\delta$ for all $t\ge\left(\frac{K}{\rho_{\min}}\log\frac{1}{\delta}\right)^{3/2}$:
\begin{align}
&\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi) \le - t \min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\} + o(t) &&\text{and}\label{eq:weak_lower}\\
&\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi) \ge - t \min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\} - o(t)\label{eq:weak_upper}
\end{align}
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:good-event-proba}, the good event $\mathcal{E}$ holds with probability $1-4\delta$. By $\mathcal{E}_2$, since the Loewner ordering induces the same ordering on all quadratic forms:
\begin{align}
\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi)
&= -\min_{s\leq t} \left\{\frac{\phi(x_s,a_s)^\mathsf{T} (V_t(\phi) - \lambda I_{d_{\phi}}) \phi(x_s,a_s)}{\norm{\phi(x_s,a_s)}^2}\right\} \\
&\le - \min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} (V_t(\phi) - \lambda I_{d_{\phi}})\phi(x,a)}{\norm{\phi(x,a)}^2}\right\}\\
&\le - t\min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\} + o(t),
\end{align}
where we have also used Lemma~\ref{lem:suboptimal-pulls-strong-missp} to bound the number of suboptimal pulls.
Similarly, by $\mathcal{E}_3$:
\begin{align}
\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi)
&\ge -\min_{s\leq t} \left\{\frac{\phi(x_s,a_s)^\mathsf{T} M_\phi^\star \phi(x_s,a_s)}{\norm{\phi(x_s,a_s)}^2}\right\} - o(t).
\end{align}
Let $(\overline{x},\overline{a})\in\arg\min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\}$. Under our assumption, $\epsilon$-greedy selects each context-action pair with probability at least $q=\rho_{\min}/(Kt^{1/3})$. After $t$ rounds, the probability that it has not yet selected $(\overline{x},\overline{a})$ is at most $(1-q)^t$. A simple calculation shows that, by $t\ge \left(\frac{K}{\rho_{\min}}\log\frac{1}{\delta}\right)^{3/2}$, the algorithm has selected $(\overline{x},\overline{a})$ at least once with probability $1-\delta$, hence
\begin{equation}
\min_{s\leq t} \left\{\frac{\phi(x_s,a_s)^\mathsf{T} M_\phi^\star \phi(x_s,a_s)}{\norm{\phi(x_s,a_s)}^2}\right\} = \frac{\phi(\overline{x},\overline{a})^\mathsf{T} M_\phi^\star\phi(\overline{x},\overline{a})}{\norm{\phi(\overline{x},\overline{a})}^2} = \min_{x\in\mathcal{X},a\in\mathcal{A}}\left\{\frac{\phi(x,a)^\mathsf{T} M_\phi^\star\phi(x,a)}{\norm{\phi(x,a)}^2}\right\}.
\end{equation}
A union bound concludes the proof with an overall probability of $1-5\delta$.
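For completeness, the ``simple calculation'' referenced above is the following. Since the per-round selection probability $q$ at round $t$ lower-bounds the (larger) exploration rates of earlier rounds,
\begin{align*}
(1-q)^t \le e^{-qt} = \exp\left(-\frac{\rho_{\min}\, t^{2/3}}{K}\right) \le \delta
\quad\Longleftrightarrow\quad
t \ge \left(\frac{K}{\rho_{\min}}\log\frac{1}{\delta}\right)^{3/2}.
\end{align*}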
\end{proof}
Now let $\phi_1$ be a weak-\textsc{HLS}\xspace representation. Lemma~\ref{lem:weak_hls_char} and Equation~\ref{eq:weak_lower} show that, with high probability, $\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi_1) \le -t\wt{\lambda} + o(t)$ for some constant $\wt{\lambda}>0$. From the proof of Lemma~\ref{lem:weak_hls_char} we can deduce that this $\wt{\lambda}$ is the minimum nonzero eigenvalue\footnote{Of course, an \textsc{HLS}\xspace representation is also weak-\textsc{HLS}\xspace, and $\wt{\lambda}=\lambda^\star>0$. The converse is not true. Note also that the minimum nonzero eigenvalue $\wt{\lambda}$ is well-defined and positive for \emph{all} representations, but it can only play the role of $\lambda^\star$ when the representation is weak-\textsc{HLS}\xspace.} of $M_{\phi_1}^\star$. On the other hand, consider a representation $\phi_2$ that does not have the weak-\textsc{HLS}\xspace property. The other direction of Lemma~\ref{lem:weak_hls_char} and Equation~\ref{eq:weak_upper} show that, with high probability, $\overline{\mathcal{L}}_{\mathrm{weak},t}(\phi_2) \ge -o(t)$. Hence, the loss for weak-\textsc{HLS}\xspace representations decreases (towards $-\infty$) much faster than for representations that do not have this property. This justifies the use of $\overline{\mathcal{L}}_{\mathrm{weak},t}$ as a loss in the \textsc{BanditSRL}\xspace algorithm when $\epsilon$-greedy is used as the base algorithm. A more sophisticated argument allows us to extend this result to any no-regret algorithm, using the fact that such algorithms eventually sample all (finitely many) state-action pairs, which ensures sufficient exploration.
When $\mathrm{span}(\phi) = \mathbb{R}^d$, there is no distinction between \textsc{HLS}\xspace and weak-\textsc{HLS}\xspace. Moreover,~\cite{PapiniTRLP21hlscontextual} show that weak-\textsc{HLS}\xspace is enough for \textsc{LinUCB}\xspace to achieve constant regret. We could generalize the constant-regret result from this paper to weak-\textsc{HLS}\xspace in a similar fashion.
\paragraph{Empirical evaluation.} We empirically compare $\overline{\mathcal{L}}_{\mathrm{weak},t}$ and ${\mathcal{L}}_{\mathrm{weak},t}$ on the same set of experiments reported in the main article. Fig.~\ref{fig:vardim.appendix} shows that the loss ${\mathcal{L}}_{\mathrm{weak},t}$ outperforms the theoretically grounded $\overline{\mathcal{L}}_{\mathrm{weak},t}$ loss. We leave as open question whether the loss $\mathcal{L}_{\mathrm{weak},t}$ is theoretically sound or not.
\begin{figure}
\caption{\small
Varying dimension experiment with all realizable representations (left), misspecified representations (center-left), realizable non-\textsc{HLS}\xspace representations.}
\label{fig:vardim.appendix}
\end{figure}
\subsection{\depalgo: representation learning through neural networks}
\begin{algorithm}[t]
\caption{\depalgo}\label{alg:deep.algo}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Neural network $f$ with last layer $\phi : \mathcal{X} \times \mathcal{A} \to \mathbb{R}^d$, no-regret algorithm $\mathfrak{A}$, confidence $\delta \in (0,1)$, update schedule $\gamma > 1$, regularizers $\lambda > 0$ and $c_{\mathrm{reg}} >0$
\STATE Initialize $j=0$, $f_j$ arbitrarily, $b_{t}(\phi_j) = 0$, $V_0(\phi_j) = \lambda I$
\FOR{$t = 1, \ldots$}
\STATE Observe context $x_t$
\IF{$\mathrm{GLR}_{t-1}(x_t;\phi_j) > \beta_{t-1,\delta/|\Phi|}(\phi_{j})$}
\STATE Play $a_t = \operatornamewithlimits{argmax}_{a\in\mathcal{A}} \big\{ \phi_{j}(x_t,a)^\mathsf{T} \theta_{\phi_j,t-1} \big\}$ and observe reward $y_t$
\STATE $\mathcal{D}_{\mathrm{glrt},t} = \mathcal{D}_{\mathrm{glrt},t-1} \cup \{x_t,a_t,y_t\}$
\ELSE
\STATE Play $a_t = \mathfrak{A}_t(x_t;\phi_{j},\delta)$, observe reward $y_t$, and feed it into $\mathfrak{A}$
\STATE $\mathcal{D}_{\mathfrak{A},t} = \mathcal{D}_{\mathfrak{A},t-1} \cup \{x_t,a_t,y_t\}$
\ENDIF
\STATE Let $\mathcal{D}_t = \mathcal{D}_{\mathfrak{A},t} \cup \mathcal{D}_{\mathrm{glrt},t}$
\STATE Compute $V_t(\phi_j) = V_{t-1}(\phi_j) + \phi_j(x_t,a_t)\phi_j(x_t,a_t)^\mathsf{T}$, $b_{t}(\phi_j) = b_{t-1}(\phi_j) + \phi_j(x_t,a_t) y_t$ and $\theta_{\phi_j,t} = V_t(\phi_j)^{-1} b_t(\phi_j)$
\IF{$t = \lceil \gamma t_j \rceil$}
\STATE Set $j = j +1$ and $t_j = t$
\STATE Compute $\phi_{j} = \operatornamewithlimits{argmin}_{\phi}\min_f \big\{ \mathcal{L}_t(\phi) + c_{\mathrm{reg}} \wb{E}_t(f) \big\}$ (see Eq.~\ref{eq:opt.unconstrained.appendix}) and reset $\mathfrak{A}$ \label{line:deep.reglos}
\STATE Recompute least-square on the linear embedding $\phi_j$ using all samples
\begin{align*}
V_{t}(\phi_j) = \lambda I + \sum_{x,a,y \in \mathcal{D}_{t}} \phi_j(x,a)\phi_j(x,a)^\mathsf{T},
\quad
b_t(\phi_j) = \sum_{x,a,y \in \mathcal{D}_{t}} \phi_j(x,a) y
\end{align*}
and $\theta_{\phi_j,t} = V_t(\phi_j)^{-1} b_t(\phi_j)$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
We recall that we consider a representation space $\Phi$ defined by the last layer of a Neural Network (NN). We denote by $\phi : \mathcal{X} \times \mathcal{A} \to \mathbb{R}^d$ the last layer and by $f(x,a) = \phi(x,a)^\mathsf{T} \omega$ the full NN, where $\omega$ are the last-layer weights. We report the pseudocode of \depalgo{} in Alg.~\ref{alg:deep.algo}. The structure of \depalgo{} is identical to that of \textsc{BanditSRL}\xspace{}, showing the generality and flexibility of the theoretical algorithm.
The GLRT is the same as reported in Eq.~\ref{eq:glrt.main.paper}. It leverages the current representation $\phi_j$ learnt by the NN and the regularized least-squares parameters $V_{t}(\phi_j)$ and $\theta_{\phi_j,t}$. Note that, similarly to~\citep{xu2020neuralcb}, we keep a separate estimate of the weights of the linear fitting ($\theta$ vs.\ $\omega$). While the NN weights $\omega$ are learnt through the regularization loss (line~\ref{line:deep.reglos} in Alg.~\ref{alg:deep.algo}), we compute $\theta_{\phi_j,t} = \operatornamewithlimits{argmin}_{\theta} \Big\{\frac{1}{t} \sum_{k=1}^t (\phi_{j_t}(x_k,a_k)^\mathsf{T} \theta - y_k)^2 + \lambda \|\theta\|_2^2\Big\}$ by RLS at each time $t$. This allows us to compute the best linear fit at each time $t$ using efficient incremental updates (e.g., we can use the Sherman-Morrison formula to directly maintain $V_t(\phi_j)^{-1}$) and to avoid retraining the network after observing a new sample $(x_t,a_t,y_t)$.
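A minimal sketch of this incremental update, assuming a fixed feature map and synthetic data (everything below is a placeholder for illustration):

```python
import numpy as np

# Incremental regularized least squares: maintain V^{-1} directly via the
# Sherman-Morrison formula, so that theta = V^{-1} b costs O(d^2) per sample
# instead of refitting from scratch after each observation.
class IncrementalRLS:
    def __init__(self, d, lam=1.0):
        self.Vinv = np.eye(d) / lam   # (lambda I)^{-1}
        self.b = np.zeros(d)

    def update(self, feat, reward):
        # Sherman-Morrison:
        # (V + f f^T)^{-1} = V^{-1} - (V^{-1} f f^T V^{-1}) / (1 + f^T V^{-1} f)
        Vf = self.Vinv @ feat
        self.Vinv -= np.outer(Vf, Vf) / (1.0 + feat @ Vf)
        self.b += reward * feat

    @property
    def theta(self):
        return self.Vinv @ self.b

# Sanity check against the batch ridge-regression solution.
rng = np.random.default_rng(0)
d, n, lam = 5, 200, 1.0
feats = rng.normal(size=(n, d))
ys = feats @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

rls = IncrementalRLS(d, lam)
for f, y in zip(feats, ys):
    rls.update(f, y)

theta_batch = np.linalg.solve(lam * np.eye(d) + feats.T @ feats, feats.T @ ys)
assert np.allclose(rls.theta, theta_batch, atol=1e-6)
```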
An alternative approach is to train only the NN weights $\omega$ (i.e., keeping the representation $\phi$ fixed) by stochastic gradient descent at each step, leading to an approximation of the RLS solution.
The phased scheme of \textsc{BanditSRL}\xspace pairs very well with NNs since it allows performing the computationally costly operation of full NN training only $\log_\gamma(T)$ times. The NN is trained through a regression problem with an auxiliary representation loss promoting \textsc{HLS}\xspace-like representations. At the beginning of phase $j$, we solve the following problem
\begin{equation}\label{eq:opt.unconstrained.appendix}
\begin{aligned}
f_j,\phi_j
&= \operatornamewithlimits{argmin}_{\phi,f} \left\{ \mathcal{L}_t(\phi) + c_{\mathrm{reg}}\, \wb{E}_{t}(f) \right\}\\
&= \operatornamewithlimits{argmin}_{\phi, \omega} \left\{ \mathcal{L}_t(\phi) + \frac{c_{\mathrm{reg}}}{|\mathcal{D}_{\mathfrak{A},t_j}|} \sum_{(x,a,y) \in \mathcal{D}_{\mathfrak{A},t_j}} \Big( \underbrace{\phi(x,a)^\mathsf{T} \omega}_{:=f(x,a)} - y \Big)^2 \right\}\\
&= \operatornamewithlimits{argmin}_{\phi, \omega} \left\{ c_{\mathrm{reg},\mathcal{L}}\,\mathcal{L}_t(\phi) + \frac{1}{|\mathcal{D}_{\mathfrak{A},t_j}|} \sum_{(x,a,y) \in \mathcal{D}_{\mathfrak{A},t_j}} \Big( \underbrace{\phi(x,a)^\mathsf{T} \omega}_{:=f(x,a)} - y \Big)^2 \right\}.
\end{aligned}
\end{equation}
for some $c_{\mathrm{reg},\mathcal{L}}, c_{\mathrm{reg}}>0$.\footnote{In the experiments, we use scaling of the representation loss instead of MSE.}
We recall that we compute the MSE regression loss using the explorative samples $\mathcal{D}_{\mathfrak{A},t_j}$ collected when playing the base algorithm $\mathfrak{A}$. As mentioned in the main paper, we use this separation to prevent the NN $f(x,a)$ from focusing only on predicting optimal rewards when the empirical distribution of the samples collapses towards the optimal actions (i.e., catastrophic forgetting).
On the other hand, we can use all the samples $\mathcal{D}_t = \mathcal{D}_{\mathfrak{A},t} \cup \mathcal{D}_{\mathrm{glrt},t}$ to compute the loss, where we want to leverage the bias/shift of the empirical distribution towards optimal actions to compute the empirical design matrix $V_t(\phi)$.
Concerning the loss $\mathcal{L}_t$, we leverage the same concepts used in \textsc{BanditSRL}\xspace but slightly modify them to make them more amenable to NN training. To optimize $\mathcal{L}_{\mathrm{eig},t}(\phi)$ we leverage the fact that $\lambda_{\min}(M) = \min_z R(M,z)$, where $R(M,z) = \frac{z^\mathsf{T} M z}{z^\mathsf{T} z}$ is the Rayleigh quotient. We thus treat $z$ as a parameter and optimize it by gradient descent, leading to
\begin{align}\label{eq:ray.loss.app}
\mathcal{L}_{\mathrm{ray},t}(\phi) = \frac{-1}{|\mathcal{D}_{t_j}| }\min_{z \in \mathbb{R}^d}
\frac{z^\mathsf{T}}{\|z\|_2} \left( \lambda I_d +
\sum_{(x,a,y) \in \mathcal{D}_t} \frac{\phi(x,a)\phi^\mathsf{T}(x,a)}{\|\phi(x,a)\|_2^2}
\right) \frac{z}{\|z\|_2}
\end{align}
We normalize the empirical design matrix to prevent feature norms from growing unbounded. On the other hand, since the idea behind $\mathcal{L}_{\mathrm{weak},t}(\phi)$ is to force the optimal features to span all the features, we use a mixed approach to compute the loss. We leverage all the samples to compute the matrix $V_{t}(\phi)$, while we use the explorative samples $\mathcal{D}_{\mathfrak{A},t}$ to compute the quadratic form in $V_t$, which avoids collapsing the evaluation onto optimal actions only. Then,
\begin{align}\label{eq:weak.loss.app}
\mathcal{L}_{\mathrm{weak},t}(\phi) = \frac{-1}{|\mathcal{D}_{t_j}| } \min_{(\wb{x},\wb{a},\wb{y}) \in \mathcal{D}_{\mathfrak{A},t}} & \texttt{stop-grad} \left( \frac{\phi(\wb{x},\wb{a})^\mathsf{T}}{\|\phi(\wb{x},\wb{a})\|_2} \right) \left( \lambda I_d +
\sum_{(x,a,y) \in \mathcal{D}_t} \frac{\phi(x,a)\phi^\mathsf{T}(x,a)}{\|\phi(x,a)\|_2^2}
\right) \nonumber \\
& \texttt{stop-grad} \left( \frac{\phi(\wb{x},\wb{a})}{\|\phi(\wb{x},\wb{a})\|_2} \right)
\end{align}
where we apply the $\texttt{stop-grad}$ operator to the outer features so that gradients are backpropagated only through the covariance matrix.
We notice that the loss $\mathcal{L}_{\mathrm{weak},t}$ resembles the $\mathcal{L}_{\mathrm{eig},t}$ loss, with the difference that it is evaluated on the observed features rather than on all possible vectors in $\mathbb{R}^d$. We can optimize Eq.~\ref{eq:opt.unconstrained.appendix} by stochastic gradient descent using mini-batches, but \emph{we do not compute the gradient w.r.t.\ the outer features} $\phi(\wb{x}, \wb{a})$.
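The Rayleigh-quotient relaxation underlying these losses can be checked in isolation: gradient descent on $z$ does recover $\lambda_{\min}$. A toy numpy sketch (the design matrix below is synthetic, with a known spectrum):

```python
import numpy as np

# lambda_min(M) = min_z R(M, z): gradient descent on z (treated as a
# trainable parameter, as in L_ray) recovers the smallest eigenvalue.
# M is a toy "design matrix" with eigenvalues {1, ..., 6}.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
M = Q @ np.diag(np.arange(1.0, 7.0)) @ Q.T

z = rng.normal(size=6)
for _ in range(2000):
    z /= np.linalg.norm(z)
    z -= 0.01 * 2 * (M @ z - (z @ M @ z) * z)  # Riemannian gradient of z^T M z

z /= np.linalg.norm(z)
assert abs(z @ M @ z - np.linalg.eigvalsh(M)[0]) < 1e-6
```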
Finally, nothing changes in terms of the base algorithm $\mathfrak{A}$, which now receives as input the trained NN $f_j$ from which the representation $\phi_j$ can be extracted (and which is fixed throughout the entire phase). In the experiments, we use the standard \textsc{LinUCB}\xspace and $\epsilon$-greedy algorithms to perform exploration given the representation $\phi_j$.
\section{Experiments}\label{app:experiments}
In this section, we report additional information about the experiments. We recall that, in all the experiments, we warm-start the base algorithm $\mathfrak{A}$ with all the samples $\mathcal{D}_t$ every time the representation changes.
\subsection{Linear Benchmarks}
\paragraph{Parameters.} In all the experiments, we use the theoretical parameters, i.e., $\gamma=2$, $\delta =0.01$ and $\lambda =1$. For $\epsilon$-greedy we use the schedule $\epsilon_t = t^{-1/3}$. For all the algorithms based on upper-confidence bounds, we use the theoretical UCB value:
\begin{equation}\label{eq:linucb.ucb}
\mathrm{UCB}_t(x,a,\phi) = \phi(x,a)^\mathsf{T} \theta_{\phi,t-1} + C_{\mathrm{UCB},t} \|\phi(x,a)\|_{V_{t-1}^{-1}(\phi)}
\end{equation}
where $C_{\mathrm{UCB},t} = \alpha_{\mathrm{UCB}}\,\sigma\sqrt{2\ln\left(\frac{\det(V_{t-1}(\phi))^{1/2}\det(\lambda I_{d_{\phi}})^{-1/2}}{\delta}\right)} + \sqrt{\lambda} B_{\phi}$, $\alpha_{\mathrm{UCB}}=1$ and $\sigma$ is the standard deviation of the reward noise.
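A sketch of this index in Python; the values of $\sigma$, $B_\phi$, $\lambda$, and the features below are placeholders for illustration:

```python
import numpy as np

# UCB index with the theoretical confidence width
# C_t = alpha * sigma * sqrt(2 ln(det(V)^{1/2} det(lam I)^{-1/2} / delta)) + sqrt(lam) * B.
def ucb_index(feat, theta, V, lam, sigma=0.3, delta=0.01, alpha=1.0, B=1.0):
    d = V.shape[0]
    _, logdet_V = np.linalg.slogdet(V)
    log_ratio = 0.5 * logdet_V - 0.5 * d * np.log(lam)  # ln det(V)^{1/2} det(lam I)^{-1/2}
    width = alpha * sigma * np.sqrt(2.0 * (log_ratio + np.log(1.0 / delta))) \
            + np.sqrt(lam) * B
    Vinv_f = np.linalg.solve(V, feat)                   # V^{-1} phi(x, a)
    return feat @ theta + width * np.sqrt(feat @ Vinv_f)

# With V = lambda I the data-dependent log-det term vanishes and the width
# reduces to alpha * sigma * sqrt(2 ln(1/delta)) + sqrt(lambda) * B.
d, lam = 3, 1.0
feat = np.zeros(d); feat[0] = 1.0
idx = ucb_index(feat, np.zeros(d), lam * np.eye(d), lam)
expected = 0.3 * np.sqrt(2.0 * np.log(100.0)) + 1.0    # ||feat||_{V^{-1}} = 1
assert abs(idx - expected) < 1e-12
```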
\paragraph{Varying dimension experiment.}
We provide additional information about the ``varying dimension'' problem introduced in~\citep{PapiniTRLP21hlscontextual}. This problem consists of six realizable representations with dimensions from $2$ to $6$. Of the two representations of dimension $d = 6$, one is \textsc{HLS}\xspace. In addition, seven misspecified representations are available: one considering half of the features of the \textsc{HLS}\xspace{} representation, one with a third of the same features, and the five remaining ones are randomly generated representations with dimensions $3$, $9$, $12$, $12$, $18$. The reward noise is drawn from a zero-mean Gaussian distribution with standard deviation $\sigma=0.3$. All the results of the experiments can be found in Sec.~\ref{sec:exp.and.practical.algo}.
\paragraph{Mixing Representations.}
To provide a fair and comprehensive analysis, we also report the performance of the algorithms when none of the representations is \textsc{HLS}\xspace{} but a combination of them is. We consider the same problem as in~\citep{PapiniTRLP21hlscontextual}, where there are six realizable representations of the same dimension $d=6$, none of which is \textsc{HLS}\xspace{}, but a mixture of them is \textsc{HLS}\xspace. We set $\sigma=0.3$ for the reward noise. In this case, \textsc{Leader}\xspace{} outperforms \textsc{BanditSRL}\xspace{} and achieves constant regret (see Fig.~\ref{fig:mixing}). While \textsc{Leader}\xspace{} is able to select a different representation for each context and mix them, \textsc{BanditSRL}\xspace{} can only select a single representation for all the contexts and thus suffers sublinear regret. As mentioned before, this is both an advantage and a drawback of \textsc{Leader}\xspace{}, since it needs to solve an optimization problem over representations for each context.
\begin{figure}
\caption{Cumulative regret of the algorithms in the mixing representation experiment, averaged over $40$ repetitions.}
\label{fig:mixing}
\end{figure}
\subsection{Non-Linear Benchmarks}
\paragraph{Baselines.}
As baselines, we consider \textsc{LinUCB}\xspace and $\epsilon$-greedy with neural networks and Random Fourier Features, the inverse gap weighting (IGW) strategy~\citep[e.g.,][]{Foster2020beyond,SimchiLevi2020falcon}, NeuralUCB~\citep{Zhou2020neural} and Neural-ThompsonSampling~\citep{RiquelmeTS18}.
All the algorithms are implemented using the same phased scheme as \depalgo{}.
\emph{Neural-LinUCB} fits a model to minimize the MSE and computes the UCB on the last layer of the NN.
\emph{NeuralTS} performs randomized exploration on the last layer of the neural network, trained to minimize the MSE or our regularized problem. The exploration strategy is defined by the following two steps:
\begin{align*}
&\wt{\theta} \sim \mathcal{N}(\theta_{\phi,t-1}, C_{\mathrm{UCB},t}^2 V_{t-1}^{-1}(\phi)),\\
&a_t = \operatornamewithlimits{argmax}_a \phi(x_t,a)^\mathsf{T} \wt{\theta}
\end{align*}
The \emph{IGW strategy}~\citep[e.g.,][]{Foster2020beyond,SimchiLevi2020falcon} trains the network to minimize the MSE and, at each time $t$, it plays an action $a_t$ sampled from the following distribution
\[
p_t(a) = \begin{cases}
\frac{1}{A + \gamma_1 t^{\gamma_2} (\max_{a'} f_{j_t}(x,a') - f_{j_t}(x,a))} & \text{if } a \neq a^+_x := \operatornamewithlimits{argmax}_{a'} f_{j_t}(x,a')\\
1 - \sum_{a \neq a^+_x} p_t(a) & \text{otherwise}
\end{cases}
\]
Note that the network is kept fixed during a phase, i.e., we do not refit the linear part at each step. We also tested the variant of IGW where we refit the last layer at each time step (see Fig.~\ref{fig:ablation.igw}).
We did not use the theoretical scaling factor (encoded here by $\gamma_1$ and $\gamma_2$) since it would be prohibitively large.
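The IGW sampling distribution above can be sketched as follows; the values of $\gamma_1$, $\gamma_2$, and the estimated rewards are placeholders:

```python
import numpy as np

# Inverse gap weighting: suboptimal actions get probability inversely
# proportional to their estimated gap; the greedy action takes the rest.
def igw_probs(values, t, gamma1=10.0, gamma2=0.5):
    values = np.asarray(values, dtype=float)
    A = len(values)
    best = int(np.argmax(values))
    gaps = values[best] - values
    p = 1.0 / (A + gamma1 * t ** gamma2 * gaps)
    p[best] = 0.0
    p[best] = 1.0 - p.sum()   # remaining mass goes to the greedy action
    return p

p = igw_probs([0.9, 0.5, 0.1], t=100)
assert abs(p.sum() - 1.0) < 1e-12
assert np.argmax(p) == 0    # greedy action gets the most mass
assert p[1] > p[2]          # smaller gap -> more exploration
```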
\emph{NeuralUCB}~\citep{Zhou2020neural} is similar to Neural-\textsc{LinUCB}\xspace but uses a bonus constructed with the whole gradient of the neural network. It thus selects the action that maximizes the following index
\begin{equation}
\label{eq:neuralucb.ucb}
\mathrm{UCB}_t^{\mathrm{NeuralUCB}}(x,a) = f_{j_t}(x,a) + \alpha_{\mathrm{UCB}}^{\mathrm{NeuralUCB}} \|\nabla f_{j_t}(x,a)\|_{V_{t-1}^{-1}}
\end{equation}
where $V_{t-1}(f) = \sum_{k=1}^{t-1} \mathrm{diag}\Big( \nabla f_{j_k}(x_k,a_k) \nabla f_{j_k}(x_k,a_k)^\mathsf{T} \Big)$. While we use the theoretical bonus factor for Neural-\textsc{LinUCB}\xspace{} and \depalgo{}, here we treat the bonus factor entirely as a hyperparameter, since the theoretical factor would be prohibitively large. This is a clear advantage we provide to NeuralUCB.
We further compare our algorithm against stochastic linear bandit algorithms (i.e., $\epsilon$-greedy and \textsc{LinUCB}\xspace) using random Fourier features~\citep{RahimiR07}. We define $\phi(x,a) = W \,[x,a] + b$ with $[x,a] \in \mathbb{R}^m$ being the vector obtained from the concatenation of $x$ and $a$, $W \in \mathbb{R}^{d \times m}$ a random matrix and $b \in \mathbb{R}^d$ a random vector.
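As described, the map is an affine random projection; the classical construction of~\citep{RahimiR07} additionally applies a cosine nonlinearity so that inner products approximate an RBF kernel. A sketch of the classical version (the dimensions and bandwidth below are placeholders, and the cosine step is our assumption about the intended construction):

```python
import numpy as np

# Random Fourier features for the RBF kernel (Rahimi & Recht): the affine
# map W z + b followed by a cosine, so that
# phi(z)^T phi(z') ~ exp(-||z - z'||^2 / (2 sigma^2)).
rng = np.random.default_rng(0)
m, d, sigma = 4, 2000, 1.0          # input dim, feature dim, kernel bandwidth

W = rng.normal(scale=1.0 / sigma, size=(d, m))
b = rng.uniform(0.0, 2.0 * np.pi, size=d)

def rff(z):
    return np.sqrt(2.0 / d) * np.cos(W @ z + b)

z1, z2 = rng.normal(size=m), rng.normal(size=m)
approx = rff(z1) @ rff(z2)
exact = np.exp(-np.sum((z1 - z2) ** 2) / (2.0 * sigma ** 2))
assert abs(approx - exact) < 0.2    # Monte Carlo kernel approximation
```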
\paragraph{\depalgo.} We tested our algorithm with standard baseline methods: \textsc{LinUCB}\xspace, $\epsilon$-greedy and IGW. \textsc{LinUCB}\xspace uses the theoretical parameters (see~\eqref{eq:linucb.ucb}), while the parameters for the other methods are reported below. As explained, we fix the representation $\phi_j$ during each phase but refit the linear parameter at each step.
\paragraph{Parameters.}
In all the experiments, we used the following parameters:
\begin{center}
\small
\begin{tabular}{lc}
\hline
Name & Value\\
\hline
Phase schedule $\gamma$ & $1.2$\\
Bonus parameter $\sigma$ & $0.2$ for wheel, $0.5$ for datasets\\
Scale factor GLRT (i.e., $\alpha_{\mathrm{GLRT}} \beta_{t-1,\delta}(\phi)$) & $\{1,2,5,10,15\}$\\
Scale factor UCB (i.e., $\alpha_{\mathrm{UCB}}$ in Eq.~\ref{eq:linucb.ucb}) & $\{1,2\}$\\
$\epsilon_t$ for $\epsilon$-greedy & $\{t^{-1/3}, t^{-1/2}\}$\\
Loss regularization for \depalgo{} ($c_{\mathrm{reg},\mathcal{L}}$) & $1$\footnotemark\\
NN layers & $[50,50,50,50,10,1]$\\
NN activation & ReLu\\
Batch size & $128$\\
Optimizer & SGD with learning rate $0.001$ ($0.0001$ for Covertype)\\
Regularizer least-square & $\lambda=1$\\
Buffer capacity & $T$\\
Scale factor for IGW (i.e., $\gamma_1$) & $\{1,10,50,100\}$\\
Exploration rate for IGW (i.e., $\gamma_2$) & $\{1/3, 1/2\}$\\
Scale factor for NeuralUCB ($\alpha_{\mathrm{UCB}}^{\mathrm{NeuralUCB}}$ in Eq.~\ref{eq:neuralucb.ucb}) & $\{0.1,1,2,5\}$\\
Random Fourier Features dimension ($d$) & $\{100, 300\}$\\
\hline
\end{tabular}
\end{center}
\footnotetext{
Note that in the code we add the regularization on the loss $\mathcal{L}_t$ and not on the MSE.
}
All the algorithms are implemented using Pytorch~\citep{PaszkeGMLBCKLGA19pytorch}.
\paragraph{Domains.} We considered the standard domains used in previous papers~\citep[e.g.,][]{RiquelmeTS18,Zhou2020neural}.
\textit{Wheel domain.} In~\citep{RiquelmeTS18}, the authors designed a synthetic non-linear contextual bandit problem where exploration is fundamental. Contexts are sampled uniformly from the unit circle in $\mathbb{R}^2$ and $|\mathcal{A}|=5$ actions are available. The first action $a_1$ has reward $\mu(x,a_1) = \mu_1$ for all $x$. The other actions have reward $\mu_2$ when $\|x\|_2 \leq C_r$. If $\|x\|_2 > C_r$, the sign of $x_1 x_2$ defines the optimal action. For example, $a_2$ is optimal when $x_1,x_2>0$, $a_3$ if $x_1 >0$ and $x_2 <0$, and so on. When an action $a_i \neq a_1$ is optimal its reward is $\mu_3$, otherwise it is $\mu_2$ ($a_1$ always has reward $\mu_1$). We set $\mu_1=1,\mu_2=0.8,\mu_3=1.2$ and $C_r=0.5$. The reward noise is drawn from a zero-mean Gaussian distribution with standard deviation $\sigma=0.2$. For the experiments, we consider a finite subset of contexts by sampling $X=100$ contexts at the beginning of the experiment. All the repetitions are done with the same bandit problem (i.e., contexts are fixed). We sample contexts according to a uniform distribution $\rho = U(\{1,\ldots, X\})$.
The features $\phi$ are obtained by concatenating the context with a one-hot encoding of the action ($d_{\phi}=7$). Let $1_i$ be the vector of dimension $5$ with all zeros except a one in position $i$; then $\phi(x,a_i) = [x,1_{i}]$ for all $x$ and $i=1,\ldots,5$.
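This feature construction can be sketched as follows (actions are 0-indexed here for convenience):

```python
import numpy as np

# Wheel-domain features: the 2-d context concatenated with a one-hot
# encoding of the action, giving d_phi = 2 + 5 = 7.
def wheel_features(x, action, n_actions=5):
    one_hot = np.zeros(n_actions)
    one_hot[action] = 1.0
    return np.concatenate([x, one_hot])

phi = wheel_features(np.array([0.6, -0.8]), action=2)
assert phi.shape == (7,)
assert np.allclose(phi, [0.6, -0.8, 0.0, 0.0, 1.0, 0.0, 0.0])
```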
\textit{Dataset-based domain.} We evaluate our algorithm on standard dataset-based environments~\citep[e.g.,][]{RiquelmeTS18,Zhou2020neural} from the UCI repository~\citep{Blackard1998cover,Bock2004telescope,schlimmer1987concept,Dua:2019}: MAGIC Gamma Telescope, Mushroom, Statlog (Shuttle), and Covertype. We use the classical multiclass-to-bandit conversion, with noisy rewards drawn from a Bernoulli distribution $Bern(p)$, where $p=0.9$ if the action is equal to the correct label for the sample $x$ and $p=0.1$ otherwise. The features are obtained by replicating the context $|\mathcal{A}|$ times, leading to a dimension $d = d_{\mathcal{X}}|\mathcal{A}|$, where $d_{\mathcal{X}}$ is the dimension of the context. We sample contexts according to a uniform distribution $\rho = U(\mathcal{X})$. We report the characteristics of the datasets after an initial preprocessing.
\begin{center}
\begin{tabular}{ccccc}
\hline
&Covertype & Magic & Mushroom & Statlog (Shuttle)\\
\hline
Number of contexts $|\mathcal{X}|$ & 581012 & 19020 & 8124 & 58000\\
Context dimension $d_{\mathcal{X}}$ & 54 & 10 & 22 & 9\\
Number of actions $|\mathcal{A}|$ & 7 & 2 & 2 & 7\\
Feature dimension $d$ & 378 & 20 & 44 & 63\\
\hline
\end{tabular}
\end{center}
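The multiclass-to-bandit conversion described above can be sketched as follows; placing the context in the block corresponding to the chosen action is our assumption, consistent with the stated dimension $d = d_{\mathcal{X}}|\mathcal{A}|$:

```python
import numpy as np

# Multiclass-to-bandit conversion: the context is replicated |A| times and
# placed in the block of the chosen action (d = d_X * |A|); the reward is
# Bernoulli(0.9) if the action matches the label, Bernoulli(0.1) otherwise.
rng = np.random.default_rng(0)

def features(x, action, n_actions):
    d_x = len(x)
    phi = np.zeros(d_x * n_actions)
    phi[action * d_x:(action + 1) * d_x] = x
    return phi

def reward(action, label):
    p = 0.9 if action == label else 0.1
    return float(rng.random() < p)

x = np.array([0.2, 0.7])
phi = features(x, action=1, n_actions=3)
assert phi.shape == (6,)
assert np.allclose(phi, [0.0, 0.0, 0.2, 0.7, 0.0, 0.0])
```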
\begin{figure}
\caption{Ablation study of \depalgo.}
\label{fig:ablation.fixglrt}
\end{figure}
\begin{figure}
\caption{Ablation study of \depalgo.}
\label{fig:ablation.igw}
\end{figure}
\begin{figure}
\caption{Ablation study of \depalgo.}
\label{fig:ablation.multiglrt.ts}
\end{figure}
\begin{figure}
\caption{Ablation study of \depalgo.}
\label{fig:ablation.multiglrt}
\end{figure}
\subsubsection{Additional Experiments and Ablation}
In this section, we provide additional experiments and comparisons for \depalgo{}. The overall message is that there always exists a configuration of \depalgo{} that works well across domains and outperforms the base algorithms.
We start by noticing that $\epsilon$-greedy often outperforms \textsc{LinUCB}\xspace. Randomization at the level of actions is particularly efficient in these domains, since the dimension of the output layer of the NN is always larger than the number of actions. This provides an advantage to $\epsilon$-greedy, since it needs to perform less exploration. Furthermore, the GLRT prevents $\epsilon$-greedy from over-exploring.
In the main paper, we only reported results using the theoretical configurations of the base algorithms ($\epsilon_t = t^{-1/3}$ and $\alpha_{\mathrm{UCB}}=1$). Fig.~\ref{fig:ablation.fixglrt} shows that \depalgo{} with $\alpha_{\mathrm{GLRT}}=5$ is robust to variations of the base algorithm. In particular, it outperforms or performs comparably to the base algorithm and the baselines in all the experiments. Interestingly, different domains require different levels of exploration. The wheel domain requires a high level of exploration ($\alpha_{\mathrm{UCB}}=2$ and $\epsilon_t =t^{-1/3}$), while the algorithms perform better with little exploration on mushroom ($\alpha_{\mathrm{UCB}}=0.1$ and $\epsilon_t =t^{-1/2}$). We can notice that the Random Fourier Features baseline performs poorly in almost all the experiments, supporting the need for representation learning. It may however be possible to obtain better performance by using a much larger number of features. Finally, Fig.~\ref{fig:ablation.igw} shows the behavior of \depalgo{} with the IGW strategy for different values of $\gamma_1$ and $\gamma_2$. Interestingly, it outperforms the best version of the IGW strategy based on the MSE.
The second experiment aims to highlight the impact of the GLRT on the behavior of \depalgo{} (Fig.~\ref{fig:ablation.multiglrt}).
We can notice that the GLRT plays an important role in Neural-$\epsilon$-greedy (see also Fig.~\ref{fig:appendix_glrt_loss_ab}), in particular when using the theoretical exploration rate $t^{-1/3}$, where it significantly improves the performance. On the other hand, the GLRT may trigger too often when $\alpha_{\mathrm{GLRT}}=1$, leading to under-exploration and worse regret. Note that there are potentially other confounding factors leading to this undesired behavior. For example, the fact that we use only exploratory data may lead to a suboptimal fit of the reward if the GLRT triggers too early.
Indeed, as soon as we increase the GLRT scale factor (i.e., $\alpha_{\mathrm{GLRT}} \geq 2$), we do not see anymore a negative impact.
In general, better and more consistent results are obtained with the theoretical exploration rate $t^{-1/3}$ where over exploration is prevented by the GLRT.
The GLRT plays a milder role for \textsc{LinUCB}\xspace-based algorithms (see also Fig.~\ref{fig:appendix_glrt_loss_ab}). Indeed, \citep{PapiniTRLP21hlscontextual} showed that \textsc{LinUCB}\xspace is able to take advantage of the \textsc{HLS}\xspace{} property and does not requires a GLRT mechanism to achieve constant regret. The overall message is to set the GLRT scale factor to a value larger than the theoretical one (and larger than the one used for \textsc{LinUCB}\xspace-based algorithms). Similar results can be derived for Thompson Sampling.
To further investigate the behavior of \depalgo, we performed an ablation study w.r.t.\ the losses $\mathcal{L}_{\mathrm{ray}}$ and $\mathcal{L}_{\mathrm{weak}}$ (see Eq.~\ref{eq:ray.loss.app}-\ref{eq:weak.loss.app}) and the contribution of the GLRT (i.e., $\alpha_{\mathrm{GLRT}} \in \{0,5\}$), see Fig.~\ref{fig:appendix_glrt_loss_ab}-\ref{fig:appendix_glrt_loss_ab.ts}. For Neural-$\epsilon$-greedy we can see that the GLRT plays a fundamental role in avoiding over-exploration. Furthermore, the regularization improves, or at least does not degrade, the performance of the algorithm. As mentioned before, for \textsc{LinUCB}\xspace-based algorithms the GLRT does not play an important role. On the other hand, these experiments show the importance of the spectral regularization. We can indeed notice a clear separation between the performance of the algorithm with and without regularization.
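To make the mechanism concrete, the following sketch (our own illustration; the function name, the exact confidence width, and all constants are assumptions, not the implementation used in the experiments) shows a GLRT-style stopping rule for a linear head: the algorithm commits to the greedy action only when the empirical gap of every other action exceeds its confidence width scaled by $\alpha_{\mathrm{GLRT}}$.

```python
import numpy as np

def glrt_triggers(theta_hat, V_inv, features, alpha_glrt, beta):
    """GLRT-style check: play greedily only when the empirical gap of every
    suboptimal action exceeds its confidence width, scaled by alpha_glrt."""
    values = features @ theta_hat
    best = int(np.argmax(values))
    for a in range(len(features)):
        if a == best:
            continue
        diff = features[best] - features[a]
        gap = values[best] - values[a]
        width = beta * np.sqrt(diff @ V_inv @ diff)  # ||phi_best - phi_a||_{V^-1}
        if gap <= alpha_glrt * width:
            return False  # not confident yet: keep exploring
    return True  # test triggers: commit to the greedy action

features = np.array([[1.0, 0.0], [0.0, 1.0]])
theta = np.array([1.0, 0.0])
# Well-conditioned design (small V^-1): the test triggers.
confident = glrt_triggers(theta, 0.01 * np.eye(2), features, alpha_glrt=1.0, beta=1.0)
# Poorly explored design: the test does not trigger.
uncertain = glrt_triggers(theta, np.eye(2), features, alpha_glrt=1.0, beta=1.0)
```

A larger $\alpha_{\mathrm{GLRT}}$ makes the condition harder to satisfy, which is consistent with the observation above that too small a scale factor can stop exploration too early.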
\begin{figure}
\caption{Ablation study of \depalgo{} w.r.t.\ the losses and the GLRT.}
\label{fig:appendix_glrt_loss_ab}
\end{figure}
\begin{figure}
\caption{Ablation study of \depalgo{} w.r.t.\ the losses and the GLRT.}
\label{fig:appendix_glrt_loss_ab.ts}
\end{figure}
\subsubsection{Network study on the Wheel Domain}
To further investigate the behavior of \depalgo, we performed an ablation study w.r.t.\ the network structure.
Let us start by considering $\epsilon$-greedy algorithms. Fig.~\ref{fig:netablation.wheel.greedy.linucb} shows that the performance of these algorithms does not vary much across the experiments. However, there are interesting things to notice. When the embedding layer is large (1000, 100), the regularization and GLRT do not help and \depalgo{} behaves as the Neural-$\epsilon$-greedy algorithm. Indeed, it may be difficult to recover spectral properties for such a large representation (the original feature dimension is 7). Similarly, the GLRT scales with the dimension $d$: the higher $d$, the longer the test may take to trigger. When the embedding dimension is smaller, we can see an improved performance for \depalgo{} compared to the base algorithm. The best regret is obtained with the deepest network and smallest embedding dimension (i.e., 10). In particular, we can see a flattening curve for \depalgo{} with net $[50,50,50,50,10]$ that is not observed with embedding dimension $50$.
\textsc{LinUCB}\xspace-based algorithms suffer when the embedding dimension is large (i.e., 1000, 100) since they need to perform much more exploration compared to $\epsilon$-greedy. Indeed, $\epsilon$-greedy only needs to explore at the level of the 5 actions, while \textsc{LinUCB}\xspace needs to explore the $d$-dimensional space. An interesting behavior is observed with deeper networks. In particular, we observe a better performance with embedding dimension 50 rather than 10. We think that with dimension 10 the network has a larger misspecification that compromises the exploration performed by \textsc{LinUCB}\xspace-based algorithms. Indeed, Fig.~\ref{fig:netablation.wheel.linucb.ts} shows that both \depalgo{} and Neural-\textsc{LinUCB}\xspace{} exhibit a linear regret. This demonstrates that i) \textsc{LinUCB}\xspace-based algorithms are much more sensitive to misspecification than $\epsilon$-greedy; ii) it is important to carefully select the embedding dimension $d$ (the larger $d$, the more exploration is required, but the smaller the misspecification). On the other hand, when $d=50$, \textsc{LinUCB}\xspace-based algorithms perform comparably to $\epsilon$-greedy. While with a shallow network (i.e., $[50,50,50]$) we observe a small improvement in using \depalgo{}, the advantages of \depalgo{} become extremely clear with the deep network (i.e., $[50,50,50,50,50]$), where it more than halves the regret of Neural-\textsc{LinUCB}\xspace.
Finally, Fig.~\ref{fig:netablation.wheel.linucb.ts} shows that, similarly to $\epsilon$-greedy, Thompson Sampling works better with smaller dimensions (in particular 10), where we can always observe a smaller regret for \depalgo{}.
\begin{figure}
\caption{Ablation study of \depalgo{} w.r.t.\ the network structure.}
\label{fig:netablation.wheel.greedy.linucb}
\end{figure}
\begin{figure}
\caption{Ablation study of \depalgo{} w.r.t.\ the network structure.}
\label{fig:netablation.wheel.linucb.ts}
\end{figure}
\section{Examples of No-regret Algorithms}
We prove that LinUCB and $\epsilon$-greedy satisfy Assumption \ref{asm:no-regret-algo}. Then, we instantiate our general regret bounds (i.e., we bound $\tau_{\mathrm{alg}}$ defined in Lemma \ref{lem:scale-tau-hls}) for these specific algorithms.
\subsection{LinUCB}
\begin{theorem}[Regret bound of anytime LinUCB, Prop. 1 in \citep{PapiniTRLP21hlscontextual}]\label{th:regret-linucb}
Let $\phi\in \Phi^\star$ be any realizable representation. With probability $1-\delta$, for any $T\in\mathbb{N}$, the regret of anytime LinUCB run with representation $\phi$, confidence $\delta$, and threshold $\beta_{t,\delta}(\phi)$ is bounded as
\begin{align*}
R_T \leq \wb{R}_{\mathrm{LinUCB}}(T, \phi, \delta) := \frac{128\lambda B_\phi^2\sigma^2\left(2\log(1/\delta)+d_\phi \log(1+TL_\phi^2/(\lambda d_\phi))\right)^2}{\Delta}.
\end{align*}
\end{theorem}
\begin{proof}
Just apply Proposition 1 in \citep{PapiniTRLP21hlscontextual} while noting that the maximum per-step regret is $2$ in our context.
\end{proof}
\begin{lemma}\label{lem:tau-alg-linucb}
When using the LinUCB algorithm, we have
\begin{align*}
\tau_{\mathrm{alg}} \lesssim \frac{L_{\phi^\star}^2 d^2 \log(|\Phi|/\delta)^2}{\lambda^\star(\phi^\star)\Delta^2}.
\end{align*}
\end{lemma}
\begin{proof}
First note that, by Theorem \ref{th:regret-linucb},
\begin{align*}
\wb{R}_{\mathrm{LinUCB}}(t, \phi, \delta_{\log_2(t)}/|\Phi|) \lesssim \frac{d_\phi^2\log(t|\Phi|/\delta)^2}{\Delta}.
\end{align*}
Then, the result follows by applying Lemma \ref{lem:ineq-log-sqrt}.
\end{proof}
\subsection{$\epsilon$-greedy}\label{app:egreedy.analysis}
\begin{theorem}[Regret bound of $\epsilon$-greedy]\label{th:regret-epsgreedy}
Let $\phi\in \Phi^\star$ be any realizable representation. With probability $1-\delta$, for any $T\in\mathbb{N}$, the regret of $\epsilon$-greedy run with representation $\phi$, confidence $\delta$, and forcing schedule $(\epsilon_t)_{t\geq 1}$ with $\epsilon_t = 1/t^{1/3}$ is bounded as
\begin{align*}
R_T \leq \wb{R}_{\epsilon\mathrm{-greedy}}(T, \phi, \delta) &:= 2\beta_{T,\delta/3}(\phi) \left( \frac{L_\phi}{\sqrt{\lambda}} \left(\frac{128L_\phi^2 A\sqrt{\log(12d_\phi/\delta)}}{\Gamma(\phi)}\right)^8 + \frac{2L_\phi}{\sqrt{\lambda}} + \frac{3L_\phi \sqrt{A} T^{2/3}}{\sqrt{\Gamma(\phi)}} \right)
\\ &+ 2\sqrt{T\log(6T/\delta)} + 3 T^{2/3},
\end{align*}
where $\Gamma(\phi) := \lambda_{\min}\left(\mathbb{E}_{x\sim\rho}\left[\sum_{a\in\mathcal{A}}\phi(x,a)\phi(x,a)^\mathsf{T}\right]\right)$ and $\beta_{T,\delta}(\phi) := \sigma\sqrt{2\log(1/\delta)+d_{\phi}\log(1+TL_{\phi}^2/(\lambda d_{\phi}))} + \sqrt{\lambda}B_{\phi}$.
\end{theorem}
\begin{proof}
Let $F_t$ be the event under which the algorithm plays greedily at time $t$. Then,
\begin{align*}
R_T = \underbrace{\sum_{t=1}^T \indi{F_t} \Delta(x_t,a_t)}_{(a)} + \underbrace{\sum_{t=1}^T \indi{\neg F_t} \Delta(x_t,a_t)}_{(b)}.
\end{align*}
Let us start from (a). With probability at least $1-\delta$, we have that, under $F_t$,
\begin{align*}
& \Delta(x_t,a_t) = \max_{a\in\mathcal{A}}\mu(x_t,a) - \mu(x_t,a_t)
\\ & \quad \leq \max_{a\in\mathcal{A}} \left(\langle \theta_{\phi,t-1}, \phi(x_t,a)\rangle + \beta_{t-1,\delta}(\phi)\|\phi(x_t,a)\|_{V_{t-1}^{-1}(\phi)}\right) - \langle \theta_{\phi,t-1}, \phi(x_t,a_t)\rangle + \beta_{t-1,\delta}(\phi)\|\phi(x_t,a_t)\|_{V_{t-1}^{-1}(\phi)}
\\ & \quad\leq \max_{a\in\mathcal{A}} \langle \theta_{\phi,t-1}, \phi(x_t,a)\rangle - \langle \theta_{\phi,t-1}, \phi(x_t,a_t)\rangle + 2\max_{a\in\mathcal{A}}\beta_{t-1,\delta}(\phi)\|\phi(x_t,a)\|_{V_{t-1}^{-1}(\phi)}
\\ & \quad = 2\max_{a\in\mathcal{A}}\beta_{t-1,\delta}(\phi)\|\phi(x_t,a)\|_{V_{t-1}^{-1}(\phi)},
\end{align*}
where the last equality is because $a_t$ is greedy w.r.t. $\theta_{\phi,t-1}$ under $F_t$. Then,
\begin{align*}
(a) &\leq 2\beta_{T,\delta}(\phi) \sum_{t=1}^T \indi{F_t} \max_{a\in\mathcal{A}}\|\phi(x_t,a)\|_{V_{t-1}^{-1}(\phi)} \leq 2\beta_{T,\delta}(\phi) \sum_{t=1}^T \indi{F_t} \frac{L_\phi}{\sqrt{\lambda_{\min}(V_{t-1}(\phi))}}.
\end{align*}
Let $\mathbb{E}_t$ be the expectation operator conditioned on the full history up to round $t-1$ and $\pi_t(a | x) = (1-\epsilon_t)\indi{a = \operatornamewithlimits{argmax}_{a\in\mathcal{A}}\langle \theta_{\phi,t-1}, \phi(x_t,a)\rangle } + \frac{\epsilon_t}{|\mathcal{A}|}$ be the stochastic policy played at time $t$. By Matrix Azuma inequality (Lemma \ref{lem:mazuma}) and a union bound on time, with probability at least $1-\delta$,
\begin{align*}
& \lambda_{\min}(V_{t-1}(\phi)) \geq \lambda + \lambda_{\min}\left(\sum_{k=1}^{t-1} \mathbb{E}_{k}\left[\phi(x,a)\phi(x,a)^\mathsf{T}\right]\right) - 8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)}
\\ & \quad = \lambda + \lambda_{\min}\left(\sum_{k=1}^{t-1} \mathbb{E}_{x\sim\rho,a\sim\pi_k(\cdot|x)}\left[\phi(x,a)\phi(x,a)^\mathsf{T}\right]\right) - 8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)}
\\ & \quad \geq \lambda + \lambda_{\min}\left(\sum_{k=1}^{t-1} \epsilon_k \mathbb{E}_{x\sim\rho,a\sim\mathcal{U}(\mathcal{A})}\left[\phi(x,a)\phi(x,a)^\mathsf{T}\right]\right) - 8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)}
\\ & \quad = \lambda + \frac{\Gamma(\phi)}{A} \sum_{k=1}^{t-1} \epsilon_k - 8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)}
\\ & \quad \geq \lambda + \frac{\Gamma(\phi)}{A} (t-1)^{2/3} - 8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)},
\end{align*}
where in the last step we used the definition of $\epsilon_k$. We now seek a condition on $t$ such that $8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)} \leq \frac{\Gamma(\phi) (t-1)^{2/3}}{2A}$, so that we have $\lambda_{\min}(V_{t-1}(\phi)) \geq \lambda + \frac{\Gamma(\phi) (t-1)^{2/3}}{2A}$. By the crude bound $\log(x) \leq x^\alpha/\alpha$, we have
\begin{align*}
8L_\phi^2\sqrt{(t-1)\log(4d_\phi (t-1)/\delta)} \leq 8L_\phi^2\sqrt{(t-1)\log(4d_\phi/\delta)} + 8L_\phi^2\sqrt{(t-1)^{1+\alpha}/\alpha}.
\end{align*}
Thus, a sufficient condition is that
\begin{align*}
8L_\phi^2\sqrt{(t-1)\log(4d_\phi/\delta)} &\leq \frac{\Gamma(\phi) (t-1)^{2/3}}{4A} \implies (t-1) \geq \left(\frac{32L_\phi^2 A\sqrt{\log(4d_\phi/\delta)}}{\Gamma(\phi)}\right)^6,
\\ 8L_\phi^2\sqrt{(t-1)^{1+\alpha}/\alpha} &\leq \frac{\Gamma(\phi) (t-1)^{2/3}}{4A} \implies (t-1) \geq \left(\frac{32L_\phi^2 A\sqrt{1/\alpha}}{\Gamma(\phi)}\right)^\frac{6}{4-3(1+\alpha)}.
\end{align*}
Setting $\alpha=1/12$, we have $\frac{6}{4-3(1+\alpha)} = 8$. Then, a sufficient condition is
\begin{align*}
t \geq z := \left(\frac{128L_\phi^2 A\sqrt{\log(4d_\phi/\delta)}}{\Gamma(\phi)}\right)^8 + 1.
\end{align*}
Then,
\begin{align*}
\sum_{t=1}^T \indi{F_t} \frac{L_\phi}{\sqrt{\lambda_{\min}(V_{t-1}(\phi))}} \leq z \frac{L_\phi}{\sqrt{\lambda}} + \sum_{t=1}^T \frac{L_\phi}{\sqrt{\lambda + \frac{\Gamma(\phi) (t-1)^{2/3}}{2A}}} &\leq (z+1) \frac{L_\phi}{\sqrt{\lambda}} + \frac{\sqrt{2A}}{\sqrt{\Gamma(\phi)}}\sum_{t=1}^T \frac{L_\phi}{t^{1/3}}
\\ & \leq (z+1) \frac{L_\phi}{\sqrt{\lambda}} + \frac{3L_\phi \sqrt{A}T^{2/3}}{\sqrt{\Gamma(\phi)}}.
\end{align*}
Thus,
\begin{align*}
(a) \leq 2\beta_{T,\delta}(\phi) \left( \frac{L_\phi}{\sqrt{\lambda}} \left(\frac{128L_\phi^2 A\sqrt{\log(4d_\phi/\delta)}}{\Gamma(\phi)}\right)^8 + \frac{2L_\phi}{\sqrt{\lambda}} + \frac{3L_\phi \sqrt{A}T^{2/3}}{\sqrt{\Gamma(\phi)}} \right).
\end{align*}
Let us bound (b). By Azuma's inequality (Lemma \ref{lemma:azuma}), with probability at least $1-\delta$,
\begin{align*}
(b) &\leq 2\sum_{t=1}^T \indi{\neg F_t} = 2\sum_{t=1}^T \Big(\indi{\neg F_t} - \mathbb{P}(\neg F_t)\Big) + 2 \sum_{t=1}^T \mathbb{P}(\neg F_t)
\\ & \leq 2\sqrt{T\log(2T/\delta)} + 2\sum_{t=1}^T \epsilon_t = 2\sqrt{T\log(2T/\delta)} + 2\sum_{t=1}^T \frac{1}{t^{1/3}} \leq 2\sqrt{T\log(2T/\delta)} + 3 T^{2/3}.
\end{align*}
Summing the bounds on (a) and (b) yields a regret bound that holds with probability at least $1-3\delta$ by the three concentration events used above. Then, the result follows by a union bound, i.e., by re-defining $\delta \rightarrow \delta/3$.
\end{proof}
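The last step uses the elementary bound $2\sum_{t=1}^T t^{-1/3} \leq 3T^{2/3}$, which follows from the integral comparison $\sum_{t=1}^T t^{-1/3} \leq \int_0^{T} x^{-1/3}\,\mathrm{d}x = \frac{3}{2}T^{2/3}$. A quick numerical check (our own sanity check, not part of the proof):

```python
# Numerical sanity check of the bound sum_{t=1}^T t^{-1/3} <= (3/2) T^{2/3},
# used for the forced-exploration term (b) in the proof above.
def schedule_sum(T):
    return sum(t ** (-1.0 / 3.0) for t in range(1, T + 1))

checks = {T: (schedule_sum(T), 1.5 * T ** (2.0 / 3.0)) for T in (1, 10, 1000, 100000)}
```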
\begin{lemma}\label{lem:tau-alg-epsgreedy}
When using the $\epsilon$-greedy algorithm (same conditions as in Theorem \ref{th:regret-epsgreedy}), we have
\begin{align*}
\tau_{\mathrm{alg}} \lesssim \frac{L_{\phi^\star}^6 (dA)^{3/2} L^3 \log(|\Phi|/\delta)^3}{\lambda^\star(\phi^\star)^3\Delta^3}.
\end{align*}
\end{lemma}
\begin{proof}
First note that, by Theorem \ref{th:regret-epsgreedy},
\begin{align*}
\wb{R}_{\epsilon\mathrm{-greedy}}(t, \phi, \delta_{\log_2(t)}/|\Phi|) \lesssim L_\phi \sqrt{d_\phi A} \log(t|\Phi|/\delta) t^{2/3},
\end{align*}
where we kept only the higher-order dependences. Then, with similar steps as in the proof of Lemma \ref{lem:ineq-log-sqrt}, one can easily show that $\tau_{\mathrm{alg}}$ requires solving the inequality
\begin{align*}
t \lesssim \frac{L_{\phi^\star}^2}{\lambda^\star(\phi^\star)\Delta} \max_{\phi\in\Phi^\star} L_\phi \sqrt{d_\phi A} \log(|\Phi|/\delta) t^{2/3},
\end{align*}
which proves the statement.
\end{proof}
\section{Auxiliary Results}
\subsection{Bounding the eigenvalues of the design matrices}
The following result holds for any algorithm (i.e., any arm selection rule) and any representation $\phi$ (even a non-realizable one). It is an extension of Lemma 9 in \citep{PapiniTRLP21hlscontextual}.
\begin{lemma}\label{lem:bound-design}
Under the assumption that the optimal policy is unique, with probability $1-\delta$, for all $t$ and $\phi\in\Phi$,
\begin{equation}
V_{t}(\phi) \succeq t\EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}] + \left( \lambda - L_\phi^2 S_t - 8L_\phi^2\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \right) I_{d_\phi},
\end{equation}
\begin{equation}
V_{t}(\phi) \preceq t\EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}] + \left( \lambda + L_\phi^2 S_t + 8L_\phi^2\sqrt{t\log(4d_\phi |\Phi|t/\delta)} \right) I_{d_\phi},
\end{equation}
where $S_t := \sum_{k=1}^t \indi{a_k\neq \pi^\star(x_k)}$.
\end{lemma}
\begin{proof}
The lower bound holds with probability $1-\delta/2$ by \cite[][Lemma 9]{PapiniTRLP21hlscontextual}. Let us prove the upper bound. We have
\begin{align*}
V_{t}(\phi) &- \lambda I_{d_\phi} =\sum_{k=1}^t\phi(x_k,a_k)\phi(x_k,a_k)^\mathsf{T} \\
&= \sum_{k=1}^t\indi{a_k\neq\pi^\star(x_k)}\phi(x_k,a_k)\phi(x_k,a_k)^\mathsf{T} + \sum_{k=1}^t \indi{a_k=\pi^\star(x_k)} \phi(x_k,a_k)\phi(x_k,a_k)^\mathsf{T} \\
&\preceq \sum_{k=1}^t\indi{a_k\neq\pi^\star(x_k)}\phi(x_k,a_k)\phi(x_k,a_k)^\mathsf{T} + \sum_{k=1}^t \phi(x_k,\pi^\star(x_k))\phi(x_k,\pi^\star(x_k))^\mathsf{T} \\
&\preceq L_\phi^2 S_t I_{d_\phi}
+ \sum_{k=1}^t\phi(x_k,\pi^\star(x_k))\phi(x_k,\pi^\star(x_k))^\mathsf{T} \\
&\preceq L_\phi^2 S_t I_{d_\phi} + t\EV_{x \sim \rho}[\phi(x,\pi^\star(x))\phi(x,\pi^\star(x))^\mathsf{T}] + 8L_\phi^2\sqrt{t\log(4d_\phi t/\delta)} I_{d_\phi},
\end{align*}
where the second-last inequality uses the boundedness of $\phi$, while the last one holds with probability $1-\delta/2$ for all $t$ by Lemma \ref{lem:mazuma} and a union bound. The result follows by a union bound over $\Phi$ and over the two sides of the inequality.
\end{proof}
\subsection{Martingale concentration}
We restate some well-known martingale concentration bounds.
\begin{lemma}[Azuma's inequality]\label{lemma:azuma}
Let $\{(Z_t,\mathcal{F}_t)\}_{t\in\mathbb{N}}$ be a martingale difference sequence such that $|Z_t| \leq a$ almost surely for all $t\in\mathbb{N}$. Then, for all $\delta \in (0,1)$,
\begin{align*}
\mathbb{P}\left(\forall t \geq 1 : \left|\sum_{k=1}^t Z_k \right| \leq a\sqrt{t \log(2t/\delta)} \right) \geq 1-\delta.
\end{align*}
\end{lemma}
\begin{lemma}[Freedman's inequality]\label{lemma:freedman}
Let $\{(Z_t,\mathcal{F}_t)\}_{t\in\mathbb{N}}$ be a martingale difference sequence such that $|Z_t| \leq a$ almost surely for all $t\in\mathbb{N}$. Then, for all $\delta \in (0,1)$,
\begin{align*}
\mathbb{P}\left(\forall t \geq 1 : \left|\sum_{k=1}^t Z_k \right| \leq 2\sqrt{\sum_{k=1}^t \mathbb{V}_k[Z_k] \log(4t/\delta)} + 4a\log(4t/\delta) \right) \geq 1-\delta.
\end{align*}
\end{lemma}
\begin{lemma}[Matrix Azuma's inequality]\label{lem:mazuma}
Let $\{X_k\}_{k=1}^t$ be a finite adapted sequence of symmetric matrices of dimension $d$, and $\{C_k\}_{k=1}^t$ a sequence of symmetric matrices such that for all $k$, $\EV_{k}[X_k]=0$ and $X_k^2\preceq C_k^2$ almost surely. Then, with probability at least $1-\delta$,
\begin{equation}
\lambda_{\max}\left(\sum_{k=1}^tX_k\right) \le \sqrt{8\norm{\sum_{k=1}^tC_k^2}\log(d/\delta)}.
\end{equation}
\end{lemma}
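As an illustration of Lemma \ref{lem:mazuma}, the following Monte-Carlo sanity check (our own; the choice $X_k = \mathrm{diag}(\text{random signs})$, so that $X_k^2 = C_k^2 = I$, is an assumption made for the example) verifies that the bound is violated with frequency at most $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, delta, trials = 3, 200, 0.05, 500
# With X_k = diag(random signs), X_k^2 = I = C_k^2, so the lemma's bound is
# sqrt(8 * || t * I || * log(d / delta)) = sqrt(8 * t * log(d / delta)).
bound = np.sqrt(8 * t * np.log(d / delta))

violations = 0
for _ in range(trials):
    signs = rng.choice([-1.0, 1.0], size=(t, d))   # martingale differences
    S = np.diag(signs.sum(axis=0))                 # sum_k X_k
    if np.linalg.eigvalsh(S).max() > bound:
        violations += 1
```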
\end{document}
\begin{document}
\title{MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data}
\begin{abstract}
Time series forecasting is widely used in business intelligence, e.g.,
to forecast stock market prices and sales, and to help analyze
data trends. Most time series of interest are macroscopic time series
that are aggregated from microscopic data. However, little work has studied
forecasting macroscopic time series by leveraging the underlying
microscopic-level data, instead of directly modeling the macroscopic series. In this paper,
we assume that the microscopic time series follow some unknown probabilistic
mixture distribution. We theoretically show that, as we identify
the ground-truth latent mixture components, the estimation of the time series
from each component can be improved because of lower variance, thus
benefiting the estimation of the macroscopic time series as well. Inspired
by the power of Seq2seq and its variants on the modeling of time series
data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model to
cluster microscopic time series, where all the components come from
a family of Seq2seq models parameterized by different parameters.
Extensive experiments on both synthetic and real-world data
show the superiority of our approach.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Time series forecasting has proven to be important to help people manage
resources and make decisions~\cite{lim2020time}. For example, probabilistic forecasting of
product demand and supply in retail~\cite{chen2019much}, or the forecasting of
loans~\cite{abu1996introduction} in a financial institution, can help people do
inventory or financing planning to maximize profit. Most time series of interest
are macroscopic time series, e.g., the sales of an online retail platform,
the loans of a financial institution, or the number of infections caused by
some pandemic disease in a state, that are composed of microscopic time
series, e.g., the sales of a merchant in the
online retail, the loans from a customer given the financial institution, or the
number of infections in a certain region. That is, the observed macroscopic time series
are just the aggregation or sum of microscopic time series.
Although various time series forecasting models, e.g., State Space Models (SSMs)~\cite{durbin2012time},
Autoregressive (AR) models~\cite{asteriou2011arima}, or deep neural networks~\cite{benidis2020neural},
have been widely studied for decades, all of them study the modeling of time series without considering
the connections between macroscopic time series of interest and the underlying
time series on the microscopic level.
In this paper, we study the question whether the forecasting of macroscopic time
series can be improved by leveraging the underlying microscopic time series, and the answer
is yes. Basically, though accurately modeling each microscopic time series could
be challenging due to large variations, we show that by carefully clustering microscopic time series
into clusters, i.e., clustered time series, and using canonical approaches to model each of clusters,
finally we can achieve promising results by simply summing over the forecasting results of each cluster.
To be more specific,
\textbf{first}, we assume that the microscopic time
series are generated from a probabilistic mixture model~\cite{mclachlan1988mixture}
where there exist $K$ components.
The generation of each microscopic time series is by first selecting a component $z$
from $\{1,...,K\}$ with a prior $p(z)$ (a discrete distribution), and then generating the microscopic
observation from a probabilistic distribution $p(x;\Phi_{z}, z)$ parameterized by
the corresponding component $\Phi_z$. We show that,
once we identify the ground-truth components of the mixture and the ground-truth
assignment of each microscopic observation, the independent modeling of the time series data
from each component can be improved due to lower variance, further benefiting
the estimation of the macroscopic time series that are of interest.
\textbf{Second}, inspired by recent successes of Seq2seq
models~\cite{vaswani2017attention,cho2014properties,du2018time}
based on deep neural networks, e.g., variants
of recurrent neural networks (RNNs)~\cite{hewamalage2021recurrent,yu2017long,maddix2018deep},
convolutional neural networks (CNNs)~\cite{bai2018empirical,hao2020temporal}, and
Transformers~\cite{li2019enhancing,wu2020deep},
we propose Mixture of Seq2seq (MixSeq), a mixture model for time series, where the
components come from a family of Seq2seq models parameterized by different
parameters. \textbf{Third}, we conduct synthetic experiments to demonstrate the
superiority of our approach, and extensive experiments on real-world data to show
the power of our approach compared with canonical approaches.
{\bfseries Our contributions}. Our contributions are twofold. \textbf{(1)} We
show that by transforming the original macroscopic time series via clustering,
the expected variance of each clustered time series could be optimized, thus improving the
accuracy and robustness for the estimation of macroscopic time series.
\textbf{(2)} We propose MixSeq, an end-to-end mixture model with each component
coming from a family of Seq2seq models. Our empirical results based on MixSeq
show its superiority compared with canonical approaches.
\section{Background}
In this section, we first give a formal problem definition.
We then review the background related to this work, and
discuss related works.
{\bfseries Problem definition}.
Let us assume a macroscopic time series $x_{1:t_0} = \left[x_{1},...,x_{t_0}\right]$,
and $x_t \in \mathbb{R}$ denotes the value of time series at time $t$. We aim to predict
the next $\tau$ time steps, i.e., $x_{t_0+1:t_0+\tau}$.
We are interested in the following conditional distribution
\begin{equation}\label{eq:problem}
p(x_{t_0+1:t_0+\tau}|x_{1:t_0}) = \prod_{t=t_0+1}^{t_0+\tau} p(x_t|x_{<t};\Theta),
\end{equation}
where $x_{<t}$ represents $x_{1:t-1}$ in interval $[1,t)$.
To study the above problem, we assume that the
macroscopic time series is comprised of $m$ microscopic time series,
i.e., $x_t = \sum_{i=1}^{m} x_{i,t}$ where $x_{i,t} \in \mathbb{R}$
denotes the value of the $i$-th microscopic time series at time $t$.
We aim to cluster the $m$ microscopic time series into $K$
clustered time series $\left\{ x_{1:t_0}^{(z)} \right\}_{z=1}^{K}$,
where $x_{t}^{(z)} = \sum_{\{i|z_i=z,\forall i\}} x_{i,t}$ given
the label assignment of the $i$-th microscopic time series
$z_i \in \left\{ 1,...,K \right\}$. This is based on
our results in Section~\ref{sec:theory}, which show that
macroscopic time series forecasting can be improved
with optimal clustering. Hence, instead of directly modeling $p(x_{t_0+1:t_0+\tau})$,
we study the clustering of $m$ microscopic time series in Section~\ref{sec:mixseq},
and model the conditional distribution of clustered time series
$\left\{ p(x_{t_0+1:t_0+\tau}^{(z)}) \right\}_{z=1}^{K}$
with canonical approaches.
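The decomposition above can be sketched in a few lines of code (our own illustration; the Poisson microscopic series and the random assignments $z_i$ are hypothetical placeholders): the clustered series partition the microscopic ones, and summing them recovers the macroscopic series exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
m, t0, K = 12, 8, 3                      # microscopic series, horizon, clusters
micro = rng.poisson(5.0, size=(m, t0))   # toy microscopic series x_{i,1:t0}
z = rng.integers(0, K, size=m)           # hypothetical cluster assignments z_i

# Macroscopic series: x_t = sum_i x_{i,t}
macro = micro.sum(axis=0)

# Clustered series: x_t^{(z)} = sum over the i with z_i = z
clustered = np.stack([micro[z == k].sum(axis=0) for k in range(K)])
```

Forecasts are then produced per clustered series and summed, rather than fitted directly on `macro`.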
\subsection{Seq2seq: encoder-decoder architectures for time series}
\label{sec:seq2seq}
An encoder-decoder based neural network models
the conditional distribution in Eq.~\eqref{eq:problem} as a distribution
from the exponential family, e.g., a Gaussian, Gamma or Binomial distribution,
with sufficient statistics generated from a neural network.
The encoder feeds $x_{<t}$ into a neural architecture, e.g., RNNs, CNNs
or self-attentions, to generate the representation of historical time series,
denoted as $h_{t}$, then we use a decoder to yield the result $x_{t}$. After $\tau$
iterations in an autoregressive style, it finally generates the whole time series to be predicted.
To instantiate the above Seq2seq architecture, we denote $o_{1:t}$, where $o_t \in \mathbb{R}^d$, as covariates that
are known a priori, e.g., dates. We denote $Y_{t} = \left[ x_{1:t-1} \,\,\|\,\, o_{2:t}
\right] \in \mathbb{R}^{(t-1)\times (d+1)}$ where we use $\|$ for concatenation.
The encoder generates the representation $h_{t}$ of $x_{<t}$ via
Transformer~\cite{vaswani2017attention,li2019enhancing} as follows.
We first transform $Y_{t}$ by some functions $\rho(\cdot)$, e.g., causal convolution~\cite{li2019enhancing}
to $H^{(0)} = \rho(Y_{t}) \in \mathbb{R}^{(t-1)\times d_k}$.
Transformer then iterates the following self-attention layer $L$ times:
\begin{equation}\label{eq:transformer}
\begin{aligned}
&H^{(l)} = \mathrm{MLP}^{(l)}(H^{(\mathrm{tmp})}),\,\,
H^{(\mathrm{tmp})} = \mathrm{SOFTMAX}\left( \frac{Q^{(l)} K^{(l)\top}}{\sqrt{d_q}} M \right) V^{(l)},\\
&Q^{(l)} = H^{(l-1)}W_q^{(l)}, K^{(l)} = H^{(l-1)}W_k^{(l)}, V^{(l)} = H^{(l-1)}W_v^{(l)}.
\end{aligned}
\end{equation}
That is, we first transform $Y$\footnote{We ignore the subscript
for simplicity when the context is clear.} into query, key, and value
matrices, i.e., $Q = Y W_q$, $K = Y W_k$, and $V = Y W_v$
respectively, where $W_q \in \mathbb{R}^{d_k\times d_q},
W_k \in \mathbb{R}^{d_k\times d_q}, W_v \in \mathbb{R}^{d_k\times d_v}$ in each layer are
learnable parameters. Then we do scaled inner product attention to yield $H^{(l)} \in \mathbb{R}^{(t-1)\times d_k}$ where
$M$ is a mask matrix to filter out rightward attention by setting all
upper triangular elements to $-\infty$. We denote $\mathrm{MLP}(\cdot)$ as a multi-layer perceptron function.
Afterwards, we can generate the representation $h_{t} \in \mathbb{R}^{d_p}$ for $x_{<t}$
via $h_{t} = \nu(H^{(L)})$ where we denote $\nu(\cdot)$ as a deep set function~\cite{zaheer2017deep}
that operates on rows of $H^{(L)}$, i.e., $\nu(\{H_1^{(L)},..., H_{t-1}^{(L)}\})$. We denote
the feedforward function to generate $H^{(L)}$ as $H^{(L)} \sim g(H^{(0)})$, i.e., Eq~\eqref{eq:transformer}.
Given $h_{t}$, the decoder generates the sufficient statistics and finally yields
$x_{t} \sim p(x;\mathrm{MLP}(h_{t}))$ from a distribution in the exponential family.
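A minimal numpy sketch of the masked self-attention step above (our own illustration: a single head, no causal convolution $\rho$, no MLP, and a single layer; all names and dimensions are assumptions):

```python
import numpy as np

def causal_self_attention(H, Wq, Wk, Wv):
    """One masked self-attention layer: upper-triangular (future) positions
    are set to -inf before the softmax, so position t only attends to <= t."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
t, d_k, d_q = 5, 4, 3
H0 = rng.normal(size=(t, d_k))                            # rho(Y_t)
Wq, Wk = rng.normal(size=(d_k, d_q)), rng.normal(size=(d_k, d_q))
Wv = rng.normal(size=(d_k, d_k))
H1 = causal_self_attention(H0, Wq, Wk, Wv)
```

Because of the mask, the first position can only attend to itself, so its output equals its own value vector, which makes the rightward masking easy to unit-test.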
\subsection{Related works}
{\bfseries Time series forecasting} has been studied for decades.
We summarize works related to time series forecasting into two categories.
\textbf{First}, many models come from the family of autoregressive
integrated moving average (ARIMA)~\cite{box1968some,asteriou2011arima}, where AR
indicates that the evolving variable of interest is regressed on
its own lagged values, the MA indicates that the regression error
is actually a linear combination of error terms, and the ``I''
indicates that the data values have been replaced with the
difference between their values and the previous values to handle
non-stationarity~\cite{pemberton1990non}. State Space Models
(SSMs)~\cite{durbin2012time} aim
to use a state transition function to model the transition of states and generate
observations via an observation function. These statistical approaches typically model
time series independently, and most of them only utilize values from history
but ignore covariates that are important signals for forecasting.
\textbf{Second}, with the rapid development of deep neural networks, people have started
studying neural networks for the modeling of time series~\cite{benidis2020neural,lim2020time}.
Most successful neural networks are based on the encoder-decoder
architectures~\cite{vaswani2017attention,cho2014properties,du2018time,sutskever2014sequence,bahdanau2014neural,dama2021analysis,lim2020time,ma2019learning}, namely Seq2seq. Basically, various Seq2seq models based
on RNNs~\cite{hewamalage2021recurrent,salinas2020deepar,wen2017multi,lai2018modeling,yu2017long,maddix2018deep},
CNNs~\cite{bai2018empirical,hao2020temporal}, and
Transformers (self-attentions)~\cite{li2019enhancing,wu2020deep}
are proposed to model the non-linearity for time series.
Whether based on statistical models or deep neural networks,
these works mainly focus on the forecasting of single or multivariate
time series, but ignore the auxiliary information that the time series could be
made up of microscopic data.
{\bfseries Time series clustering} is another topic for exploratory analysis
of time series. We summarize the literature into three categories, i.e.,
study of distance functions, generative models, and feature extraction
for time series. \textbf{First}, dynamic time warping~\cite{petitjean2011global},
similarity metrics that measure temporal dynamics~\cite{yang2011patterns}, and specific
measures for the shape~\cite{paparrizos2015k} of time series have been proposed
to adapt to various time series characteristics, e.g., scaling and distortion.
These distance functions are mostly manually defined and cannot generalize
to more general settings. \textbf{Second}, generative model based approaches assume
that the observed time series is generated by an underlying model, such as
hidden Markov models~\cite{oates1999clustering} or mixtures of ARMA~\cite{xiong2004time}.
\textbf{Third}, early studies on feature extraction of time series are based on component
analysis~\cite{guo2008time}, and kernels, e.g.,
u-shapelet~\cite{zakaria2012clustering}. With the development of deep neural networks,
several encoder-decoder architectures~\cite{madiraju2018deep,ma2019learning} are proposed to
learn better representations of time series for clustering.
However, the main purpose of works in this line is to conduct exploratory analysis
of time series, while their usage for time series forecasting has never been studied.
That is, these works define various metrics to evaluate the goodness of the clustering
results, but how to learn the optimal clustering for time series forecasting remains
an open question.
\section{Microscopic time series under mixture model}
\label{sec:theory}
We analyze the variance of mixture models, and further verify
our results with simple toy examples.
\subsection{Analyses on the variance of mixture model}
In this part, we analyze the variance of probabilistic mixture models.
A mixture model~\cite{mclachlan1988mixture} is a probabilistic model for representing
the presence of subpopulations within an overall population.
Mixture model typically consists of a prior that represents
the probability over subpopulations, and components, each of which
defines the probability distribution of the corresponding subpopulation.
Formally, we can write
\begin{equation}\label{eq:mixture}
f(x) = \sum_i p_i \cdot f_i(x),
\end{equation}
where $f(\cdot)$ denotes the mixture distribution, $p_i$ denotes
the prior over subpopulations, and $f_i(\cdot)$ represents the distribution
corresponding to the $i$-th component.
\begin{proposition}
Assume a mixture model with probability density function $f(x)$ and corresponding components $\left\{f_i(x)\right\}_{i=1}^K$ with constants $\left\{p_i\right\}_{i=1}^K$ ($\left\{p_i\right\}_{i=1}^K$ lie in a simplex), such that $f(x) = \sum_i p_i f_i(x)$.
Provided that $f(\cdot)$ and $\left\{f_i(\cdot)\right\}_{i=1}^K$ have first and second moments, i.e., $\mu^{(1)}$ and $\mu^{(2)}$ for $f(x)$,
and $\left\{\mu_i^{(1)}\right\}_{i=1}^K$ and $\left\{\mu_i^{(2)}\right\}_{i=1}^K$ for components $\left\{f_i(x)\right\}_{i=1}^K$, we have:
\begin{align}
\sum_i p_i\cdot \mathrm{Var}(f_i) \leq \mathrm{Var}(f).
\end{align}
\end{proposition}
We use the fact that $\mu^{(k)} = \sum_i p_i \mu_i^{(k)}$. Applying Jensen's inequality,
$\sum_i p_i \left( \mu_i^{(1)} \right)^2 \geq
\left( \sum_i p_i \mu_i^{(1)} \right)^2$, immediately yields the result. See the detailed
proof in the supplementary material.
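The inequality can also be checked numerically. The sketch below (with illustrative parameters, not tied to our experiments) compares $\sum_i p_i \cdot \mathrm{Var}(f_i)$ with $\mathrm{Var}(f)$ for a toy two-component Gaussian mixture:

```python
import numpy as np

# A toy 2-component Gaussian mixture: priors p_i, means mu_i, stds s_i.
# (Illustrative values only -- not taken from the paper's experiments.)
p = np.array([0.3, 0.7])
mu = np.array([-2.0, 1.0])
s = np.array([0.5, 1.5])

# Within-component variance weighted by the prior: sum_i p_i * Var(f_i).
within = np.sum(p * s**2)

# Variance of the mixture via its moments, using mu^(k) = sum_i p_i mu_i^(k).
m1 = np.sum(p * mu)               # first moment of the mixture
m2 = np.sum(p * (s**2 + mu**2))   # second moment of the mixture
var_mixture = m2 - m1**2

# The proposition: sum_i p_i * Var(f_i) <= Var(f).
assert within <= var_mixture
```

The gap between the two sides is exactly the Jensen term $\sum_i p_i (\mu_i^{(1)})^2 - (\sum_i p_i \mu_i^{(1)})^2$, which vanishes only when all component means coincide.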
This proposition states that, given limited data samples
(almost always the case in practice) and assuming we know the ground
truth data generative process a priori, i.e., the exact component
that generated each sample, the expected variance
conditioned on the ground truth data assignment
is no larger than the variance of the mixture. Based on the
assumption that microscopic data are independent, the variance of the
aggregation of clustered data should be at least no larger than the
aggregation of all microscopic data, i.e., the macroscopic data. So the
modeling of clustered data from separate components could possibly be more
accurate and robust compared with the modeling of macroscopic data.
This result motivates us to forecast macroscopic time series by clustering
the underlying microscopic time series. Essentially, we transform the original
macroscopic time series data to clusters with lower variances using a
clustering approach, then followed by any time series models to forecast
each clustered time series. After that, we sum over all the results from
those clusters so as to yield the forecasting of macroscopic time series.
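This cluster-then-forecast-then-sum procedure can be sketched as follows; the names and the naive persistence forecaster below are placeholders for illustration, and any proper time series model can be plugged in:

```python
import numpy as np

def forecast_macro_via_clusters(micro, labels, forecaster, horizon):
    """Sum per-cluster forecasts to estimate the macroscopic series.

    micro:      array of shape (m, t) with m microscopic series.
    labels:     cluster index in {0, ..., K-1} for each series.
    forecaster: callable (history, horizon) -> forecast of length horizon.
    """
    total = np.zeros(horizon)
    for k in np.unique(labels):
        clustered = micro[labels == k].sum(axis=0)  # aggregate within cluster
        total += forecaster(clustered, horizon)     # forecast each cluster
    return total                                    # sum over clusters

# Placeholder forecaster: repeat the last observed value (persistence).
naive = lambda history, h: np.full(h, history[-1])

micro = np.arange(12, dtype=float).reshape(4, 3)
labels = np.array([0, 0, 1, 1])
pred = forecast_macro_via_clusters(micro, labels, naive, horizon=2)
```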
We demonstrate this result with toy examples next.
\subsection{Demonstration with toy examples}
We demonstrate the effectiveness of forecasting macroscopic time series by
aggregating the forecasting results from clustered time series.
{\bfseries Simulation setting.}
We generate microscopic time series from a mixture model,
such as Gaussian process (GP)~\cite{roberts2013gaussian} or ARMA~\cite{box2015time} with $3$ or $5$ components.
We generate $5$ time series for each component, yielding $15$ or $25$ microscopic time
series in total. We sum all the time series as the macroscopic time series. We get
clustered time series by simply summing microscopic time series from the same component.
Our purpose is to compare the performance between forecasting results directly on macroscopic time
series (macro results) and sum of forecasting results of clustered time series (clustered results).
We set the length of time series as $360$, and use rolling window approach for training and validating
our results in the last $120$ time steps (i.e., at each time step, we train the model
using the time series before current time point, and validate using the following
$30$ values). We fit the data with either GP or ARMA depending on the generative model.
We describe the detailed simulation parameters of mixture models in supplementary.
{\bfseries Simulation results.}
Table~\ref{tab:toy} shows the results measured by symmetric mean absolute percentage error (SMAPE)\footnote{Details are in supplementary}. Whether the time series are generated by a mixture of GPs or a mixture of ARMA models, the clustered results are clearly superior to the macro results. In other words, if we knew the ground truth component of each microscopic time series, modeling clustered data aggregated from time series of the same component would yield better results than directly modeling the macroscopic time series.
\begin{table}
\caption{We run the experiments $5$ times, and show the average results (SMAPE) of macro results and clustered results with ground truth clusters. Lower is better.}
\label{tab:toy}
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{2}{c}{GP time series data} & \multicolumn{2}{c}{ARMA time series data} \\
\cmidrule(r){2-3} \cmidrule(r){4-5}
& 3 clusters & 5 clusters & 3 clusters & 5 clusters \\
\midrule
macro results & 0.0263 & 0.0242 & 0.5870 & 0.5940 \\
clustered results & {\bf 0.0210} & {\bf 0.0198} & {\bf 0.3590} & {\bf 0.3840} \\
\bottomrule
\end{tabular}
\end{table}
\section{MixSeq: a mixture model for time series}
\label{sec:mixseq}
Based on the analysis in Section~\ref{sec:theory}, we assume
microscopic time series follow a mixture distribution, and propose
a mixture model, MixSeq, to cluster microscopic time series.
\subsection{Our model}
Our model assumes that each of $m$ microscopic time series follows a mixture
probabilistic distribution. To forecast $x_{t_0+1:t_0+\tau}$ given $x_{1:t_0}$,
our approach is to first partition $\left\{x_{i,1:t_0}\right\}_{i=1}^m$ into
$K$ clusters via MixSeq. Since
the distribution is applicable to all microscopic time series, we ignore
the subscript $i$ and time interval $1:t_0$ for simplicity.
We study the following generative probability of $x \in \mathbb{R}^{t_0}$:
\begin{equation}\label{eq:mix-model}
p(x) = \sum_z p(x,z) = \sum_z p(z)p(x|z) = \sum_z p(z) \prod_{t=1}^{t_0}
p(x_t|x_{<t},z; \Phi_z),
\end{equation}
where $z \in \{1,2,...,K\}$ is the discrete latent variable,
$K$ is the number of components in the mixture model,
$p(z)$ is the prior of cluster indexed by $z$, and $p(x|z)$ is the probability of time series $x$ generated
by the corresponding component governed by parameter $\Phi_z$. Note that
we have $K$ parameters $\Theta = \{\Phi_1, \Phi_2, \dots, \Phi_K\}$ in the mixture model.
We use ConvTrans introduced in~\cite{li2019enhancing} as our backbone
to model the conditional $p(x_t|x_{<t},z; \Phi_z)$.
To instantiate, we model the conditional $p(x_t|x_{<t},z; \Phi_z)$
by first generating $H^{(0)} = \rho(Y_t)$ via causal convolution~\cite{li2019enhancing}.
We then generate $H^{(L)} = g(H^{(0)}) \in \mathbb{R}^{(t-1) \times d_k}$ given $x_{<t}$.
Finally we generate the representation for $x_{<t}$ as
$h_{t} = \nu(H^{(L)}) = \sigma(W_s \sum_{j=1}^{t-1} H_j^{(L)}) \in \mathbb{R}^{d_p}$,
where $W_s \in \mathbb{R}^{d_p\times d_k}$ and $\sigma$ as ReLU activation function. Afterwards, we
decode $h_{t}$ to form the specific distribution from an exponential family.
In particular, we use Gaussian distribution $p(x_t|x_{<t},z; \Phi_z) = \mathcal{N}(x_t;\mu_t, \sigma_t^2)$,
where the mean and variance can be generated by following transformations,
\begin{equation}
\begin{aligned}
\mu_t = w_{\mu}^T h_t + b_{\mu},\,\,\,\sigma_t^2 = \mathrm{log}(1+\mathrm{exp}(w_{\sigma}^T h_t + b_{\sigma})),
\end{aligned}
\end{equation}
where $w_{\mu},w_{\sigma} \in \mathbb{R}^{d_p}$ are parameters, and $b_{\mu},b_{\sigma} \in \mathbb{R}$ are biases.
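As a sketch of this output head (parameter values below are illustrative, not those learned by MixSeq), the softplus transform $\log(1 + e^{\cdot})$ guarantees a strictly positive variance:

```python
import numpy as np

def gaussian_head(h_t, w_mu, b_mu, w_sigma, b_sigma):
    """Decode a representation h_t into the mean and variance of
    p(x_t | x_{<t}, z) = N(mu_t, sigma_t^2), following the equations above.
    Softplus keeps sigma_t^2 strictly positive."""
    mu_t = w_mu @ h_t + b_mu
    sigma2_t = np.log1p(np.exp(w_sigma @ h_t + b_sigma))  # softplus
    return mu_t, sigma2_t

# Toy check with d_p = 3 (illustrative parameters only).
h = np.array([0.5, -1.0, 2.0])
mu, var = gaussian_head(h, np.ones(3), 0.0, np.zeros(3), 0.0)
```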
\subsection{Posterior inference and learning algorithms}
We aim to learn the parameter $\Theta$ and efficiently
infer the posterior distribution of $p(z|x)$ in Eq.~\eqref{eq:mix-model}.
However, it is intractable to directly maximize the log marginal likelihood
$\log p(x)$ due to the logarithm of a sum.
To tackle this non-convex problem, we resort to stochastic auto-encoding variational Bayesian
algorithm (AEVB)~\cite{kingma2013auto}.
For a single microscopic time series, the variational lower bound (LB)~\cite{kingma2013auto} on
the marginal likelihood is:
\begin{equation}\label{lb}
\begin{aligned}
\log p(x) &= \log \sum_z p(x,z) \ge \sum_z q(z|x) \log \frac{p(x,z)}{q(z|x)} \\
&=\mathbb{E}_{q(z|x)} \log p(x|z) - \mathrm{KL}\left(q(z|x)\|p(z)\right) = \mathrm{LB},
\end{aligned}
\end{equation}
where $q(z|x)$ is the approximated posterior of the latent variable $z$ given time series $x$.
The benefit of using AEVB is that we can treat $q(z|x)$ as an encoder modeled by a neural network.
Hence, we reuse the ConvTrans~\cite{li2019enhancing} as our backbone,
and model $q(z|x)$ as:
\begin{equation}
\begin{aligned}
q(z|x) = \mathrm{SOFTMAX}(W_a \cdot \nu(H^{(L)})),\,\,\,\, H^{(L)} = g(\rho(Y_{t_0})),
\end{aligned}
\end{equation}
where we denote $Y_{t_0} = [x_{1:t_0} \| o_{1:t_0}] \in \mathbb{R}^{t_0 \times (d+1)}$,
$\nu(H^{(L)}) = \sigma(W_s\cdot \sum_{j=1}^{t_0} H_j^{(L)})$
with parameter $W_s \in \mathbb{R}^{d_p\times d_k}$ is the deep set function,
and $W_a \in \mathbb{R}^{K\times d_p}$ as
parameters to project the encoding to $K$ dimension. After the softmax operator, we
derive the posterior distribution that lies in a simplex of $K$ dimension.
Note that we use distinct $\rho(\cdot)$'s, $g(\cdot)$'s and $\nu(\cdot)$'s with different parameters to model
$q(z|x)$ and $\{p(x_t|x_{<t}, z)\}_{z=1}^{K}$ respectively.
We assign each microscopic $x_i$ to cluster $z_i = \underset{z}{\arg\max}\ q(z|x_i)$ in our experiments.
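A minimal sketch of this encoder head with illustrative shapes and parameters (in the real model the encoding comes from the causal convolution Transformer backbone, not the toy vector used here):

```python
import numpy as np

def posterior_and_assignment(encoding, W_a):
    """q(z|x) = SOFTMAX(W_a . nu(H^(L))) over K clusters, plus the hard
    assignment z = argmax_z q(z|x). `encoding` stands in for the deep-set
    summary nu(H^(L)); shapes below are illustrative."""
    logits = W_a @ encoding
    logits -= logits.max()                     # numerical stability
    q = np.exp(logits) / np.exp(logits).sum()  # lies in the K-simplex
    return q, int(np.argmax(q))

W_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # K=3, d_p=2
q, z = posterior_and_assignment(np.array([2.0, -1.0]), W_a)
```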
{\bfseries Mode collapsing.} We find that directly optimizing the lower bound in
Eq. (\ref{lb}) suffers from the mode collapsing problem. That is, the encoder
$q(z|x)$ tends to assign all microscopic time series to one cluster, and does
not effectively distinguish the data as expected, implying $I(x;z)=0$, where $I(\cdot\,;\cdot)$ denotes mutual information.
In order to address the above mode collapsing problem, we add $I(x;z)$ to the lower
bound in Eq. (\ref{lb}) which expects that the latent variable $z$ can extract discriminative
information from different time series \cite{zhao2018unsupervised}. Then, we have
\begin{equation}
\label{old-loss}
\begin{aligned}
&\mathbb{E}_x(\mathbb{E}_{q(z|x)}\log p(x|z))-\mathbb{E}_x(\mathrm{KL}(q(z|x)\|p(z)))+I(x;z)\\
& = \mathbb{E}_x(\mathbb{E}_{q(z|x)}\log p(x|z))-\mathrm{KL}(q(z)\|p(z)),
\end{aligned}
\end{equation}
where $q(z) = \frac{1}{m} \sum_{i=1}^{m} q(z|x_i)$ is an average of approximated posteriors over all microscopic data.
We approximate this term by using a mini-batch of $m'$ samples, i.e., $q(z) = \frac{1}{m'} \sum_{i' \in \mathcal{B}} q(z|x_{i'})$.
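The behavior of this aggregated KL term can be sketched numerically; the batch posteriors below are synthetic and only illustrate the two regimes (collapsed vs. well-spread):

```python
import numpy as np

def aggregate_kl(q_batch, p_z):
    """KL(q(z) || p(z)) with q(z) approximated as the mini-batch average
    of per-sample posteriors q(z|x_i), as described above."""
    q_z = q_batch.mean(axis=0)  # q(z) over the batch
    return float(np.sum(q_z * np.log(q_z / p_z)))

K = 4
p_z = np.full(K, 1.0 / K)                              # uniform prior
collapsed = np.tile([0.97, 0.01, 0.01, 0.01], (8, 1))  # mode collapse
spread = np.tile(p_z, (8, 1))                          # well-spread posteriors

# The penalty is large under collapse and zero when q(z) matches p(z),
# which is how the added mutual-information term discourages collapsing.
kl_collapsed = aggregate_kl(collapsed, p_z)
kl_spread = aggregate_kl(spread, p_z)
```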
{\bfseries Annealing tricks.}
Regarding long-length time series, the reconstruction loss and KL divergence in Eq. (\ref{old-loss})
are out of proportion; in this situation, the KL divergence has little effect on the optimization
objective. We therefore maximize the following objective:
\begin{equation}
\label{new-loss}
\mathbb{E}_x(\mathbb{E}_{q(z|x)}\log p(x|z))-\alpha\cdot \mathrm{KL}(q(z)\|p(z)) -\lambda\cdot \|\Theta \|,
\end{equation}
where $\alpha$ is the trade-off hyperparameter. We use the following
annealing strategy $\alpha = \mathrm{max}(a, b \times e^{(- \beta n)})$ to
dynamically adjust $\alpha$ in the training process, where $\beta$ is the
parameter controlling the rate of descent. Meanwhile, we also involve the $\ell_2$-norm
regularizers on Seq2seq's parameters $\Theta$ with hyperparameter $\lambda\ge 0$.
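The annealing schedule can be written out directly; the defaults below mirror the schedule used in our synthetic experiments, $\alpha = \mathrm{max}(5, 20e^{-0.03n})$:

```python
import numpy as np

def annealed_alpha(n, a=5.0, b=20.0, beta=0.03):
    """alpha = max(a, b * exp(-beta * n)): the KL weight starts at b and
    decays toward the floor a as the training epoch n grows; beta controls
    the rate of descent."""
    return max(a, b * np.exp(-beta * n))

alphas = [annealed_alpha(n) for n in (0, 10, 30, 50)]
```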
\begin{table}
\caption{Mean and standard deviation (SD, in bracket) of Rand Index (RI, the higher the better) by clustering on synthetic data generated by ARMA and DeepAR. MixSeq-infer represents that we infer the cluster of new data generated by different models after training MixSeq. On ARMA data, MixSeq and MixARMA have comparable performance; on DeepAR data, MixARMA degrades
significantly which shows the effectiveness of MixSeq.}
\label{tab-syndatares}
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{2}{c}{ARMA synthetic data} & \multicolumn{2}{c}{DeepAR synthetic data} \\
\cmidrule(r){2-3} \cmidrule(r){4-5}
& 2 clusters & 3 clusters & 2 clusters & 3 clusters \\
\midrule
MixARMA & {\bf 0.9982}(0.0001) & 0.9509(0.1080) & 0.7995(0.2734) & 0.7687(0.0226) \\
MixSeq & 0.9915(0.0024) & {\bf 0.9540}(0.0974) & {\bf 0.9986}(0.0003) & {\bf 0.8460}(0.0774) \\
MixSeq-infer & 0.9929(0.0027) & 0.9544(0.0975) & 0.9982(0.0006) & 0.8460(0.0775) \\
\bottomrule
\end{tabular}
\end{table}
\section{Experimental results}\label{sec:exp}
We conduct extensive experiments to show the advantage of MixSeq.
We evaluate the clustering performance of MixSeq on synthetic data,
present the results of macroscopic time series forecasting on real-world data,
and analyze the sensitivity of the cluster number of MixSeq.
\subsection{Synthetic datasets}
\label{exp-part1}
To demonstrate MixSeq's capability of clustering microscopic time
series that follow various probabilistic mixture distributions, we conduct
clustering experiments on synthetic data with ground truth.
We generate two kinds of synthetic time series by ARMA~\cite{box2015time} and DeepAR~\cite{salinas2020deepar} respectively.
For each model, we experiment with different numbers of clusters (2 and 3)
generated with components governed by different parameters.
{\bfseries Experiment setting.}
To generate data from ARMA, we use ARMA(2, 0) and
$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \epsilon_t$
with $\epsilon_t \sim N(0,0.27)$. We set parameters $[\phi_1, \phi_2]$ for three
components as $[-0.25, 0.52]$, $[0.34, 0.27]$, and $[1.5, -0.75]$
respectively. The synthetic time series from a mixture of $2$ components
are generated using the first two components. The synthetic time series
from a mixture of $3$ components are generated using all $3$ components.
To generate data from DeepAR, we use a DeepAR model with one LSTM layer
of $16$ hidden units. Since it is difficult to
randomly initialize the parameters of DeepAR, we train a base model on the
real-world Wiki dataset~\cite{tran2021radflow} (discussed in section~\ref{exp-part2}).
To build the other two DeepAR components, we respectively add random
disturbance $\mathcal{N}(0,0.01)$ to the parameters of the base model.
For each cluster, we generate $10,000$ time series with random initialized
sequences, and set the length of time series as $100$.
We use 1-layer causal convolution Transformer (ConvTrans \cite{li2019enhancing})
as our backbone model in MixSeq. We use the following parameters unless otherwise stated.
We set the number of multi-heads as $2$, kernel size as $3$, the number of kernels for causal convolution $d_k=16$, dropout rate as $0.1$, the penalty weight on the $\ell_2$-norm regularizer as 1e-5, and $d_p=d_v=16$. Meanwhile, we set the prior $p(z)$ as $1/K, \, \forall z$.
For the training parameters, we set the learning rate as 1e-4, batch size as
$256$ and epochs as $100$. Furthermore, the $\alpha$ in MixSeq is annealed
using the schedule $\alpha = \mathrm{max}(5, 20e^{(-0.03n)})$, where
$n$ denotes the current epoch, and $\alpha$ is updated in the $[10,30,50]$-th epochs.
For comparison, we employ MixARMA~\cite{xiong2004time}, a mixture of
ARMA(2, 0) model optimized by EM algorithm~\cite{blei2017variational},
as our baseline. Both methods are evaluated using Rand Index (RI)~\cite{rand1971objective}
(more details in supplementary).
{\bfseries Experiment results.}
We show the clustering performance of MixSeq and MixARMA on the synthetic data
in Table~\ref{tab-syndatares}. The results are given by the average of 5
trials. Regarding the synthetic data from ARMA, both MixSeq and MixARMA perform very
well. However, for the synthetic data from DeepAR, MixARMA degrades significantly
while MixSeq achieves much better performance. This suggests that MixSeq can
capture the complex nonlinear characteristics of time series generated by
DeepAR when MixARMA fails to do so. Furthermore, we also generate new time
series by the corresponding ARMA and DeepAR models, and infer their clusters
with the trained MixSeq model. The performance is comparable with the
training performance, which demonstrates that MixSeq indeed captures the
generative mechanism of the time series.
\subsection{Real-world datasets}
\label{exp-part2}
We further evaluate the effectiveness of our model on the macroscopic time series
forecasting task. We compare MixSeq with existing clustering methods and
state-of-the-art time series forecasting approaches on several real-world
datasets. Specifically, for each dataset, the goal is to forecast the
macroscopic time series aggregated by all microscopic data. We cluster
microscopic time series into groups, and aggregate the time series in each
group to form the clustered time series. Then, we train the forecasting models
on the clustered time series separately, and give predictions of each clustered
time series. Finally, the estimation of macroscopic time series is obtained by
aggregating all the predictions of clustered time series.
We report results on three real-world datasets, including
Rossmann\footnote{https://www.kaggle.com/c/rossmann-store-sales},
M5\footnote{https://www.kaggle.com/c/m5-forecasting-accuracy} and
Wiki~\cite{tran2021radflow}. The Rossmann dataset consists of historical
sales data of $1,115$ Rossmann stores recorded every day.
Similarly, the M5 dataset consists of $30,490$ microscopic time series as
the daily sales of different products in ten Walmart stores in USA.
The Wiki dataset contains $309,765$ microscopic time series
representing the number of daily views of different Wikipedia articles.
The dataset summary is shown in Table \ref{tab-realdata}, together with
the setting of data splits.
\begin{table}
\caption{Real-world dataset summary.}
\label{tab-realdata}
\centering
\begin{tabular}{lllll}
\toprule
dataset & \makecell[l]{\# microscopic\\time series} & \makecell[l]{length of\\time series} & train interval & test interval \\
\midrule
Rossmann & 1115 & 942 & 20130101-20141231 & 20150101-20150731 \\
M5 & 30490 & 1941 & 20110129-20160101 & 20160101-20160619 \\
Wiki & 309765 & 1827 & 20150701-20191231 & 20200101-20200630 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Comparisons on the microscopic time series clustering methods for macroscopic time series forecasting combined with three network-based forecasting methods: testing $R_{0.5}$/$R_{0.9}$-loss on three real-world datasets. Lower is better.}
\label{tab-realdatarloss}
\centering
\begin{tabular}{llllll}
\toprule
& & Macro & DTCR & MixARMA & MixSeq \\
\midrule
& DeepAR & 0.1904/{\bf 0.0869} & 0.2292/0.1432 & 0.1981/0.1300 & {\bf 0.1857}/0.0987 \\
Rossmann & TCN & 0.1866/0.1005 & 0.2023/0.1633 & 0.1861/0.1160 & {\bf 0.1728/0.0997} \\
& ConvTrans & 0.1861/0.0822 & 0.2077/0.0930 & 0.1866/0.0854 & {\bf 0.1847/0.0813} \\
\midrule
& DeepAR & {\bf 0.0548/0.0289} & 0.0787/0.0627 & 0.0624/0.0582 & 0.0582/0.0445 \\
M5 & TCN & 0.0790/0.0635 & 0.0847/0.0805 & 0.0762/0.0789 & {\bf 0.0694/0.0508} \\
& ConvTrans & 0.0553/0.0260 & 0.0514/0.0260 & 0.0497/0.0257 & {\bf 0.0460/0.0238} \\
\midrule
& DeepAR & 0.0958/0.0962 & 0.1073/0.1336 & 0.0974/0.1070 & {\bf 0.0939/0.0901} \\
Wiki & TCN & 0.0966/0.1064 & 0.1237/0.1480 & 0.0963/0.1218 & {\bf 0.0886/0.0980} \\
& ConvTrans & 0.0968/0.0589 & 0.1029/0.0531 & 0.0961/0.0594 & {\bf 0.0901/0.0516} \\
\bottomrule
\end{tabular}
\end{table}
{\bfseries Experiment setting.}
We summarize the clustering strategies for macroscopic time series
forecasting as follows. \textbf{(1)} ``DTCR''~\cite{ma2019learning}
is the deep temporal clustering representation method which integrates
the temporal reconstruction, K-means objective and auxiliary classification
task into a single Seq2seq model. \textbf{(2)} ``MixARMA''~\cite{xiong2004time}
is the mixture of ARMA model that uses ARMA to capture the characteristics
of microscopic time series. \textbf{(3)} ``MixSeq'' is our model with 1-layer
causal convolution Transformer~\cite{li2019enhancing}. \textbf{(4)} We also
report the results that we directly build forecasting model on the
macroscopic time series without leveraging the microscopic data,
named as ``Macro''.
For time series forecasting, we implement five methods combined with
each clustering strategy, including ARMA~\cite{box2015time},
Prophet~\cite{taylor2018forecasting}, DeepAR~\cite{salinas2020deepar},
TCN~\cite{bai2018empirical}, and ConvTrans~\cite{li2019enhancing}.
ARMA and Prophet give the prediction of point-wise value for time series,
while DeepAR, TCN and ConvTrans are methods based on neural network for probabilistic
forecasting with Gaussian distribution. We use the rolling window strategy on
the test interval, and compare different methods in terms of the long-term
forecasting performance for $30$ days. The data of last two months in train interval
are used as validation data to find the optimal model.
We do grid search for the following hyperparameters in clustering and forecasting
algorithms, i.e., the number of clusters $\{3,5,7\}$, the learning rate
$\{0.001, 0.0001\}$, the penalty weight on the $\ell_2$-norm regularizers
$\{1e-5,5e-5\}$, and the dropout rate $\{0,0.1\}$. The model with best
validation performance is applied for obtaining the results on test interval.
Meanwhile, we set batch size as $128$, and the number of training epochs as $300$
for Rossmann, $50$ for M5 and $20$ for Wiki. For DTCR, we use the same setting as~\cite{ma2019learning}.
Regarding time series forecasting models, we apply the default setting to ARMA and
Prophet provided by the Python packages. The architectures of DeepAR, TCN and ConvTrans
are as follows. The number of layers and hidden units are $1$ and $16$ for
DeepAR. The number of multi-heads and kernel size are $2$ and $3$ for
ConvTrans. The kernel size is $3$ for TCN with dilations in $[1,2,4,8]$.
We also set batch size as $128$ and the number of epochs as $500$ for all
forecasting methods.
{\bfseries Experiment results.}
Following \cite{li2019enhancing, rangapuram2018deep, tran2021radflow}, we
evaluate the experimental methods using SMAPE and $\rho$-quantile loss
$R_{\rho}$\footnote{Detailed definition is in supplementary.} with $\rho \in (0,1)$. The SMAPE results of all combination
of clustering and forecasting methods are given in Table~\ref{tab-realdatasmape}.
Table \ref{tab-realdatarloss} shows the $R_{0.5}/R_{0.9}$-loss for DeepAR,
TCN and ConvTrans which give probabilistic forecasts. All results are
run in $5$ trials. The best performance is highlighted by bold character.
We observe that MixSeq is superior to other three methods, suggesting that
clustering microscopic time series by our model is able to improve the
estimation of macroscopic time series. Meanwhile, Macro and MixARMA have comparable performance and are better than DTCR, which further demonstrates the effectiveness
of our method, i.e., only proper clustering methods are conducive to
macroscopic time series forecasting.
\begin{table}
\caption{Comparisons on the microscopic time series clustering methods for macroscopic time series forecasting combined with five forecasting methods: testing SMAPE on three real-world datasets.}
\label{tab-realdatasmape}
\centering
\begin{tabular}{llllll}
\toprule
& & Macro & DTCR & MixARMA & MixSeq \\
\midrule
& ARMA & 0.2739(0.0002) & 0.2735(0.0106) & 0.2736(0.0013) & {\bf 0.2733}(0.0012) \\
& Prophet & 0.1904(0.0007) & {\bf 0.1738}(0.0137) & 0.1743(0.0037) & 0.1743(0.0026) \\
Rossmann & DeepAR & 0.1026(0.0081) & 0.1626(0.0117) & 0.1143(0.0088) & {\bf 0.0975}(0.0013) \\
& TCN & 0.1085(0.0155) & 0.1353(0.0254) & 0.1427(0.0180) & {\bf 0.1027}(0.0075) \\
& ConvTrans & 0.1028(0.0091) & 0.1731(0.0225) & 0.1022(0.0041) & {\bf 0.0961}(0.0019) \\
\midrule
& ARMA & {\bf 0.0540}(0.0001) & 0.0544(0.0018) & 0.0541(0.0003) & 0.0543(0.0001) \\
& Prophet & 0.0271(0.0003) & 0.0271(0.0003) & 0.0269(0.0002) & {\bf 0.0267}(0.0002) \\
M5 & DeepAR & {\bf 0.0278}(0.0034) & 0.0410(0.0046) & 0.0319(0.0063) & 0.0298(0.0029) \\
& TCN & 0.0412(0.0075) & 0.0447(0.0044) & 0.0395(0.0094) & {\bf 0.0358}(0.0014) \\
& ConvTrans & 0.0274(0.0048) & 0.0253(0.0020) & 0.0245(0.0024) & {\bf 0.0227}(0.0006) \\
\midrule
& ARMA & {\bf 0.0362}(0.0001) & 0.0363(0.0006) & 0.0364(0.0005) & {\bf 0.0362}(0.0002) \\
& Prophet & {\bf 0.0413}(0.0001) & 0.0423(0.0008) & 0.0434(0.0003) & 0.0420(0.0005) \\
Wiki & DeepAR & 0.0481(0.0008) & 0.0552(0.0015) & 0.0489(0.0006) & {\bf 0.0470}(0.0002) \\
& TCN & 0.0494(0.0076) & 0.0654(0.0022) & 0.0491(0.0015) & {\bf 0.0446}(0.0023) \\
& ConvTrans & 0.0471(0.0029) & 0.0497(0.0012) & 0.0466(0.0001) & {\bf 0.0440}(0.0010) \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Sensitivity analysis of cluster number}
\label{exp-part3}
\begin{figure}
\caption{The macroscopic time series forecasting performance based on MixSeq with different cluster number $K$ on three real-world datasets. The time series forecasting method is fixed as causal convolution Transformer. The top three figures show the $R_{0.5}$-loss and the bottom three show the $R_{0.9}$-loss.}
\label{fig:diffk}
\end{figure}
The cluster number $K$ is a critical hyperparameter of MixSeq. To analyze
its effect on the forecasting of macroscopic time series, we conduct
experiments on both synthetic data and real-world data.
The results show the importance of setting a proper number of
clusters; we suggest performing a binary search on this critical hyperparameter.
Details are as follows.
{\bfseries Synthetic data.}
Following the experimental setting in section~\ref{exp-part1}, we generate $10,000$
microscopic time series from $3$ different parameterized ARMA respectively. That is, the
ground truth number of clusters is $3$ and there are $30,000$ microscopic samples in
total. The aggregation of all samples is the macroscopic time series which is the
forecasting target of interest. Then, we compare the forecasting performance between
the method that directly forecasts macroscopic time series (denoted as Macro) and our
method with different cluster numbers (including $2$, $3$, $5$, denoted as MixSeq\_2,
MixSeq\_3 and MixSeq\_5 respectively). We fix the forecasting method as ARMA, and apply
the rolling window approach for T+10 forecasting in the last $40$ time steps.
The average SMAPE over $5$ trials is $0.807$, $0.774$, $0.731$ and $0.756$ for Macro, MixSeq\_2, MixSeq\_3 and MixSeq\_5
respectively. MixSeq\_3, which uses the ground truth cluster number, shows the best
performance, while MixSeq\_2 and MixSeq\_5 degrade though still outperform
Macro. This result shows the importance of setting a proper number of clusters.
{\bfseries Real-world data.}
We do the evaluation on three real-world datasets by varying the cluster number $K$ of
MixSeq while maintaining the other parameters fixed. For Rossmann and M5 datasets, we set
the cluster number $K \in \{3,5,7\}$, while we explore the cluster number
$K \in \{5,10,15\}$ on the Wiki dataset. The architecture and training parameters of MixSeq are the same
as in section~\ref{exp-part2}, except that we set the dropout rate as $0.1$, the penalty weight on the
$\ell_2$-norm regularizer as 5e-5, and the learning rate as 1e-4. Meanwhile,
we also fix the time series forecasting method as causal convolution Transformer (ConvTrans).
Figure~\ref{fig:diffk} reports the macroscopic time series forecasting performance (testing
on $R_{0.5}$ and $R_{0.9}$ loss) based on MixSeq with different cluster number $K$
on three real-world datasets. The horizontal dashed lines are the results with $K=1$,
i.e., directly building a ConvTrans model on the macroscopic time series without
leveraging the microscopic data (named ``Macro'' in section~\ref{exp-part2}). Clearly,
each dataset has its own suitable number of clusters, and our method is relatively sensitive to
$K$, especially on datasets with fewer microscopic time series, such as Rossmann.
As with accurately modeling each microscopic time series, a larger cluster number
$K$ in MixSeq also brings larger variance to macroscopic time series forecasting, which
degrades the performance of our method.
\section{Conclusion}
In this paper, we study whether macroscopic time series forecasting
can be improved by leveraging microscopic time series. Under mild assumptions of
mixture models, we show that appropriately clustering microscopic time series
into groups is conducive to the forecasting of macroscopic time series. We
propose MixSeq to cluster microscopic time series, where all the
components come from a family of Seq2seq models parameterized with different
parameters. We also propose an efficient stochastic auto-encoding variational
Bayesian algorithm for the posterior inference and learning for MixSeq.
Our experiments on both synthetic and real-world data suggest that MixSeq
can capture the characteristics of time series in different groups and
improve the forecasting performance of macroscopic time series.
\end{ack}
\appendix
\setcounter{proposition}{0}
\section{Proofs}
\begin{proposition}
Assume a mixture model with probability density function $f(x)$ and corresponding components $\left\{f_i(x)\right\}_{i=1}^K$ with constants $\left\{p_i\right\}_{i=1}^K$ ($\left\{p_i\right\}_{i=1}^K$ lie in a simplex), such that $f(x) = \sum_i p_i f_i(x)$.
Provided that $f(\cdot)$ and $\left\{f_i(\cdot)\right\}_{i=1}^K$ have first and second moments, i.e., $\mu^{(1)}$ and $\mu^{(2)}$ for $f(x)$,
and $\left\{\mu_i^{(1)}\right\}_{i=1}^K$ and $\left\{\mu_i^{(2)}\right\}_{i=1}^K$ for components $\left\{f_i(x)\right\}_{i=1}^K$, we have:
\begin{align}
\sum_i p_i\cdot \mathrm{Var}(f_i) \leq \mathrm{Var}(f).
\end{align}
\end{proposition}
\begin{proof}
We prove the result based on the fact that we have for any moment $k$ that
\begin{equation}
\begin{aligned}
\mu^{(k)} = \mathbb{E}_f\left[x^k\right] = \sum_i p_i \mathbb{E}_{f_i}\left[x^k\right] = \sum_i p_i \mu_i^{(k)}.
\end{aligned}
\end{equation}
We then derive the variance of mixture as
\begin{equation}
\begin{aligned}
\mathrm{Var}(f) &= \sum_i p_i \mu_i^{(2)} - \left( \sum_i p_i \mu_i^{(1)} \right)^2
= \sum_i p_i\left(\mathrm{Var}(f_i) + \left( \mu_i^{(1)} \right)^2 \right) - \left( \sum_i p_i \mu_i^{(1)} \right)^2\\
&= \sum_i p_i \mathrm{Var}(f_i) + \sum_i p_i \left( \mu_i^{(1)} \right)^2 - \left( \sum_i p_i \mu_i^{(1)} \right)^2.
\end{aligned}
\end{equation}
Since the squared function is convex, by Jensen's Inequality we immediately have
$\sum_i p_i \left( \mu_i^{(1)} \right)^2 \geq \left( \sum_i p_i \mu_i^{(1)} \right)^2$.
\end{proof}
\section{Complexity analysis and running time}
{\bfseries Complexity analysis of MixSeq.}
The complexity of MixSeq depends on the number of clusters $K$ and three network architectures,
including convolution, multi-head self-attention and MLP. For time series
$x \in \mathbb{R}^{t\times d}$, where $t$ and $d$ are the length and dimension of
time series respectively, the FLOPs (floating point operations) of convolution is $O(w_k\cdot d\cdot t\cdot d_k)$,
where $w_k$ and $d_k$ are the size and number of convolution kernel. The FLOPs of multi-head
self-attention is $O(h\cdot t^2\cdot d_k)$, where $h$ is the number of multi-heads. The FLOPs of MLP is
$O(h\cdot t\cdot d_k^2)$. Finally, the FLOPs of MixSeq is $O(K(w_k\cdot d\cdot t\cdot d_k+h\cdot t^2\cdot d_k+h\cdot t\cdot d_k^2))$.
Since $w_k$, $d_k$ and $d$ are usually smaller than $t$, the time complexity can be
simplified to $O(K\cdot h\cdot t^2\cdot d_k)$, which is similar to the Transformer.
Time series data are typically recorded daily or hourly, so even three years of data
contain only about one thousand values, which our method handles comfortably.
Furthermore, some existing methods can also be used in MixSeq to accelerate the
computation of self-attention.
{\bfseries Running time of macroscopic time series forecasting.}
The overall running time of forecasting macroscopic data with MixSeq comprises two steps.
\textbf{(1)} The first step is clustering with MixSeq. Figure~\ref{fig:conv} shows
the convergence of MixSeq over time (seconds) compared with competing approaches to
time series clustering. Our approach takes about $200$ seconds to converge on the dataset
containing $1,115$ microscopic time series, while MixARMA and DTCR take about $20$ and $200$ seconds
to converge respectively. The convergence rate of our method is thus no worse than that of the existing neural-network-based approach, i.e., DTCR. \textbf{(2)} The second step is to forecast the clustered time series with any suitable forecasting model. The time complexity is linear in the number of clustered time series, and this step can always be accelerated by using more workers in parallel.
\begin{figure}
\caption{The convergence of different clustering methods over time (seconds) on the Rossmann dataset.
The optimization objective of MixARMA is maximized by the EM algorithm,
and the optimization objectives of MixSeq and DTCR are minimized by gradient descent with Adam.}
\label{fig:conv}
\end{figure}
\section{Evaluation metrics}
\subsection{Rand index}
Given labels as the clustering ground truth, the Rand index (RI) measures the agreement between the ground truth and the predicted clusters, defined as
\[ \mathrm{RI} = \frac{a+b}{C_m^2}, \]
where $m$ is the total number of samples, $a$ is the number of sample pairs that are in the same cluster and have the same label, and $b$ is the number of sample pairs that are in different clusters and have different labels.
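A direct (quadratic-time) computation of RI from this definition can be sketched as follows; the function name is ours:

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Rand index: (a + b) / C(m, 2) over all sample pairs."""
    pairs = list(combinations(range(len(labels_true)), 2))
    # a: pairs together in both partitions; b: pairs apart in both
    a = sum(1 for i, j in pairs
            if labels_true[i] == labels_true[j] and labels_pred[i] == labels_pred[j])
    b = sum(1 for i, j in pairs
            if labels_true[i] != labels_true[j] and labels_pred[i] != labels_pred[j])
    return (a + b) / len(pairs)
```

Note that RI is invariant under relabeling of the predicted clusters.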
\subsection{Symmetric mean absolute percentage error}
Symmetric mean absolute percentage error (SMAPE) is an accuracy measure for time series forecasting based on percentage (or relative) errors. The variant used here satisfies $\mathrm{SMAPE} \in [0,1]$ and is defined as
\begin{equation*}
\mathrm{SMAPE} = \frac{1}{n} \sum_{t=1}^{n} \frac{\left|x_t - \hat{x}_t\right|}{\left|x_t\right|+\left|\hat{x}_t\right|},
\end{equation*}
where $x_t$ is the actual value and $\hat{x}_t$ is the predicted value for time $1 \le t \le n$, and $n$ is the horizon of time series forecasting.
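A minimal sketch of this variant (which omits the conventional factor of $2$ in the denominator, hence the $[0,1]$ range; it assumes $|x_t|+|\hat{x}_t|>0$ for every $t$):

```python
def smape(actual, pred):
    """SMAPE variant with |x| + |x_hat| in the denominator, in [0, 1]."""
    # assumes no time step has both actual and predicted value equal to zero
    terms = [abs(x - y) / (abs(x) + abs(y)) for x, y in zip(actual, pred)]
    return sum(terms) / len(terms)
```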
\subsection{$\rho$-quantile loss}
In the experiments, we evaluate different methods by the rolling window
strategy. The target value of macroscopic time series for each dataset is
given as $x_{i,t}$, where $x_i$ is the $i$-th testing sample of macroscopic
time series and $t\in [0,30)$ is the lead time after the forecast start
point. For a given quantile $\rho \in (0,1)$, we denote the predicted
$\rho$-quantile for $x_{i,t}$ as $\hat{x}_{i,t}^{\rho}$. To obtain such a
quantile prediction from the estimation of clustered time series, a set of
predicted samples of each clustered time series is first sampled. Then
each realization is summed and the samples of these sums represent
the estimated distribution for $x_{i,t}$. Finally, we can take the
$\rho$-quantile from the empirical distribution.
The $\rho$-quantile loss is then defined as
\begin{equation*}
R_{\rho}(\mathbf{x}, \hat{\mathbf{x}}^{\rho}) = \frac{2 \sum_{i,t} D_{\rho} (x_{i,t}, \hat{x}_{i,t}^{\rho})}
{\sum_{i,t} \left| x_{i,t} \right|}, \qquad
D_{\rho} (x, \hat{x}^{\rho}) = (\rho - \mathbf{I}_{\{x \le \hat{x}^{\rho}\}})(x - \hat{x}^{\rho})
\end{equation*}
where $\mathbf{I}_{\{x \le \hat{x}^{\rho}\}}$ is an indicator function.
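The loss can be sketched directly from the formula; the inputs are flattened lists of the paired $(x_{i,t}, \hat{x}_{i,t}^{\rho})$ values, and the function name is ours:

```python
def rho_quantile_loss(actual, pred_q, rho):
    """R_rho = 2 * sum D_rho(x, q) / sum |x|, with
    D_rho(x, q) = (rho - 1{x <= q}) * (x - q) >= 0."""
    num = sum((rho - (1.0 if x <= q else 0.0)) * (x - q)
              for x, q in zip(actual, pred_q))
    return 2.0 * num / sum(abs(x) for x in actual)
```

Under-prediction is penalized with weight $\rho$ and over-prediction with weight $1-\rho$, so $\rho=0.5$ recovers (a scaled) absolute error.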
\section{Experiments}
\subsection{Environment setting}
We conduct the experiments on an internal cluster with an $8$-core CPU, $32$G RAM and one P100 GPU.
MixSeq, together with the neural-network-based time series forecasting methods,
is implemented with TensorFlow 1.14.0.
\subsection{Simulation parameters of toy examples}
We use the TimeSynth\footnote{https://github.com/TimeSynth/TimeSynth} python package to
generate simulated time series data. For GP time series, we use the RBF kernel.
The (lengthscale, variance) pairs are $[1.5,2]$, $[0.5,2.5]$ and $[0.5,1]$ for the mixture
of 3 GPs. For the mixture of 5 GPs, we add time series generated by GPs with parameters
$[0.5,0.5]$ and $[2,1]$. Similarly, we use ARMA(2, 0) to generate ARMA time series.
The parameters of the first three components of the ARMA mixture are $[1.5,-0.75]$, $[1,-0.9]$
and $[-0.25,0.52]$ respectively; the parameters of the remaining two components are $[0.34,0.27]$
and $[1,-0.30]$. The initial values of the ARMA series are sampled from $N(0,0.25)$.
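An ARMA(2, 0) process is simply an AR(2) process, so it can also be sampled with the standard library alone; the sketch below (ours, not the TimeSynth implementation) draws initial values from $N(0,0.25)$, i.e. with standard deviation $0.5$, and discards a burn-in prefix:

```python
import random

def simulate_ar2(phi1, phi2, n, burn=200, seed=0):
    """Sample x_t = phi1*x_{t-1} + phi2*x_{t-2} + eps_t, eps_t ~ N(0, 1)."""
    rng = random.Random(seed)
    # initial values from N(0, 0.25): variance 0.25, std 0.5
    x = [rng.gauss(0, 0.5), rng.gauss(0, 0.5)]
    for _ in range(burn + n):
        x.append(phi1 * x[-1] + phi2 * x[-2] + rng.gauss(0, 1))
    return x[-n:]
```

The parameter pair $[1.5,-0.75]$, for instance, gives complex characteristic roots of modulus $\sqrt{0.75}<1$, so that component is stationary.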
\section{Societal impacts}
We study whether macroscopic time series forecasting
can be improved by leveraging microscopic time series, and propose MixSeq,
which clusters microscopic time series to improve the forecasting of macroscopic time series.
This work will be especially useful for financial institutions and e-commerce platforms,
e.g., for loan forecasting, balance forecasting, and Gross Merchandise Volume (GMV) forecasting.
Such forecasts can support business decisions such as controlling the risk of a financial institution
or guiding lending to merchants on an e-commerce platform.
Misuse of microscopic data could lead to privacy issues;
as such, protecting microscopic data with privacy-preserving techniques is important.
\end{document}
|
math
|
because every householder of his dies on the very first night of staying with him
|
kashmiri
|
"use strict";
/* global global: false */
var console = require("console");
var ko = require("knockout");
var $ = require("jquery");
var lsCommandPluginFactory = function(md, emailProcessorBackend) {
var commandsPlugin = function(md, viewModel) {
// console.log("loading from metadata", md, model);
var saveCmd = {
name: 'Save', // l10n happens in the template
enabled: ko.observable(true)
};
saveCmd.execute = function() {
saveCmd.enabled(false);
viewModel.metadata.changed = Date.now();
if (typeof viewModel.metadata.key == 'undefined') {
console.warn("Unable to find key in metadata object...", viewModel.metadata);
viewModel.metadata.key = md.key;
}
global.localStorage.setItem("metadata-" + md.key, viewModel.exportMetadata());
global.localStorage.setItem("template-" + md.key, viewModel.exportJSON());
saveCmd.enabled(true);
};
var testCmd = {
name: 'Test', // l10n happens in the template
enabled: ko.observable(true)
};
var downloadCmd = {
name: 'Download', // l10n happens in the template
enabled: ko.observable(true)
};
testCmd.execute = function() {
testCmd.enabled(false);
var email = global.localStorage.getItem("testemail");
if (email === null || email == 'null') email = viewModel.t('Insert here the recipient email address');
email = global.prompt(viewModel.t("Test email address"), email);
if (email === null) {
// User dismissed the prompt: re-enable the command and bail out.
testCmd.enabled(true);
} else if (email.match(/@/)) {
global.localStorage.setItem("testemail", email);
console.log("TODO testing...", email);
var postUrl = emailProcessorBackend ? emailProcessorBackend : '/dl/';
var post = $.post(postUrl, {
action: 'email',
rcpt: email,
subject: "[test] " + md.key + " - " + md.name,
html: viewModel.exportHTML()
}, null, 'html');
post.fail(function() {
console.log("fail", arguments);
viewModel.notifier.error(viewModel.t('Unexpected error talking to server: contact us!'));
});
post.done(function() {
console.log("success", arguments);
viewModel.notifier.success(viewModel.t("Test email sent..."));
});
post.always(function() {
testCmd.enabled(true);
});
} else {
global.alert(viewModel.t('Invalid email address'));
testCmd.enabled(true);
}
};
downloadCmd.execute = function() {
downloadCmd.enabled(false);
viewModel.notifier.info(viewModel.t("Downloading..."));
viewModel.exportHTMLtoTextarea('#downloadHtmlTextarea');
var postUrl = emailProcessorBackend ? emailProcessorBackend : '/dl/';
global.document.getElementById('downloadForm').setAttribute("action", postUrl);
global.document.getElementById('downloadForm').submit();
downloadCmd.enabled(true);
};
viewModel.save = saveCmd;
viewModel.test = testCmd;
viewModel.download = downloadCmd;
}.bind(undefined, md);
return commandsPlugin;
};
module.exports = lsCommandPluginFactory;
|
code
|
Digital media changes the way in which people access information and connect to each other.
Key Concepts: Function, Causation and Responsibility. Related Concepts: ethics, networks, access, platform and digital citizenship.
To know the strengths and weaknesses of the chosen digital media.
To understand the ways to share information through the chosen digital media.
You are a netizen. You are going to share information through digital media.
WALT: Use different digital media to promote a cause wisely.
|
english
|
When you need a domain name which can be instantly remembered by potential customers, and if you're tired of websites with generic .COM and .NET extensions, a .FAMILY domain name is probably the optimal choice for you. It was launched recently and there are still plenty of attractive domain names available, most probably including the name you wish to register. What's more, you can use the .FAMILY extension to convey the nature of your site directly to visitors and clients.
With Virtual Prime Location you could purchase a .FAMILY domain name for only $21.00 a year.
Make use of Virtual Prime Location's Domain Manager, which allows you to control your .FAMILY domain names from a single location! Every time you use it, editing the WHOIS info or DNS settings of several domains at the same time will be a piece of cake. This outstanding domain name management tool can be found in our custom-built Web Hosting Control Panel.
And, to top it off, you can conveniently manage your domains as well as your websites without ever leaving the Control Panel, provided you own a hosting account with Virtual Prime Location.
|
english
|
Complete Information on Admission, Career, Scope, Jobs and Salary for the M.Sc. IT Course
By Kavita Sharma - April 13, 2017. If you want to pursue an M.Sc. IT course after graduation, complete information about the course is given here.
It is a 2-year postgraduate degree course. During these 2 years, students are given both theoretical and practical knowledge in the field of IT (Information Technology). As with the B.Sc. IT, this course follows the semester system: the 2 years are divided into 4 semesters.
Admission to the M.Sc. IT course: the admission process is given below.
Eligibility: Since this is a PG course, students must hold a bachelor's degree (a course of at least 3 years) from a recognized university/institute to pursue it. Minimum marks criteria also exist, but they differ across universities/institutes.
Those who want admission to the M.Sc. IT should also hold a relevant bachelor's degree. Some examples: a B.Sc. IT degree, a BE/B.Tech IT degree, a B.Com in Computer Science or IT, etc.
After completing an M.Sc., a student's qualifications improve. Moreover, choosing a specialization ensures that students become experts in a particular discipline, and this expertise helps in finding a better job in that field. An M.Sc. degree also paves the way for further study such as a Ph.D., and thus for building a career in the attractive research and development sector.
Top colleges/universities in India for the M.Sc. IT course:
Scope/career and jobs after the M.Sc. IT course:
An M.Sc. course naturally enhances a graduate's skills and thus opens new doors of opportunity. After the course, one can find a job in the private as well as the government sector. Teaching is also a good career option.
Some employment areas after the M.Sc. IT course:
Approximate fees for the M.Sc. IT course: Depending on the college's norms, the average fee for this course is Rs. 80,000 to Rs. 3 lakh. In most colleges, admission is through a merit list based on the bachelor's degree, but in some colleges admission is based on an entrance exam.
Approximate salary after the M.Sc. IT course:
The average starting salary after an M.Sc. course can be around Rs. 20,000-35,000 per month.
|
hindi
|
Drawings and captions reflecting life lessons on ways to stop killing your soul. Reminders, guides, stepping stones, and examples for the many of us who, having faced dysfunctional upbringings, unaddressed or unhealed traumas and injuries, or painful relationships, struggle with self-loathing, insecurity, fear, toxic relationships, and feeling trapped or stuck. From a series by artist Elwing Suong Gonzalez (@elwingbling).
|
english
|
\begin{document}
\begin{abstract}
For every integer $k>1$, there are infinitely many odd integers $h$
with $\omega(h) =k$ distinct prime divisors such that there
is no circulant Hadamard matrix $H$ of order $n=4h^2.$
Moreover, our main result implies that for
every odd $h$ with $1< h < 10^{13}$ there is no circulant Hadamard matrix of order $n=4h^2.$
\end{abstract}
\maketitle
\section{Introduction}
A complex matrix $H$ of order $n$ is \emph{complex Hadamard} if $HH^{*} = nI_n$, where $I_n$ is the identity matrix of order $n$,
and if every entry of $H/\sqrt{n}$
lies on the complex unit circle. Here, the $*$ denotes the conjugate transpose.
When such an $H$ has real entries, so that $H$ is a $\{-1,1\}$-matrix, $H$ is called \emph{Hadamard}. If $H$ is Hadamard
and circulant, say $H=circ(h_1,\ldots,h_n)$, the $i$-th row $H_i$ of $H$ is given by
$H_i = [h_{1-i+1},\ldots, h_{n-i+1}]$, the subscripts being taken modulo $n$; for example
$H_2 =[h_n,h_1,h_2, \ldots,h_{n-1}].$ A long-standing conjecture of Ryser (see \cite[pp. 134]{ryser})
is:
\begin{conjecture}
\label{mainryser}
Let $n \geq 4.$ If $H$ is a circulant Hadamard matrix of order $n$, then $n=4.$
\end{conjecture}
Details about previous results on the conjecture and a short sample of recent related papers can be found in
\cite{turyn1}, \cite{Leung}, \cite{BorMoss1}, \cite{BorMoss2}, \cite{Craigen},
\cite{brualdi}, \cite{EGR}, \cite{lhg} and the bibliography therein.
The object of the present paper is to substantially extend the range of known $n$'s for which Ryser's Conjecture holds.
Our main result is:
\begin{theorem}
\label{mainar}
For every integer $k>1$, there are infinitely many odd integers $h$
with $\omega(h) =k$ distinct prime divisors such that there
is no circulant Hadamard matrix $H$ of order $n=4h^2.$
\end{theorem}
Our result is a simple consequence (see Lemma \ref{maincomp}) of a deep result of Arasu
(see \cite[Theorem 4, part (i)]{Arasu} and Lemma \ref{arasumain} below).
By using the Hadamard--Barker data on the web site of M. Mossinghoff (see \cite{webmoss}) and our
key Lemma \ref{maincomp} below, we are also able to prove (by computer computations) the following new
result.
\begin{proposition}
\label{computA}
Let $S$ be the set of all integers $n=4h^2$ with $h$ odd
and $1<h < 10^{13}.$
For every $n \in S$ there is no circulant Hadamard matrix of order $n.$
\end{proposition}
These results imply corresponding results on the existence of Barker sequences (see Section \ref{barkers}).
Of course, Lemma \ref{maincomp} combined with results obtained by other methods makes it possible
to improve the numerical results herein, for example by applying the lemma to the already known $h$'s satisfying $h < 10^{24}$
(see some of them on the web site already cited) for which all other methods have failed. Two more examples: (a)
In about $6$ seconds of computation on an old computer, Lemma \ref{maincomp} eliminated the only known remaining candidate $h$, namely
$h = 31540455528264605$, for which a Barker sequence of length $4h^2$ with $13 < 4h^2 < 10^{33}$ might exist (see \cite[Theorem 1]{BorMoss2}).
(b) In about $3$ seconds the first $6$ values of $h$ between $10^{16.5}$ and $5\cdot 10^{24}$
$$
[66687671978077825,866939735715011725,1293740836374709805,
$$
$$6468704181873549025,
16818630872871227465,
84093154364356137325];
$$
(out of $18$) in \cite[Table $2$]{BorMoss2} were eliminated in the same way. However, there are values of $h$
that satisfy all the assumptions of Lemma \ref{maincomp} besides the assumption on the possible existence of $H$:
some experiments with small values of $h$, say $h \leq 10000,$ suggest that, at least in this range,
for about $5\%$ of the $h$'s
all the orders appearing in the lemma are odd.
\section{Some tools}
First of all we recall the notion of a weighing matrix.
\begin{definition}
\label{weig}
Let $n$ and $k$ be positive integers. A \emph{weighing} matrix $W$ of order $n$
and weight $k$ is an $n \times n$ matrix with all its entries in the set $\{-1,0,1\}$ such that
$$
W W^{T} = k I_n,
$$
where ``$T$'' denotes the transpose and $I_n$ is the identity matrix of order $n.$
\end{definition}
We recall the result of Arasu (\cite[Part (i) of Theorem 4]{Arasu}).
\begin{lemma}
\label{arasumain}
Let $n,k$ be positive integers such that $n=p^a\cdot m$, $k = p^{2b} \cdot u^2$, where $a,b,m,u$ are
positive integers such that the prime number $p$ does not divide $m$
and $p$ does not divide $u.$
Assume that there exists an integer $t$ such that
$$
p^t \equiv -1 \pmod{m}.
$$
If there exists a weighing matrix $W$ of order $n$ and of weight $k$ that is circulant then
$p=2$ and $b=1.$
\end{lemma}
We use the obvious decomposition below of a circulant Hadamard matrix of even order $n$ into four blocks of order $n/2$
(see \cite{lhg} for another result based on the same decomposition) in order to build a weighing matrix attached to $H$.
\begin{lemma}
\label{weighing}
Let $H = circ(h_1, \ldots,h_n)$ be a circulant Hadamard matrix of order $n.$ Then
\begin{itemize}
\item[(a)]
\begin{equation*}
H=
\begin{bmatrix}
A & B\\
B & A\\
\end{bmatrix}
\end{equation*}
where $A,B$ are matrices of order $\frac{n}{2}.$
\item[(b)]
$K=A+B$ is circulant with entries in $\{-2,0,2\}.$
\end{itemize}
\end{lemma}
We build now the weighing matrix.
\begin{lemma}
\label{weighh}
Let $h$ be an odd positive integer. Assume that $H$ is a circulant Hadamard matrix of
order $n,$ where $n=4h^2.$ Then, there exists a weighing matrix $C$ of order $n/2$ and weight $n/4.$
\end{lemma}
\begin{proof}
Set $C = \frac{A+B}{2}$ where $A$ and $B$ are defined by Lemma \ref{weighing}. One has then that $C$
is circulant, of order $n/2 = 2h^2$ with all its entries in $\{-1,0,1\}.$ From $H H^{*} = n I_n,$ one gets by block multiplication
$AA^{*}+BB^{*} = n I_{n/2}$ and $AB^{*}+BA^{*} =0.$ Thus,
\begin{equation}
\label{weic}
4 \cdot CC^{*}= AA^{*}+AB^{*}+BA^{*}+BB^{*} = AA^{*}+BB^{*} =n I_{n/2}.
\end{equation}
It follows from \eqref{weic} that $C$ is a weighing circulant matrix of order $n/2$ and weight $n/4.$
\end{proof}
We are now ready to show our main result from which, (essentially), we will be obtaining all our results.
\begin{lemma}
\label{maincomp}
Let $h$ be an odd positive integer exceeding $1$. Assume that $H$ is a circulant Hadamard matrix of
order $n,$ where $n=4h^2.$ Let $p$ be a prime divisor of $h$ and $r$ be the positive integer such that
$p^r \mid h$ but $p^{r+1} \nmid h.$ Set $s = h/p^r.$ Let $o_{m}(p)$ be the order of $p$ in the multiplicative group
$G = (\mathbb{Z}/m\mathbb{Z})^{*}$
of inversible elements of the ring $\mathbb{Z}/m\mathbb{Z}$, where $m=2s^2.$ Then,
$$
o_m(p)
$$
is an odd number.
\end{lemma}
\begin{proof}
Assume, to the contrary, that $o_{2s^2}(p)$ is even, say $o_{2s^2}(p) =2f.$ Then $p^f \equiv -1 \pmod{2s^2}.$
Then, by Lemma \ref{arasumain} applied to the weighing matrix $C$, of order $n/2$ and weight $n/4$,
defined by Lemma \ref{weighh}
with $a = 2r,$ $b =r,$ $m = 2s^2$, and $u=s$ that are all positive integers, and observing that we have
$\gcd(p,m)=1$ and $\gcd(p,u)=1$, we obtain the contradiction
\begin{equation}
\label{tiersexclu}
p=2.
\end{equation}
This proves the lemma.
\end{proof}
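The parity condition of Lemma \ref{maincomp} is easy to test by machine. The following Python sketch (ours, a counterpart of the Maple routine \texttt{tesp} used later for Proposition \ref{computA}; all names are ours) checks it for a given odd $h$. For instance, it reports that $h=15$ fails the condition, so there is no circulant Hadamard matrix of order $4\cdot 15^2 = 900$.

```python
def prime_factorization(n):
    # trial-division factorization: returns {prime: exponent}
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mult_order(a, m):
    # order of a in (Z/mZ)^*, assuming gcd(a, m) = 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

def lemma_condition_holds(h):
    # True iff for every p^r || h the order of p modulo 2*(h/p^r)^2 is odd;
    # if some order is even, Lemma maincomp excludes a circulant Hadamard
    # matrix of order 4*h^2
    for p, r in prime_factorization(h).items():
        s = h // p**r
        if mult_order(p, 2 * s * s) % 2 == 0:
            return False
    return True
```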
\begin{remark}
\label{obs}
Of course, if in the proof of Lemma \ref{maincomp}
we apply Lemma \ref{arasumain} to the full circulant weighing matrix $H$ (of order $n$ and weight $n$),
instead of applying it to $C$,
we obtain no contradiction.
\end{remark}
In order to complete the results the following simple arithmetic result is key.
\begin{lemma}
\label{gauss}
Let $p$ and $q$ be two odd prime numbers such that the orders $o_q(p),$ the order of $p$ modulo $q,$
and $o_p(q),$ the order of $q$ modulo $p,$ are both odd. Then
$$
{p \overwithdelims () q} =1 \;\;\;\text{and}\;\;\; {q \overwithdelims () p} =1.
$$
where $\cdot \overwithdelims () \cdot$ is the Legendre symbol.
\end{lemma}
\begin{proof}
Since $d := o_q(p)$ is odd, we have $p = p^{d}\cdot p^{1-d} \equiv {\left ({(1/p)}^{\frac{d-1}{2}} \right )}^2 \pmod q,$ so $p$ is a square modulo $q.$ Analogously,
$e := o_p(q)$ odd implies $q \equiv {\left ({(1/q)}^{\frac{e-1}{2}}\right)}^2 \pmod p.$ The result follows.
\end{proof}
\section{ Proof of Theorem \ref{mainar} and of Proposition \ref{computA}}
\subsection{Proof of Theorem \ref{mainar}}
Assume that there are only finitely many such odd integers $h.$ Then, by Lemma \ref{maincomp},
there exists some odd positive integer
$h_0$ such that for every odd integer $h$ with $h \geq h_0$, every prime number $p$ such that $p \mid h$,
say, $h = p^r \cdot s$ with $p^{r+1} \nmid h,$ satisfies
\begin{equation}
\label{oor}
o_{2s^2}(p)\;\;\text{is odd}.
\end{equation}
Thus, \eqref{oor} implies that for every odd prime divisor $q$ of $2s^2$ one has
\begin{equation}
\label{oor1}
o_{q}(p)\;\;\text{is odd}.
\end{equation}
Now write $h = q^{t}d$ with $q \nmid d.$ One also has
\begin{equation}
\label{oor2}
o_{2d^2}(q)\;\;\text{is odd}.
\end{equation}
Thus, for every odd prime divisor $r$ of $2d^2$ one has
\begin{equation}
\label{oor3}
o_{r}(q)\;\;\text{is odd}.
\end{equation}
Choose $r=p$ in \eqref{oor3}. One gets
\begin{equation}
\label{oor4}
o_{p}(q)\;\;\text{is odd}.
\end{equation}
This implies, by Lemma \ref{gauss}, that ${p \overwithdelims () q} =1$ and that
${q \overwithdelims () p} =1$ for any two distinct prime factors $p$ and $q$ of $h.$ But this is false, since we can always choose
two distinct primes $p_1$ and $p_2$, both larger than $h_0$, with, e.g.,
$$
{p_1 \overwithdelims () p_2} =1 \;\;\;\text{and}\;\;\; {p_2 \overwithdelims () p_1} =-1,
$$
and take
$$
h = p_1 \cdot p_2 \cdots p_k
$$
with arbitrary further distinct odd primes $p_3, \ldots, p_k$ (when $k>2$).
This proves the theorem.
\subsection{Proof of Proposition \ref{computA}}
It is known (see \cite{BorMoss2}) that the result holds for all elements of $S$ except possibly for a subset $T$
containing $1371$ elements $h$. Using Lemma \ref{maincomp} and a
straightforward computer program (included below for completeness) that checks the conclusion
of the above lemma for each of these $h$'s, we obtained the result after about $7$ minutes of computation.
Here is the program used:
\begin{verbatim}
# n's 4*h**2, with constraints on its odd prime divisors
with(numtheory):
tesp := proc(h)
local p,m,par,pris,el,mo,rr;
pris := ifactors(h); pris := op(2,pris);
if nops(pris) = 1 then RETURN(0); fi;
for par in pris do
p := op(1,par); m := op(2,par); el := iquo(h,p**m); mo := 2*el**2;
rr := order(p,mo);
if modp(rr,2) = 0 then RETURN(0); fi;
od;
RETURN(1);
end;
# checks the 1371 elements of the list uvals
seelm := proc()
local p,lis,c,st;
st := time(); c := 0; lis :=[];
read(mike1):
for p in uvals do
if tesp(p) = 1 then print([c,[1371],p]); lis := [p,op(lis)]; fi;
c := c+1;
if modp(c,100) = 0 then print([time() -st,c]) fi;
od;
lis;
end;
# the actual program run is:
interface(prettyprint=0):
interface(quiet=true): st := time(); time() -st;
st := time(); z := seelm(); time() -st;
quit;
\end{verbatim}
\section{Barker sequences}
\label{barkers}
Suppose $x_1,x_2, \ldots, x_n$ is a sequence of $1$'s and $-1$'s. We recall the following definition.
\begin{definition}
\label{barkerD}
A sequence $c_1,c_2, \ldots, c_{n-1},$ where
$$
c_j = \sum_{i=1}^{n-j} x_i \cdot x_{i+j}
$$
and the subscripts are defined modulo $n$, is called a \emph{Barker} sequence of length $n$ provided
$c_j \in \{-1,0,1\},$ for all $j=1,2, \ldots n-1.$
\end{definition}
The main known result is the following, (see \cite{turyn1}, \cite{shalomK}).
\begin{lemma}
\label{barkerk}
If there exists a Barker sequence of length $n>13$ then there exists a circulant Hadamard matrix of order $n.$
\end{lemma}
\begin{corollary}
\label{barkerdone}
For infinitely many odd integers $h$ with an arbitrary fixed number $k>1$ of distinct prime divisors
there does not exist a Barker sequence of length $4h^2 >13.$ Moreover, there does not exist
a Barker sequence of length $4h^2 >13$ for any odd integer $h$ such that
$1<h<10^{13}.$
\end{corollary}
\begin{proof}
This follows from Lemma \ref{barkerk}, Theorem \ref{mainar} and Proposition \ref{computA}.
\end{proof}
\end{document}
|
math
|
\begin{document}
\title{Self joinings of rigid rank one transformations arise as strong operator topology limits of convex combinations of powers}
The following result is a straightforward modification of \cite[Section 2]{CE}, by the first named author and A. Eskin, included for ease of future reference. What is below is an edited version of that section, with some added (but straightforward to address) difficulties handled so that the assumptions are conjugacy invariant and therefore hold for a residual set of measure preserving transformations. For connections to the work of others, see that paper.
Let $([0,1],\mathcal{M},\lambda, T)$ be an ergodic invertible transformation. We say it is
\emph{rigid rank 1} if there exist numbers $n_j$ and measurable sets $A_j$ such that
\begin{enumerate}
\item\label{cond:big tower} $\underset{j \to \infty}{\lim}\, \lambda(\bigcup_{i=0}^{n_j-1}T^iA_j)=1$,
\item The sets $A_j, TA_j,\ldots,T^{n_j-1}A_j$ are pairwise disjoint.
\item $\underset{j \to \infty}{\lim}\, \frac{\lambda(T^{n_j}A_j \cap A_j)}{\lambda(A_j)}= 1$.
\item\label{cond:bunched} For all $\varepsilon>0$ there exist metric balls $B^{(j)}_0,\ldots,B^{(j)}_{n_j-1}\subset [0,1]$ of diameter at most $\varepsilon$ such that
$$\underset{j \to \infty}{\lim}\, \sum_{i=0}^{n_j-1}\lambda(T^iA_j\setminus B^{(j)}_i)=0.$$
\end{enumerate}
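For a concrete, purely numerical illustration (ours, not part of the argument): rotation by the golden mean is a standard rigid rank-one transformation, and its rigidity times in condition 3 can be taken along the Fibonacci numbers $F_k$, since $\|F_k\alpha\|\leq 1/F_{k+1}$ makes $T^{F_k}$ close to the identity.

```python
def circle_dist(x):
    # distance from x to the nearest integer, i.e. ||x|| on the circle R/Z
    f = x % 1.0
    return min(f, 1.0 - f)

alpha = (5 ** 0.5 - 1) / 2   # golden mean rotation number
fib = [1, 2]
while fib[-1] < 1000:
    fib.append(fib[-1] + fib[-2])

# ||F_k * alpha|| decreases along the Fibonacci numbers: T^{F_k} -> identity
dists = [circle_dist(n * alpha) for n in fib]
```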
Let
\begin{equation}\label{eq:def:Rk}
{\mathcal R}_k=\bigcup_{i=0}^{n_k-1}T^iA_k,
\end{equation}
\begin{equation}
\label{eq:def:hatRk}
\hat{{\mathcal R}}_k = \bigcup_{i=0}^{n_k-1} T^i( A_k \cap T^{-n_k} A_k \cap
T^{n_k} A_k),
\end{equation}
\begin{displaymath}
\tilde{{\mathcal R}}_k = \bigcup_{i=0}^{n_k-1} T^i( A_k \cap T^{-n_k} A_k \cap
T^{-2n_k} A_k \cap T^{n_k} A_k \cap T^{2 n_k} A_k).
\end{displaymath}
Then,
${\mathcal R}_k$ is the Rokhlin tower over $A_k$, $\hat{{\mathcal R}}_k$ is the Rokhlin
tower over $A_k \cap T^{-n_k}A_k\cap T^{n_k}A_k$, and $\tilde{{\mathcal R}}_k$ is the Rokhlin tower over
$\bigcap_{i=-2}^2T^{in_k}A_k$.
We have
\begin{equation}
\label{eq:property:hatRk}
\hat{{\mathcal R}}_k\supset \{x:T^ix \in {\mathcal R}_k \text{ for all }-n_k< i<n_k\},
\end{equation}
and
\begin{equation}
\label{eq:property:tildeRk}
\tilde{{\mathcal R}}_k\supset \{x:T^ix \in {\mathcal R}_k \text{ for all }-2n_k< i< 2n_k\}.
\end{equation}
Heuristically one can think of ${\mathcal R}_k$ as the set of points we can control. $\hat{{\mathcal R}}_k$ and $\tilde{{\mathcal R}}_k$ let us control the points for long orbit segments, which is necessary for some of our arguments.
\begin{lem}\label{lemma:srank est} $\underset{k \to \infty}{\lim}\lambda(\tilde{{\mathcal R}}_k)=1=\underset{k \to \infty}{\lim}\lambda({\mathcal R}_k)=\underset{k \to \infty}{\lim}\lambda(\hat{{\mathcal R}}_k)$.
\end{lem}
\begin{proof}By the first condition in the definition of rigid rank 1
we have $\underset{k \to \infty}{\lim}
\lambda({\mathcal R}_k)=1$. By (\ref{eq:def:hatRk}),
\begin{displaymath}
\lambda(\hat{{\mathcal R}}_k)\geq\lambda({\mathcal R}_k)-n_k\lambda(A_k\setminus
(T^{n_k}A_k \cap T^{-n_k}A_k))
\geq\lambda({\mathcal R}_k)-2n_k\lambda(A_k\setminus T^{n_k}A_k),
\end{displaymath}
and thus by the third condition of the definition of rigid rank 1,
$\underset{k \to \infty}{\lim}\lambda(\hat{{\mathcal R}}_k) = 1$.
Similarly, $\underset{k \to \infty}{\lim}
\lambda(\tilde{{\mathcal R}}_k)=1$.
\end{proof}
\textbf{The operator }$A_\sigma$\textbf{ and convergence in the strong operator topology.}
Let $\sigma$ be a self-joining of $(T,\lambda)$.
Let $\sigma_x$ be the corresponding measure on $[0,1]$ obtained by disintegrating $\sigma$ along the projection onto the first coordinate. Note this is a slight abuse of notation, as we identify the measures on the fibers $\{x\}\times [0,1]$ with measures on $[0,1]$.
Define $A_\sigma: L^2(\lambda) \to L^2(\lambda)$ by $A_\sigma(f)(x)=\int f
\,d\sigma_x$.
Recall that the \emph{strong operator topology} is
the topology of pointwise convergence on $L^2(\lambda)$: that is,
$A_1,A_2,\ldots$ converges to $A_\infty$ in the strong operator topology if
and only if $\underset{ i \to \infty}{\lim}\|A_if-A_{\infty}f\|_2=0$ for all $f \in L^2(\lambda)$.
\begin{thm}
\label{theorem:SOT close}
Assume $([0,1],T,\lambda)$ is rigid
rank 1 and $\sigma$ is a self-joining of
$([0,1],T,\lambda)$. Then $A_\sigma$ is the strong operator topology
(SOT) limit of linear combinations, with non-negative coefficients, of powers of $U_T$, where $U_T:
L^2([0,1],\lambda) \to L^2([0,1],\lambda)$ denotes the Koopman
operator $U_T(f) = f \circ T$.
\end{thm}
Given $n \in \mathbb{Z}$, we obtain a self-joining $J(n)$ of $([0,1],T,\lambda)$ carried on $\{(x,T^nx)\}$, defined by $\int_{X\times X} f\,dJ(n)=\int_X f(x,T^nx)\,d\lambda$. We call this an \emph{off-diagonal joining}.
\begin{cor}\label{cor:WOT close} (J. King \cite{flat stacks}) Any self-joining of a rigid rank 1 transformation is a weak-* limit of linear combinations, with non-negative coefficients, of off-diagonal joinings.
\end{cor}
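To see how Corollary \ref{cor:WOT close} corresponds to Theorem \ref{theorem:SOT close}, note that the disintegration of $J(n)$ over the first coordinate is $\sigma_x = \delta_{T^nx}$, so that
\begin{equation*}
A_{J(n)}f(x)=\int f \,d\sigma_x = f(T^nx) = (U_T^nf)(x),
\end{equation*}
and hence convex combinations of off-diagonal joinings correspond to convex combinations of powers of $U_T$.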
\subsection{Proof of Theorem \ref{theorem:SOT close}}
\begin{lem}
For each $0\leq j<n_k$ we have
\begin{equation}
\label{eq:lemma1:4}
n_k\int_{T^jA_k}\sigma_x ({\mathcal R}_k^c)d \lambda(x)\leq \lambda (\hat{{\mathcal R}}_k^c).
\end{equation}
\end{lem}
\textbf{Remark.} Note that $n_k$ is roughly $\lambda(T^jA_k)^{-1}$.
\begin{proof}
Suppose $0 \leq j < n_k$, and suppose $x \in T^j A_k$.
From (\ref{eq:property:tildeRk}) we have
$T^{i}{\mathcal R}_k^c \subset \hat{{\mathcal R}}_k^c$ for all $-n_k<i<n_k$.
We claim that
\begin{equation}
\label{eq:sigmax:Rkc}
\sigma_x ({\mathcal R}_k^c)\leq \sigma_{T^{\ell} x}(\hat{{\mathcal R}}_k^c) \qquad \text{for all $-n_k < \ell<n_k$.}
\end{equation}
Indeed,
$\sigma_{x}({\mathcal R}^c_k)=\sigma_{T^\ell x}(T^{\ell}{\mathcal R}^c_k)\leq
\sigma_{T^\ell x}(\hat{{\mathcal R}}_k^c)$, proving (\ref{eq:sigmax:Rkc}).
Integrating (\ref{eq:sigmax:Rkc}) we get
\begin{equation}
\label{eq:int:TjIk:sigma:y}
\int_{T^jA_k}\sigma_y({\mathcal R}_k^c)d\lambda(y)\leq
\int_{T^{j+\ell}A_k}\sigma_z(\hat{{\mathcal R}}_k^c) d\lambda(z) \quad\text{for all
$-n_k <\ell<n_k$.}
\end{equation}
Since we can choose $\ell$ in (\ref{eq:int:TjIk:sigma:y}) so that
$j+\ell$ takes any value in $[0,n_k-1]\cap \mathbb{Z}$, we get
\begin{equation}
\label{eq:int:Tj:Ik:min}
\int_{T^jA_k}\sigma_y({\mathcal R}_k^c)d\lambda(y)\leq \min_{0 \le i < n_k}
\int_{T^i A_k}\sigma_z(\hat{{\mathcal R}}_k^c) d\lambda(z).
\end{equation}
Now
$$\sum_{i=0}^{n_k-1}\int_{T^iA_k}\sigma_y(\hat{{\mathcal R}}_k^c)d\lambda(y)\leq
\int_{[0,1]}\sigma_y(\hat{{\mathcal R}}_k^c)d\lambda(y)=
\lambda(\hat{{\mathcal R}}_k^c),$$
where the last estimate
uses that $\sigma$ has projections $\lambda$. So we obtain
\begin{equation}
\label{eq:min:small}
\min\limits_{0\leq i<n_k} \int_{T^iA_k}\sigma_x
(\hat{{\mathcal R}}_k^c)d\lambda(x)\leq \frac 1 {n_k} \lambda(\hat{{\mathcal R}}_k^c).
\end{equation}
Now the estimate (\ref{eq:lemma1:4}) follows from
(\ref{eq:int:Tj:Ik:min}) and (\ref{eq:min:small}).
\end{proof}
We want to guess coefficients $c_j$ so that $A_\sigma$ is close to $\sum_{j=0}^{n_k-1}c_jU_T^j$. The next lemma produces a candidate pointwise version. Theorem \ref{theorem:SOT close} and Corollary \ref{cor:WOT close} will then follow because, by Egoroff's theorem, this choice is almost constant on most of the sets $T^\ell A_k$,
and by the lemma after that (Lemma~\ref{lemma:other indices}), which shows that the coefficients are almost $T$-invariant.
\begin{lem}\label{lemma:A close} Let $x \in \hat{{\mathcal R}}_k \cap T^jA_k $ where $0\leq j<n_k$. Define $c_i(x)=\sigma_x(T^{a_i}A_k\cap {{\mathcal R}}_k)$ where $0\leq a_i<n_k$ and $i+j \equiv a_i \,( \text{mod }n_k)$.
For all 1-Lipschitz $f$ we have
\begin{multline*}\left|A_\sigma f(x)-\sum_{i=0}^{n_k-1}c_i(x)f(T^ix)\right|\leq \epsilon+\|f\|_{\sup}\sigma_x({{\mathcal R}}_k^c) +\\
\|f\|_{\sup}\sigma_x\big(\cup_{i=0}^{n_k-1}(T^iA_k \setminus B_i^{(k)})\big)+\|f\|_{\sup}\sum_{i:T^ix\notin B_{a_i}^{(k)}}\sigma_x(T^iA_k).
\end{multline*}
\end{lem}
\begin{proof}First observe that
\begin{equation}\label{eq:tower approx}|A_\sigma f(x)-\sum_{i=0}^{n_k-1}\int _{T^iA_k}f d\sigma_x|\leq \|f\|_{\sup}\sigma_x({\mathcal R}_k^c).
\end{equation}
Now if $T^\ell x \in B_i^{(k)}$ we have
\begin{equation*}|\int_{T^iA_k}fd\sigma_x-f(T^\ell x)\sigma_x(T^iA_k)|\leq \epsilon\|f\|_{Lip}\sigma_x(B_i^{(k)})+\|f\|_{\sup}\sigma_x(T^iA_k\setminus B_i^{(k)}).
\end{equation*}
By applying the above estimate if $T^ix\in B_{a_i}^{(k)}$ and estimating trivially if it isn't we obtain
\begin{multline*}
|\sum_{i=0}^{n_k-1}\int_{T^iA_k}f d\sigma_x-\sum_{i=0}^{n_k-1}f(T^ix)\sigma_x(T^{a_i}A_k)|= |\sum_{i=0}^{n_k-1}\int_{T^iA_k}f d\sigma_x-\sum_{i=0}^{n_k-1}c_i(x)f(T^ix)|\\
\leq \sum_{i=0}^{n_k-1}\|f\|_{\sup}\sigma_x(T^iA_k\setminus B^{(k)}_i)+\|f\|_{\sup}\sum_{i:T^ix\notin B_{a_i}^{(k)}}\sigma_x(T^iA_k)+\epsilon\|f\|_{Lip}.
\end{multline*}
Combining this with \eqref{eq:tower approx} gives the lemma.
\end{proof}
\begin{lem}
\label{lemma:other indices}
Suppose $0 \le \ell < n_k$.
If $x\in T^\ell A_k$ and $-\ell\leq i<n_k-\ell$ then
$$\sum_{j=0}^{n_k-1}|c_j(x)-c_j(T^ix)|\leq 2\sigma_x(\tilde{R}_k^c).$$
\end{lem}
\begin{proof}
Suppose $0 \le \ell < n_k$, $0 \leq j < n_k$, and $-\ell \le i <
n_k-\ell$.
First note that if $0 \le m < n_k$ and $z \in T^m A_k =T^mA_k\cap {R}_k$ then by
(\ref{eq:def:Rk}), we have $T^s z \in T^{m+s}A_k \cap
{R}_k$ for all $-m\leq s<n_k-m$. Thus, if $j + \ell <
n_k$ and $i+j+\ell < n_k$, we have
$$\sigma_{T^ix}(T^{i+j+\ell}A_k \cap {R}_k)=\sigma_{x}(T^{j+\ell}A_k \cap T^{-i}{R}_k)=\sigma_x(T^{j+\ell}A_k \cap {R}_k).$$
This gives $c_j(x)=c_j(T^ix)$ if $j+\ell <n_k$ and $i+j+\ell<n_k$.
By similar reasoning we have that $c_j(x)=c_j(T^ix)$ if $j+\ell\ge n_k$ and $i+j+\ell\geq n_k$.
Now let us assume that $j+\ell<n_k$ and $i+j+\ell\geq n_k$. Then,
\begin{equation}
\label{eq:cjTix}
c_{j}(T^ix)=\sigma_{T^ix}(T^{i+j+\ell-n_k}A_k \cap
{R}_k)=\sigma_x(T^{j+\ell-n_k}A_k\cap T^{-i}{R}_k).
\end{equation}
Also,
\begin{equation}
\label{eq:cjx}
c_j(x)=\sigma_x(T^{j+\ell}A_k \cap {R}_k).
\end{equation}
Now because $\tilde{R}_k\subset \bigcap_{i=-{n_k}}^{n_k}T^i{R}_k $, if $z \in T^{i+j+\ell-n_k} A_k \cap \tilde{R}_k$, then,
$z \in T^{j+\ell-n_k}A_k\cap T^{-i}{R}_k$, and $z \in
T^{j+\ell}A_k \cap {R}_k$. Therefore, the symmetric difference
between $T^{j+\ell-n_k}A_k\cap T^{-i}{R}_k$ and $T^{j+\ell}A_k
\cap {R}_k$ is contained in the union of $T^{i+j+\ell-n_k} A_k \cap
\tilde{R}_k^c$ and $T^{j+\ell} A_k \cap \tilde{R}_k^c$.
Thus, in view of (\ref{eq:cjTix}), and (\ref{eq:cjx}),
\begin{displaymath}
|c_j(x)-c_j(T^ix)|\leq
\sigma_x(T^{j+\ell+i-n_k}A_k \cap \tilde{R}_k^c)+
\sigma_x(T^{j+\ell}A_k \cap \tilde{R}_k^c).
\end{displaymath}
The last case, where $j+\ell \geq n_k$ and $0\leq i+j+\ell<n_k$, gives analogous bounds. So we bound $\sum_{j=0}^{n_k-1} |c_j(x)-c_j(T^ix)|$ by $2\sum_{i=0}^{n_k-1}\sigma_x(T^iA_k\cap \tilde{R}_k^c)\leq 2\sigma_x( \tilde{R}_k^c)$ and obtain the lemma.
\end{proof}
Let $d_{\mathcal{M}([0,1])}$ denote the Kantorovich-Rubinstein metric on
measures. That is
\begin{displaymath}
d_{\mathcal{M}([0,1])}(\mu,\nu)=\sup \left\{\left|\int fd\mu-\int f d\nu\right|:f \text{ is 1-Lipschitz}\right\}.
\end{displaymath}
Note that, restricted to measures of total variation at most 1, it defines the same topology as the weak-* topology (on this set).
The next lemma is an immediate consequence of the definition of $d_{\mathcal{M}([0,1])}$.
\begin{lem}\label{lemma:kr est}If $f$ is 1-Lipschitz and
$d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_y)<\epsilon$ then $|A_\sigma f(x)-A_\sigma
f(y)|<\epsilon$.
\end{lem}
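For concreteness, the verification is one line: under the identification $A_\sigma f(x)=\int f\,d\sigma_x$ implicit in the estimates above, and since $f$ is 1-Lipschitz,
\begin{displaymath}
|A_\sigma f(x)-A_\sigma f(y)|=\left|\int f\,d\sigma_x-\int f\,d\sigma_y\right|\leq d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_y)<\epsilon.
\end{displaymath}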
We say $0 \le j<n_k$ is \emph{$k$-good} if there exists $y_j$ in $T^jA_k$ so that
at least a $1-\epsilon$ proportion of the points in $T^jA_k$ have
disintegration $\epsilon$-close to that of $y_j$.
That is
\begin{displaymath}
\lambda(\{x \in T^jA_k:
d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_{y_j})<\epsilon\}) \ge (1-\epsilon) \lambda(A_k).
\end{displaymath}
\begin{lem}
\label{lemma:most:good}
For all $\epsilon>0$ there exists
$k_0$ so that for all $k>k_0$ we have
$$|\{0 \le j < n_k : j \text{ is $k$-good}\}|>(1-\epsilon)n_k.$$
\end{lem}
\begin{proof}
By Lusin's Theorem there exists a compact set $K$ of measure at least
$1-\frac {\epsilon^2} 8$ so that the map $y \to \sigma_y$ is
continuous with respect to the usual metric on $[0,1]$ and the metric
$d_{\mathcal{M}([0,1])}$ on measures. Because $K$ is compact this map is uniformly
continuous and so there exists $\delta>0$ so that $x,y \in K$ and
$|x-y|<\delta$ then $d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_y)<\epsilon$. We choose
$k_0$ so that for $k>k_0$ there are $\hat{B}_i^{(k)}$ with $\operatorname{diam}(\hat{B}_i^{(k)})<\delta$ and $\lambda\big([0,1] \setminus
\cup_{i=0}^{n_k-1}(T^iA_k \cap \hat{B}_i^{(k)})\big)<\frac {\epsilon^2}8$ and $\lambda([0,1]\setminus \mathcal{R}_k)<\frac {\epsilon^2} 4$. (We can do this by Condition \eqref{cond:big tower} and \eqref{cond:bunched} of rigid rank 1.)
Let
\begin{displaymath}
\eta = \frac{1}{n_k}|\{0\leq j<n_k: \lambda\Big(T^j A_k\cap \big(K^c \cup (\hat{B}_j^{(k)})^c\big)\Big)> \epsilon
\lambda(A_k)\}|.
\end{displaymath}
Then, because the $T^j A_k$ are disjoint and of equal size and
$\bigcup_{j=0}^{n_k-1} T^j A_k = {\mathcal R}_k$, it is clear that
\begin{displaymath}
\eta \epsilon \le \frac{\lambda\big(\big(K^c \cup \bigcup_{i=0}^{n_k-1}(T^iA_k\setminus \hat{B}_i^{(k)})\big)\cap {\mathcal R}_k\big)}{\lambda({\mathcal R}_k)} \le \frac{\frac{\epsilon^2}4}{1-\frac {\epsilon^2}4}<\frac{\epsilon^2}2,
\end{displaymath}
and thus $\eta < \epsilon/2$. This completes the proof of the lemma.
\end{proof}
\noindent
\textbf{Notation.}
Let
\begin{multline*}V_j=\{x\in T^jA_k: \sigma_x\big(\cup_{i=0}^{n_k-1}(T^iA_k\setminus B_i^{(k)})\big)<\epsilon, \, \sigma_x(\tilde{R}_k^c)<\epsilon\\
\text{ and }
\sum_{i:T^ix\notin B_{a_i}^{(k)}} \sigma_x(T^iA_k)<\epsilon
\},
\end{multline*}
where $0\leq a_i<n_k$ is as in Lemma \ref{lemma:A close}.
If $j$ is $k$-good let
\begin{multline*}
G_j=\{x \in T^jA_k: \lambda(\{y \in
T^jA_k:d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_y)<2\epsilon\})>(1-2\epsilon)\lambda(A_k)\}.
\end{multline*}
That is, $V_j$ is the subset of $T^jA_k$ on which Lemma \ref{lemma:A close} gives a strong estimate, and $G_j$ is the subset of $T^jA_k$
consisting of points that are almost continuity points of the map $x \to \sigma_x$
(restricted to $T^jA_k$). We set $G_j=\emptyset$ if $j$ is not $k$-good.
\begin{lem}
\label{lemma:our point}
For all $\epsilon>0$ there exists $k_1$ so that for all $k>k_1$ there
exists $0\leq \ell<n_k$ and $y_k\in V_\ell$ so
that
\begin{equation}
\label{eq:good:yk}
|\{-\ell\leq j<n_k-\ell:T^j y_k \in G_{\ell+j} \cap V_{\ell+j} \text{ and }\ell+j \text{ is $k$-good}\}|>(1-13\sqrt{\epsilon})n_k.
\end{equation}
\end{lem}
\begin{proof} If $j$ is $k$-good then
$$\lambda(G_j)>(1-\epsilon)\lambda(A_k).$$
Let ${\mathcal R}_k^* = \bigcup_{j=0}^{n_k-1} G_j$.
Notice that $\underset{k\to \infty}{\lim} \, \lambda(\cup_{i=0}^{n_k-1}T^iA_k)=\underset{k\to\infty}{\lim}\, \lambda({\mathcal R}_k)=1$ and so for all large enough $k$ (so that $\lambda({\mathcal R}_k)$ is close to 1 and Lemma~\ref{lemma:most:good} holds) we have
$$\lambda({\mathcal R}_k^*)\geq (1-\epsilon)^2\lambda({\mathcal R}_k)>1-3\epsilon.$$
By a straightforward $L^1$ estimate, we have
\begin{multline*}
\sum_{\ell=0}^{n_k-1}\lambda(\{y\in T^\ell A_k:|\{-\ell \leq
j<n_k-\ell:G_{j+\ell} = \emptyset \text{ or } T^jy\not\in G_{j+\ell}\}|\geq 12\sqrt{\epsilon} n_k\} )<
\frac{ 3 \sqrt{\epsilon}}{12}=\frac {\sqrt{\epsilon}} 4
\end{multline*}
for all large enough $k$.
Now for the bound on $V_j$. Let $f_k(x)=\sigma_x(\tilde{{\mathcal R}}_k^c)$ and let $g_k(x)=\sigma_x\big(\cup_{i=0}^{n_k-1}(T^iA_k\setminus B_i^{(k)})\big)$. Let $a_i(x) \in [0,n_k-1]\cap\mathbb{Z}$ satisfy
$j+i \equiv a_i(x) \, (\text{mod } n_k)$, where $x \in T^jA_k$ for $0\leq j<n_k$. Define
$$h_k(x)=\sum_{i:T^ix\notin B^{(k)}_{a_i(x)}} \sigma_x(T^iA_k).$$
By \eqref{cond:big tower} we have $\int f_k d\lambda \to 0$, and by \eqref{cond:bunched} we have $\int h_kd\lambda, \, \int g_k d\lambda \to 0$. So by a straightforward $L^1$ estimate (and the fact that $f_k,\, g_k,\, h_k$ are non-negative)
\begin{equation}\label{eq:fgh small}\lambda(\{y:f_k(y)<\epsilon, \, g_k(y)<\epsilon \text{ and } h_k(y)<\epsilon\})>1-\frac \epsilon 4
\end{equation} for all large $k$.
Therefore $ \sum_{\ell=0}^{n_k-1}\lambda(\{y\in T^\ell A_k:|\{-\ell \leq
j<n_k-\ell: T^jy\not\in V_{j+\ell}\}|\geq \sqrt{\epsilon} n_k\} )<
\frac{ \sqrt{\epsilon}}{4}.$
Combining this with the previous estimate, for all large enough $k$ the set of $y\in{\mathcal R}_k$ for which \eqref{eq:good:yk} fails has measure less than $\frac{\sqrt{\epsilon}}{2}<\lambda({\mathcal R}_k)$, so the desired $\ell$ and $y_k\in V_\ell$ exist.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:SOT close}]
For each $k$ large enough so that Lemmas \ref{lemma:most:good} and \ref{lemma:our point} hold and $\lambda({\mathcal R}_k^c)<\epsilon$, let $y_{k}$ be as in the statement of Lemma~\ref{lemma:our point} and in particular, it is in $T^\ell A_k$ for some $0\leq \ell<n_k$.
\noindent
\textit{Step 1:} We show that for all 1-Lipschitz functions $f$ with $\|f\|_{\sup}\leq 1$ we have
$$\underset{k \to \infty}{\lim} \, \|A_\sigma f-\sum_{i=0}^{n_k-1}c_i(y_{k})U_T^if\|_2=0.$$
First, observe that whenever $T^jy_k \in V_{j+\ell}$, Lemma~\ref{lemma:A close} and the fact that $\|f\|_{\sup}\leq 1$ imply,
\begin{equation}\label{eq:compare to coeff}
|A_\sigma f(T^jy_k)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^{i+j}y_k)|<
4\epsilon.
\end{equation}
From Lemma~\ref{lemma:kr est} we
have that if $x$ satisfies
\begin{equation}\label{eq:kr close}d_{\mathcal{M}([0,1])}(\sigma_x,\sigma_{T^jy_k})<2\epsilon \end{equation}
then
\begin{multline*}|A_\sigma f(x)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^{i+j}y_k)|\leq |A_{\sigma}f(x)-A_{\sigma}f(T^jy_k)|+\\
|A_\sigma f(T^jy_k)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^{i+j}y_k)|<2\epsilon+4\epsilon= 6\epsilon.
\end{multline*}
For any $x$ satisfying \eqref{eq:kr close},
\begin{multline*}
|A_\sigma f(x)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^ix)|\leq |A_\sigma f(x)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^{i+j}y_k)|+\\
|\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^{i+j}y_k)-\sum_{i=0}^{n_k-1}c_i(T^jy_k)f(T^ix)|<
6\epsilon+\\
\epsilon+ \sum_{i:T^ix \notin B_{a_i(x)}^{(k)}}\sigma_x(T^iA_k)+\sum_{i:T^{i+j}y_k \notin B_{a_i(y_k)}^{(k)}}\sigma_{T^{j}(y_k)}(T^iA_k).
\end{multline*}
Now if $x,T^j y_k\in V_{\ell+j}$ we have that this is at most $9 \epsilon$. Let
$$\hat{V}=\cup_{j\in [-\ell,n_k-\ell):T^j y_k \in V_{\ell+j}}V_{\ell+j}.$$
By Lemma
\ref{lemma:other indices} we have
$$\int_{\hat{V}}|A_\sigma f(x)-\sum_{j=0}^{n_k-1}c_j(y_k)f(T^jx)|d\lambda(x)\leq 9 \epsilon+ 2\sigma_{y_k}(\tilde{R}_k^c)\leq
9 \epsilon+2\epsilon.$$
Since $|A_{\sigma}f(x)-\sum_{j=0}^{n_k-1}c_j(y_k)f(T^jx)|\leq 2$ for all $x$, by H\"older's inequality
$$\int_{\hat{V}}|A_\sigma f(x)-\sum_{j=0}^{n_k-1}c_j(y_k)f(T^jx)|^2d\lambda(x)\leq
2\cdot 11\epsilon.$$
Since $y_k$ satisfies the assumptions of Lemma~\ref{lemma:our point},
we have that
\begin{equation}\label{eq:V big}\lambda(\{z: z\notin \hat{V} \text{ or } z \text{ does not satisfy } \eqref{eq:kr close}\})<13\sqrt{\epsilon}\, n_k \lambda(A_k)+\epsilon+\lambda({\mathcal R}_k^c).
\end{equation}
Estimating trivially on the complement of $\hat{V}$ we have
\begin{multline*}
\|A_\sigma f-\sum_{j=0}^{n_k-1}c_j(y_k)f\circ T^j\|_2^2=\int_0^1
|A_\sigma f(x)-\sum_{j=0}^{n_k-1}c_j(y_k)f(T^jx)|^2 \, d\lambda(x)
\leq \\ \leq
2 \cdot \big(13\epsilon+13\sqrt{\epsilon}\big).
\end{multline*}
Since $\|f\|_{\sup}\leq 1$ and $\epsilon$ is arbitrary this establishes Step 1.
\noindent
\textit{Step 2:} Completing the proof.
Step 1 establishes pointwise convergence for a subset of $L^2$ with dense span. Because the linear operators in our sequence have uniformly bounded $L^2$ operator norm (in fact bounded by 1) this gives pointwise convergence on all of $L^2$; that is, SOT convergence.
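In detail, write $S_k=\sum_{j=0}^{n_k-1}c_j(y_k)U_T^j$ and note that $\|A_\sigma\|_{L^2\to L^2}\leq 1$ and $\|S_k\|_{L^2\to L^2}\leq \sum_{j=0}^{n_k-1}c_j(y_k)\leq 1$. Given $g \in L^2$ and $\epsilon>0$, choose $f$ in the span of the functions treated in Step 1 with $\|f-g\|_2<\epsilon$; then
\begin{displaymath}
\|A_\sigma g-S_kg\|_2\leq \|A_\sigma(g-f)\|_2+\|A_\sigma f-S_kf\|_2+\|S_k(f-g)\|_2\leq 2\epsilon+\|A_\sigma f-S_kf\|_2,
\end{displaymath}
and the middle term tends to $0$ by Step 1 and linearity.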
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:WOT close}]
Let $\hat{\delta}_p$ denote the point mass at $p$.
By the proof of
the theorem, there exists $z$ (it is $T^jy_k$ in the proof) so that
\begin{displaymath}
d_{\mathcal{M}([0,1])}(\sigma_x,\sum_{j=0}^{n_k-1}c_j(z)\hat{\delta}_{T^jx})<8\epsilon
\end{displaymath}
for all $x \in \hat{V}$. By
(\ref{eq:V big}) we may assume $\lambda(\hat{V}^c)$ is as small as we want. The corollary follows.
\end{proof}
\end{document}