How to Use GIFs to Teach Computers About Emotions
Deep in the bowels of the avant-garde, glass and metal MIT Media Lab, graduate student Kevin Hu is making faces into an ornate mirror.
He opens his eyes and mouth as wide as possible in a caricature of shock. A hidden webcam analyzes his facial expression in real time, digs through a vast database for GIFs that convey a similar emotion, and projects them on the surface of the mirror, against Hu’s reflection. In quick succession it spits out a series of disparate images: a surprised anime character, an affronted Walter White, and then a man in a crowd with an astonished, wide-open mouth much like Hu’s own.
Next Hu contorts his face into a rictus-like grin (“I can smile,” he mutters) and an exuberant basketball player appears on the mirror before being replaced by Snow White, who claps her hands in delight. She’s not emulating Hu’s face exactly, but when it comes to finding a GIF for every mood, she’s a fairly decent simulacrum.
Hu and collaborator Travis Rich, a PhD candidate at the Media Lab, built the mirror to demonstrate a remarkable ongoing project meant to find a whole new use for one of the Internet’s favorite toys. Back in March, the two launched a site called GIFGIF, which had a modest premise: Show people a pair of random GIFs, and ask them which better expresses a given emotion. For instance, it might ask you whether Arrested Development’s Lucille Bluth or a gloomy Kurt Cobain seems more surprised. Or it might show you a bowing Robin Hood from Disney’s 1973 animated feature and a shrugging Donald Glover, and ask which better expresses pleasure. Sometimes the answer is clear; if it isn’t, you can click “neither.”
The goal was to harness crowdsourcing to map emotions, a task for which computers are poorly equipped. Eventually, Hu and Rich hope, all that subjective data will make it easier to write programs that deal with emotional content.
“There are all these things that have meaning to us,” says Rich. “But it’s hard to translate those into code.”
The GIFGIF site asks users to determine the emotional content of GIFs. Screengrab: WIRED
Giving Programmers Tools to Help Machines Understand Feelings
After its launch, GIFGIF quickly went viral—helped along by mentions in, among others, USA Today and The Washington Post—and the corresponding explosion in traffic jumpstarted a database that has since grown to include more than 2.7 million votes. That trove of GIFs, each tagged with weighted emotional characteristics, opens up some unprecedented possibilities. For example, you can query it for a GIF that’s 60 percent amused, 30 percent disgusted, and 10 percent relieved, with results that often show startling insight. These capabilities make it a potential goldmine for everyone from researchers who study facial expressions to app developers who want to suggest content based on a user’s emotional needs.
It’s with those sorts of applications in mind that Hu and Rich are now preparing to release two tools that build on GIFGIF. The first, an open API being released this week, will let anyone with an app or website query the dataset to return a GIF with particular emotional content. It’s already opened up new avenues for researchers. “Travis and Kevin are doing some awesome work,” says Brendan Jou, a PhD candidate at Columbia University who recently published a paper on predicting perceived emotions using an alpha version of the GIFGIF API.
But it’s the tool that’s coming after the API, a platform they call Quantify that they’ll be releasing later this month, which opens up even deeper possibilities.
The idea behind Quantify is to let anybody start a project like GIFGIF, including for things other than GIFs. A project about food, for example, could build a dataset of which meals or dishes respondents see as appropriate for specific contexts and slowly build an index of food concepts for various scenarios. For example, you probably wouldn’t eat mashed potatoes and gravy on a warm summer morning, but you likely crave ice cream when you’re sad or want home-cooked dinners when you’re lonely. With enough responses in a campaign about food, a programmer could write an app that recommends grub based on your emotional state. It could even glean respondents’ relative locations using IP addresses—information that can be used to determine if those recommendations should be different based on the user’s region.
Broader Applications
Quantify also presents tantalizing possibilities for marketers. An automobile manufacturer, say, could create a project that showed conceptual dashboards or steering wheels to respondents in order to develop data on what consumers associate with nebulous concepts like safety or luxury. Though they won’t divulge who, Hu and Rich say they’ve already had discussions about Quantify with several high profile corporate sponsors at the Media Lab.
“Now, instead of having a designer that knows all of these things, you can sort of programmatically say, ‘OK, it’s for a Chinese market, and they prefer this mixture of luxury and safety so we’ll design it this way,'” Rich says. “Because we have all this human data that’s being collected and IP located, we know what German preferences are and what Chinese preferences are and what Brazilian preferences are.”
There are also broad applications in the social sciences. To test Quantify, Hu and Rich helped Carnegie Mellon professor William Alba develop a project called Earth Tapestry, which shows pairs of locations (Mount Kilimanjaro, the Large Hadron Collider, Stonehenge) and asks which better expresses various properties (durability, nobility, delightfulness). If all goes according to plan, the dataset collected on Earth Tapestry will be laser-engraved on a sapphire disk and sent to the Moon on the Astrobotic lunar lander by 2016.
“I wrote Travis and Kevin last May because I had been seeking a method that would translate individual pairwise choices into a ranking,” Alba says. “They went light-years further than I had hoped.”
And that’s just a taste of what they’ve tried so far. Rich and Hu say being able to teach computers how to recommend based on feelings and emotions could have applications in fields from psychological and behavioral studies to artificial intelligence. It just depends on how programmers want to use them. One app Rich says he’d love to see is one that analyzes the text of an instant message and suggests a GIF that matches its emotional palette. (No more searching “Beyoncé side-eye” when your friend tells you about a bad date!)
Back in the Media Lab, Hu again steps in front of the mirror and tries an even more exaggerated look of astonishment. The mirror goes blank for a moment, then it loops a GIF of a wild-eyed skydiver waving his arms in free-fall.
“That’s a good surprised one,” Rich says to Hu. “Were you trying to be surprised?”
Here’s The Thing With Ad Blockers
We get it: Ads aren’t what you’re here for. But ads help us keep the lights on. So, add us to your ad blocker’s whitelist or pay $1 per week for an ad-free version of WIRED. Either way, you are supporting our journalism. We’d really appreciate it. |
Q:
Using Cloud Run on a PubSub topic
It was not clear to me how to use Cloud Run on a Pub/Sub topic for medium-length tasks (within the time limit of Cloud Run, of course).
Let's see this example taken from the tutorials[1]:
app.post('/', (req, res) => {
if (!req.body) {
const msg = 'no Pub/Sub message received'
console.error(`error: ${msg}`)
res.status(400).send(`Bad Request: ${msg}`)
return
}
if (!req.body.message) {
const msg = 'invalid Pub/Sub message format'
console.error(`error: ${msg}`)
res.status(400).send(`Bad Request: ${msg}`)
return
}
const pubSubMessage = req.body.message
const name = pubSubMessage.data
? Buffer.from(pubSubMessage.data, 'base64').toString().trim()
: 'World'
console.log(`Hello ${name}!`)
res.status(204).send()
})
My doubt is: should it return HTTP 204 only after the task finishes? Otherwise, will the task be terminated suddenly?
1 - https://cloud.google.com/run/docs/tutorials/pubsub
A:
My doubt is: should it return HTTP 204 only after the task finishes?
Otherwise, will the task be terminated suddenly?
You do not have a choice: you must return only after the task finishes. If you return before your task/objective finishes, the instance's CPU is throttled to idle and nothing more will happen in your Cloud Run instance.
In your example, you are just processing a pub/sub message and extracting the name. If you return before this is finished, no name will be processed.
Cloud Run is designed for an HTTP Request/Response system. This means processing begins when you receive an HTTP Request (GET, POST, PUT, etc.) and ends when your code returns an HTTP Response (or just returns with no response). You might try to create background threads but there is no guarantee that they will execute once your main function returns.
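As a rough sketch (not taken from the tutorial; processTask is a hypothetical helper standing in for your real work), the handler can be arranged so that the 204 is only sent once the work has actually finished:
app.post('/', async (req, res) => {
  if (!req.body || !req.body.message) {
    res.status(400).send('Bad Request: invalid Pub/Sub message')
    return
  }
  try {
    // Do all of the real work *before* acknowledging the message.
    await processTask(req.body.message)  // hypothetical long-running task
    res.status(204).send()  // ack: sent only after the task has finished
  } catch (err) {
    console.error(`task failed: ${err}`)
    res.status(500).send()  // non-2xx: Pub/Sub will retry the delivery
  }
})
Returning a non-2xx status (or timing out) makes Pub/Sub redeliver the message according to the subscription's retry policy, so failed work is not silently lost.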
|
PYREX™ Disposable Round-Bottom Rimless Glass Tubes
Capacity: 4.0mL; O.D. x L: 10 x 75mm
Manufactured from borosilicate glass to reduce pH changes and contaminants. Corning™ PYREX™ Disposable Round-Bottom Rimless Glass Tubes are designed for both tissue culture and general bacteriological work. |
Sustainable seafood cheat sheet
Organic and local are so 2007. Mark our words, sustainably-sourced fish will be the trend to make major waves in the next few years. Our oceans, which once seemed to hold limitless supplies of food and resources, are now suffering under the burden of rising temperatures and the increasing pressure of overfishing. Save the whales. Yes, but save the salmon and the tuna and the sharks, too! So which fish choices are the most sustainable?
To answer this question, the Monterey Bay Aquarium in California offers Seafood Watch -- a one-stop portal of information about issues, such as habitat damage and overfishing, and recommendations for the most responsible seafood choices. Plus, the site has a detailed guide to each seafood type, with info about where that particular species is caught and how, as well as mercury warnings, and scientific facts. Here's the coolest part: You don't have to memorize all that info. Rather, download and print a Seafood Watch Pocket Guide (there's a different sustainable seafood cheat sheet for six U.S. geographic regions). Each cheat sheet lists the best choices, good alternatives, and fish to avoid. Just fold it up. Put it in your wallet. Use it to make responsible, sustainable seafood choices in the grocery store or restaurant. The aquarium is also offering a brand new sustainable sushi cheat sheet. And don't forget to educate your favorite restaurants: Print out an Action Card that informs chefs of the issues and thanks them for offering environmentally responsible seafood. |
Q:
Django middleware 'module' object is not callable
I have a problem with middleware. I found a lot of questions about it, but nothing helped in my case.
I use middleware to get the current user so that my model can record the modifying user in its save method, without having to pass it in from the view.
Here is the original post with this code:
Middleware
from threading import local
_user = local()
class CurrentUserMiddleware(object):
def process_request(self, request):
_user.value = request.user
def get_current_user():
return _user.value
There is something wrong in this code, because I'm getting an error like:
Traceback (most recent call last):
File "C:\\Program Files (x86)\\Python35-32\\Lib\\wsgiref\\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\contrib\\staticfiles\\handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\core\\handlers\\wsgi.py", line 158, in __call__
self.load_middleware()
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\core\\handlers\\base.py", line 53, in load_middleware
mw_instance = mw_class()
TypeError: 'module' object is not callable
[20/Jul/2016 10:51:44] "GET /panel/ HTTP/1.1" 500 59
Traceback (most recent call last):
File "C:\\Program Files (x86)\\Python35-32\\Lib\\wsgiref\\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\contrib\\staticfiles\\handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\core\\handlers\\wsgi.py", line 158, in __call__
self.load_middleware()
File "C:\\Users\\loc\\dJangoEnvironment\\lib\\site-packages\\django\\core\\handlers\\base.py", line 53, in load_middleware
mw_instance = mw_class()
TypeError: 'module' object is not callable
[20/Jul/2016 10:51:44] "GET /favicon.ico HTTP/1.1" 500 59
My middleware settings:
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.gzip.GZipMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'reversion.middleware.RevisionMiddleware',
'task.current_user',
)
Can you give me some advice on where the error is? I don't have any other ideas to try, so I hope that you do.
A:
You must give the full path of your middleware class, not just the module containing it.
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.gzip.GZipMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'reversion.middleware.RevisionMiddleware',
'task.current_user.CurrentUserMiddleware',
)
Note that you had plenty of examples right above.
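For reference, here is a minimal, hypothetical sketch of how the middleware's get_current_user() helper could then be used inside a model's save method (assuming the middleware module lives at task/current_user.py; the model and field names are made up for illustration):
# models.py -- hypothetical usage example
from django.db import models
from task.current_user import get_current_user

class Article(models.Model):
    title = models.CharField(max_length=200)
    modified_by = models.ForeignKey('auth.User', null=True, blank=True,
                                    on_delete=models.SET_NULL)

    def save(self, *args, **kwargs):
        user = get_current_user()
        # AnonymousUser has pk None, so this skips unauthenticated requests
        if user is not None and user.pk:
            self.modified_by = user
        super().save(*args, **kwargs)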
|
Prevalence and management of hypertension among Turkish, Moroccan and native Dutch ethnic groups in Amsterdam, the Netherlands: The Amsterdam Health Monitor Survey.
To assess ethnic differences in the prevalence and management of hypertension among Turkish, Moroccan and native Dutch ethnic groups in Amsterdam, the Netherlands. A cross-sectional survey. A random sample of 1304 adults aged 18 years and over. Of these, 39.2% were Dutch, 33.2% were Turkish and 27.6% were Moroccan. The prevalence of hypertension was lower in Turkish (men 25.8% and women 22.2%) and Moroccan (men 26.1% and women 19.6%) than in Dutch individuals (men 48.8% and women 35.0%). Except for Turkish women, these differences persisted after adjustment for age and body mass index: the odds ratios (95% confidence interval) for being hypertensive were 0.47 (0.30-0.74; P < 0.001) for Turkish men, 0.48 (0.30-0.76; P < 0.001) for Moroccan men and 0.51 (0.28-0.94; P = 0.03) for Moroccan women. Only Moroccan hypertensive women were less likely than Dutch women to be aware of their condition 0.31 (0.11-0.81; P < 0.01) and to be treated 0.32 (0.12-0.88; P < 0.01) for hypertension. There were no differences in hypertension control between the ethnic groups in both men and women. The lower prevalence of hypertension among Moroccan men may contribute to the low cardiovascular disease (CVD) mortality reported among this group in the Netherlands. The differential risks in CVD mortality between Moroccan men and women may partly result from the lower hypertension awareness and treatment rates in Moroccan women. Strategies aimed at improving the detection and treatment of hypertension among Moroccan women may improve the sex disparity in cardiovascular mortality between Moroccan men and women in the Netherlands. |
What is a positive sentinel lymph node in a breast cancer patient? A practical approach.
Sentinel lymph node (SN) biopsy has become increasingly used for the staging of breast carcinoma, resulting in the upstaging of this disease, and this has led to concerns with regard to what should be considered a positive SN. Factors influencing the positive staging of an SN include metastasis size, the method used for metastasis detection, the definition of metastasis and the individual pathologist. Until evidence to the contrary emerges, an SN should be considered positive if metastases (nodal involvement >0.2mm in the largest dimension) are detected in it by histology. A target size should be identified, and SNs, as the most likely sites of nodal metastases, should be searched systematically to find (nearly) all of the targeted metastases. The European guidelines for SN assessment have set two such target sizes: as a minimum, all metastases >2mm should be identified, and optimally all micrometastases should also be sought. |
1.. Introduction {#S0001}
================
Pylephlebitis, or suppurative thrombophlebitis of the portal mesenteric venous system, is an uncommon condition associated with significant morbidity and mortality.\\[[@CIT0001]\\] It most commonly presents with abdominal pain and fever, and may complicate intra-abdominal or pelvic infection occurring in the region drained by the portal venous system. The most common causes include pancreatitis and diverticulitis along with other intra-abdominal inflammatory processes.\\[[@CIT0002]\\] An associated hypercoagulable state is found in approximately 40% of patients.\\[[@CIT0003]\\] It has been reported as a complication of hemorrhoidal banding, intragastric migration of a silicone gastric band, and computerized tomography (CT)-guided liver biopsy.\\[[@CIT0004]--[@CIT0006]\\] Post-colonoscopy pylephlebitis has rarely been reported in the literature,\\[[@CIT0007]\\] and diagnosis in this setting is often challenging owing to non-specific symptoms at presentation. In this report, we will outline the approach to diagnosis and management of post colonoscopy pylephlebitis.
2.. Case presentation {#S0002}
=====================
A 75-year-old female with a history of colon polyps underwent a screening colonoscopy which showed indeterminate colitis. The patient did not have any gastrointestinal symptoms prior to the procedure. A single-piece polypectomy of a sessile 3 mm transverse colon polyp was performed that revealed tubular adenoma. Two weeks later she presented to the emergency department with complaints of fever, malaise, melena, occasional nausea, vomiting and lower quadrant abdominal pain. The patient stated that the melena started after the colonoscopy and had decreased in frequency and severity over time. The review of systems was otherwise unremarkable. On examination, she was febrile (38.7°C) and tachycardic (heart rate 117 bpm), with mild tenderness in the right lower quadrant, while the rest of the examination was unremarkable. Laboratory tests showed white blood cell count of 19,600 cells μl^--1^ (reference range 4800--10,800 cells μl^--1^) and normal hemoglobin, liver enzymes and coagulation profile. Blood cultures were obtained and she was started on intravenous piperacillin and tazobactam. A CT of the abdomen showed gas which tracked along the inferior mesenteric vein to the portal vein, and to a limited degree into the liver. There was no evidence of pneumatosis or pneumoperitoneum. A hyperdensity within the portal vein was also visualized on the CT scan ([Figure 1](#F0001){ref-type="fig"}). A hepatoportal ultrasound showed moderate non-occlusive thrombus within the main portal vein ([Figure 2](#F0002){ref-type="fig"}). Blood cultures turned positive for *Bacteroides fragilis*, while stool studies were negative. These findings were highly suggestive of pylephlebitis. She was cautiously started on unfractionated heparin. Repeat blood cultures after 48 h of IV antibiotics returned negative. The patient was switched to ciprofloxacin and metronidazole orally to complete a total of six weeks of antibiotics as per infectious disease recommendations. The heparin was transitioned to dabigatran for at least three months of anticoagulation. Upon follow-up with gastroenterology as outpatient, the patient was symptom free at three months and hence the anticoagulation was discontinued.
Figure 1. Computerized tomography of the abdomen showing hyperdensity (clot) in the portal vein.
Figure 2. Abdominal ultrasonography revealing moderate non-occlusive thrombus within the main portal vein.
3.. Discussion {#S0003}
==============
Pylephlebitis was first described in 1846 by Waller, who discovered it as the source of a hepatic abscess during autopsy. Although the exact incidence of this uncommon entity is not certain as experience is mainly limited to case reports and series, estimates suggest it to be 2.7 per 100,000 person-years.\\[[@CIT0008],[@CIT0009]\\] It begins with thrombophlebitis of small veins draining an area of infection. Further extension into the larger veins leads to septic thrombophlebitis of the portal vein and eventually of the mesenteric veins.\\[[@CIT0010]\\] Presenting symptoms often include fever and abdominal pain while rigors, nausea and vomiting are less commonly reported.\\[[@CIT0001]\\]
The diagnosis of pylephlebitis is often delayed since it is an uncommon condition that can present with non-specific symptoms, but due to advances in diagnostic imaging such as CT and ultrasonography, there has been an increased recognition of this condition. Diagnostic imaging modalities depend on individual expertise but both ultrasonography and CT scan may reveal the thrombus in the portal vein. CT scan provides the advantage of identifying an underlying focus of infection elsewhere in the abdomen or pelvis.\\[[@CIT0011]\\] In our patient, diagnosis was established with both imaging modalities. Bacteremia can be present from 44 to 88% of cases and hence it is important to draw blood cultures in patients presenting with fever and gastrointestinal symptoms.\\[[@CIT0001],[@CIT0002]\\] The bacteremia is often polymicrobial and *Bacteroides fragilis* is the most common isolate reported, as was seen in our patient.\\[[@CIT0001]\\]
Routine colonoscopy with polypectomy is associated with a low rate of bacteremia, with mean rates of around 4%.\\[[@CIT0012]\\] Pylephlebitis owing to colonic polypectomy has been rarely reported. Gallinger et al. \\[[@CIT0007]\\] reported a 78-year-old female with a history of IgM monoclonal gammopathy of unknown significance (MGUS) who developed pylephlebitis following polypectomy performed six weeks prior to presentation. The index patient was at high risk of pylephlebitis due to the underlying hypercoagulable state. Our patient presented within two weeks of polypectomy and had no evidence of underlying hypercoagulable condition.
Antibiotics constitute the major treatment for pylephlebitis. No randomized trials have evaluated the optimal antibiotic regimen for this disease. The empiric antibiotic regimen should be based on the probable source of infection. As the infection is often polymicrobial, the antibiotic regimen should ideally include coverage for both Gram-negative aerobes and anaerobes, especially *Bacteroides fragilis*. The typical duration of antibiotic therapy is at least four to six weeks.\\[[@CIT0001]\\] There is no general consensus on the use of or duration of anticoagulation therapy. Studies suggest three to six months of anticoagulation treatment if no other underlying thrombotic disease is present.\\[[@CIT0013]\\] The rationale for anticoagulation is to prevent propagation of thrombus and further complications. Kanellopoulou et al. \\[[@CIT0014]\\] reported that the early use of anticoagulation in portal vein thrombosis may minimize serious sequelae and speed up recanalization. Choudhry et al. \\[[@CIT0002]\\] also reported a lower mortality likely attributed to the early use of anticoagulation. We opted for anticoagulation based on the lower reported mortality in the recent literature.
In summary, our patient most likely suffered an uncommon complication of colonoscopy. Although rare, pylephlebitis should be considered in the differential diagnosis of patients presenting with unexplained fever and gastrointestinal symptoms after endoscopic procedures.
Disclosure statement {#S0004}
====================
No potential conflict of interest was reported by the authors.
|
The efficacy of premixed nitrous oxide and oxygen for fiberoptic bronchoscopy in pediatric patients: a randomized, double-blind, controlled study.
The aim of the study was to evaluate the efficacy and safety of premixed 50% nitrous oxide and oxygen on the quality of sedation and pain control during fiberoptic bronchoscopy (FB) in children. A prospective, randomized, double-blind study. Pediatric pulmonary department in a pediatric tertiary university hospital. One hundred five children aged 1 month to 18 years. After sedation and local anesthesia, patients inhaled either premixed 50% nitrous oxide and oxygen (nitrous oxide group) or premixed 50% nitrogen and oxygen (control group) during FB. The rate of failure was significantly greater in the control group (62%) than in the nitrous oxide group (21%, p = 0.00003). The efficacy of premixed 50% nitrous oxide and oxygen was also demonstrated with higher satisfaction scores (p = 0.000001), lower Children's Hospital of Eastern Ontario Pain Scores (p = 0.002), better visual analog scale ratings (p = 0.03), and improved behavior scores. Side effects were minor and similar in both groups. This study demonstrates the improved efficacy of sedation, pain control, and safety of premixed 50% nitrous oxide and oxygen for FB in children. |
Q:
ifelse behaviour with which function in R
I've just been playing with some basic functions, and it seems rather strange how ifelse behaves if I use the which() function as one of the arguments when the ifelse condition is true, e.g.:
#I want to identify the location of all values above 6.5
#only if there are more than 90 values in the vector a:
set.seed(100)
a <- rnorm(100, mean=5, sd=1)
ifelse(length(a)>90, which(a>6.5), NA)
I get this output:
[1] 4
When in fact it should be the following:
[1] 4 15 25 40 44 47 65
How then can I make ifelse return the correct values using which() function?
It seems it only outputs the first value that matches the condition. Why does it do that?
I would appreciate your answers. Thanks.
A:
You actually don't want to use ifelse in this case. As BondedDust pointed out, you should think of ifelse as a function that takes three vectors and picks values out of the second two based on the TRUE/FALSE values in the first. Or, as the documentation puts it:
ifelse returns a value with the same shape as test which is filled
with elements selected from either yes or no depending on whether the
element of test is TRUE or FALSE.
You probably simply wanted to use a regular if statement instead.
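For example, a small sketch using the vector from the question:
set.seed(100)
a <- rnorm(100, mean = 5, sd = 1)
# if/else evaluates the single condition once and returns the chosen branch
# unchanged, so the full vector of positions comes back
if (length(a) > 90) which(a > 6.5) else NA
# [1]  4 15 25 40 44 47 65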
One potential confusion with ifelse is that it does recycle arguments. Specifically, if we do
ifelse(rnorm(10) < 0,-1,1)
you'll note that the first argument is a logical vector of length 10, but our second two "vectors" are both of length one. R will simply extend them as needed to match the length of the first argument. This happens even if the shorter lengths do not divide evenly into the length of the first argument.
|
September 14, 2009
Workshops
An Intro to APA workshop offered by Student Writing Support
Student Writing Support in the Center for Writing will be conducting two APA workshops for students. Here are the details:
An introduction to APA documentation & editorial style
This workshop will be offered
Tuesday, October 6, 2009
10:30 a.m. to 12:00 p.m.
Wilson Library, room S30A (West Bank) |
Micropapillary components in a lung adenocarcinoma predict stump recurrence 8 years after resection: a case report.
We report a rare case of lung adenocarcinoma in which micropapillary components were considered to cause stump recurrence. A woman in her fifties was diagnosed with lung cancer in the right middle lobe with invasion to the upper lobe, which was treated by a right middle lobectomy together with upper lobe partial resection. The cancer was pathologically diagnosed as adenocarcinoma and had a free surgical margin. There was no recurrence during the following 5 years and 8 months, and thus periodical surveillance, including computed tomography, was stopped. However, 2 years and 7 months after this, she was discovered to have an abnormal shadow on chest radiography, and a thorough examination revealed a 3-cm-sized tumor involving the previous surgical margin. Therefore, she underwent right upper lobectomy. We pathologically re-evaluated the first tumor and found that it was an adenocarcinoma with a micropapillary component in the periphery, 6mm away from the surgical margin. In addition, a few tiny clusters of tumor cells were found to be floating within the alveolar spaces near the margin. The first and second tumors showed almost the same histological mixture of components of adenocarcinoma and the same EGFR mutation. From these results, we concluded the second tumor was a stump recurrence originating from the first tumor resection. This case illustrates the importance of careful pathological investigation when an autosuture instrument is used for a partial resection in a case of lung adenocarcinoma with micropapillary components. In such cases, it is particularly important to clarify if micropapillary components are floating near a stump. |
/*Copyright (c) 2017 The Paradox Game Converters Project
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*/
#ifndef PROVINCE_MAPPER_H
#define PROVINCE_MAPPER_H
#include <map>
#include <memory>
#include <unordered_set>
#include <vector>
using namespace std;
class EU4Version;
class Object;
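// provinceMapper (summary inferred from the declarations below): a lazily
// constructed singleton that holds the EU4<->Vic2 province mappings -- presumably
// parsed from a configuration Object in initProvinceMap() -- and exposes static
// lookups in both directions plus the set of resettable Vic2 provinces.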
class provinceMapper
{
public:
static const vector<int> getVic2ProvinceNumbers(int EU4ProvinceNumber)
{
return getInstance()->GetVic2ProvinceNumbers(EU4ProvinceNumber);
}
static const vector<int> getEU4ProvinceNumbers(int Vic2ProvinceNumber)
{
return getInstance()->GetEU4ProvinceNumbers(Vic2ProvinceNumber);
}
static bool isProvinceResettable(int Vic2ProvinceNumber)
{
return getInstance()->IsProvinceResettable(Vic2ProvinceNumber);
}
private:
static provinceMapper* instance;
static provinceMapper* getInstance()
{
if (instance == NULL)
{
instance = new provinceMapper;
}
return instance;
}
provinceMapper();
void initProvinceMap(shared_ptr<Object> obj);
int getMappingsIndex(vector<shared_ptr<Object>> versions);
void createMappings(shared_ptr<Object> mapping);
const vector<int> GetVic2ProvinceNumbers(int EU4ProvinceNumber);
const vector<int> GetEU4ProvinceNumbers(int Vic2ProvinceNumber);
bool IsProvinceResettable(int Vic2ProvinceNumber);
map<int, vector<int>> Vic2ToEU4ProvinceMap;
map<int, vector<int>> EU4ToVic2ProvinceMap;
unordered_set<int> resettableProvinces;
};
#endif // PROVINCE_MAPPER_H |
U.S. Secretary of State Colin Powell says he thinks U.N. Security Council members are moving closer to an agreement on a new resolution returning U.N. arms inspectors to Iraq. He is also signaling U.S. compromise on the key issue of what might trigger military action in the event of Iraqi non-compliance.
For weeks now, the Bush administration has insisted on a single resolution providing tough terms for new inspections and for military action if Iraq failed to comply.
Now however, Mr. Powell says the United States, while still favoring a one-resolution approach, will at least accept a reconvening of the Security Council to debate what to do in the face of Iraqi non-compliance.
Meeting reporters here after talks with German Foreign Minister Joschka Fischer, Mr. Powell said if the council failed at that time to authorize a military response, the United States would retain the right to organize its own coalition and move against Iraq.
"There is nothing that we propose in this resolution or that we would find acceptable in a resolution that would handcuff the president of the United States from doing what he feels he must do to defend the United States, to defend our people and defend our interests in the world," said Colin Powell. "But he is also anxious to pursue this matter through the United Nations. He's demonstrated that clearly with his speech on the 12 of September. I think we've demonstrated by the patience we have shown and by our willingness to listen to the view of others over the last six plus weeks. But at no time will the United States foreclose its ability to act in its interests.
The secretary has acknowledged it might take months for U.N. inspectors, once they return to Iraq, to do the work necessary to determine whether or not Saddam Hussein was complying with U.N. demands that it give up weapons of mass destruction.
However, he said this does not necessarily mean that the prospect of military action against Iraq has been pushed off into the indefinite future, because early defiance would bring a quick response.
"Once the inspectors go in, it will take some time for them to do their job," he said. "They won't be able to do their job unless Iraq cooperates. And if there is immediate non-cooperation on the part of Iraq, that I think is an absolute red-line, and it has to come back to the council immediately. But it will take some time for the inspectors to do their job. But we have to see whether or not Iraq will cooperate and permit the inspectors to do their job. And during that period, obviously, in execution of such a resolution, the United States and all member nations of the Security Council of the United Nations will watch and see how the inspections are going."
Mr. Powell spent much of the day on the telephone with his counterparts from other permanent Security Council members, including French Foreign Minister Dominique de Villepin, whose government has been adamantly supporting a two-resolution process in the council.
The secretary predicted a conclusion of the U.N. debate by the end of next week, either with a single compromise resolution or with competing versions the Security Council would have to choose from.
Mr. Powell's late-afternoon meeting with Germany's Mr. Fischer was something of a reconciliation session, following German Chancellor Gerhard Schroeder's strong criticism of U.S. Iraq policy during his re-election campaign last month.
Mr. Fischer said the hard feelings from the election were nothing that the two longtime allies could not overcome. "We have a lot of common issues to discuss," said Joschka Fischer. "We are close allies and I think that if there are differences and turbulences, we will discuss these problems inside the family. Let me stress and let me underline how important the relations between the United States, the relations to the United States, are for the Federal Republic of Germany. And we'll never forget what the United States has done to liberate us from Nazism, to help to build up the German democracy, to defend us during the Cold War, especially Berlin."
President Bush pointedly did not make a congratulatory phone call to Chancellor Schroeder after his center-left coalition's September 22 election win, but the two leaders are expected to meet during the NATO summit next month in Prague. |
---
abstract: 'Recent observations show that the Higgs mass is at around 125 GeV, while the prediction of the minimal supersymmetric standard model is below 120 GeV for stop mass lighter than 2 TeV unless the top squark has maximal mixing. We consider the right-handed neutrino supermultiplets as messengers in addition to the usual gauge mediation to obtain the sizeable tri-linear soft parameter $A_t$ needed for maximal stop mixing. Neutrino messengers can explain the observed Higgs mass for stop mass around 1 TeV. Neutrino assistance can also generate charged lepton flavor violation, including $\\mu \\to e \\gamma$, as a possible signature of the neutrino messengers. We consider an $S_4$ discrete flavor model and show the relation among charged lepton flavor violation, the neutrino oscillation angle $\\theta_{13}$ and the muon $g-2$.'
author:
- 'Hyung Do Kim, Doh Young Mo, Min-Seok Seo'
title: '**Neutrino Assisted Gauge Mediation**'
---
Introduction {#sec:Introduction}
============
The observation of the Standard Model Higgs-like new boson with mass at around 125 GeV [@:2012gk; @:2012gu] changes the current understanding of new physics at the weak scale. The minimal supersymmetric standard model (MSSM) can explain 125 GeV with a relatively light stop of 1 to 2 TeV in the context of maximal stop mixing. From the model building point of view, it is quite difficult to realise the maximal stop mixing scenario starting from ultraviolet (UV) theory. In minimal gauge mediation (MGM) [@Dine:1993yw; @Dine:1994vc; @Dine:1995ag; @Giudice:1998bp], the soft tri-linear A term is not generated at the messenger scale and the radiatively generated A term at the weak scale is not large enough to realise maximal stop mixing. As a result, colored superpartners should be as heavy as 5 to 10 TeV to explain the 125 GeV mass of the Higgs boson [@Ajaib:2012vc; @Feng:2012rn]. Therefore, the explanation of the 125 GeV Higgs boson mass needs extra help in minimal gauge mediation. The next-to-minimal supersymmetric standard model (NMSSM) can use the extra contribution from the Yukawa F-term of the singlet; see, for instance, [@Bae:2012am]. Extra vector-like fermions are added in minimal gauge mediation [@Martin:2009bg; @Martin:2012dg; @Bae:2012ir]. Direct coupling of visible sector fields with messengers can help. Higgs-messenger mixing [@Kang:2012ra; @Craig:2012xp] and matter-messenger mixing [@Shadmi:2011hs; @Albaid:2012qk; @Abdullah:2012tq; @Evans:2012hg; @Evans:2011bea] can generate a Yukawa-mediated contribution, including an A term, at the messenger scale. However, at the same time the virtue of gauge mediation is gone: they would spoil the nice flavor-preserving spectrum and can possibly cause a flavor problem at the weak scale. For the Higgs-messenger mixing, an $A/m^2$ problem [@Kang:2012ra; @Craig:2012xp], which is analogous to the $\\mu/B\\mu$ problem, can arise, and electroweak symmetry breaking is difficult to achieve if the mixing coupling is large. General gauge mediation [@Buican:2008ws] can avoid this problem by using the mechanism of radiatively generated maximal stop mixing [@Dermisek:2006ey; @Dermisek:2006qj].
In this paper we consider the right-handed neutrino supermultiplets as the messengers of supersymmetry breaking in addition to the messengers charged under the Standard Model (SM) gauge group, e.g. ${\\bf 5}$ and ${\\bf \\bar{5}}$ of $SU(5)$. The setup is motivated from [@Choi:2011rs] which provides a solution to $\\mu$ problem in gauge mediation (more precisely $\\mu/B\\mu$ problem) [@Dvali:1996cu; @Giudice:2007ca]. For the solution in [@Choi:2011rs] to work, the messenger scale should be higher than the Peccei-Quinn breaking scale, $10^9 \\sim 10^{11}$ GeV. For the gauge mediation to be the dominant contribution compared to the Planck suppressed higher dimensional contribution, the messenger scale should be lower than $10^{15}$ GeV. Therefore, the See-Saw scale with order one neutrino Dirac Yukawa couplings, $10^{13} \\sim 10^{14}$ GeV, is well motivated as the messenger scale if we accept [@Choi:2011rs] as a solution of the $\\mu$ problem in gauge mediation. If the messenger scale is at around the See-Saw scale, the natural question is why the right-handed neutrino supermultiplets do not serve as messengers of supersymmetry breaking. Apparently there is no harm to couple the right-handed neutrino superfields directly to the messengers. Majorana mass of the right-handed neutrino and the messenger mass of ordinary ${\\bf 5}$ and ${\\bf \\bar{5}}$ might have the same origin in this case. In summary, the minimal set of messengers are ${\\bf 5}$, ${\\bf \\bar{5}}$ and ${\\bf 1}$. This is different from previous studies relating gauge mediation and See-Saw mechanism [@Joaquim:2006uz; @Mohapatra:2008wx; @FileviezPerez:2009im]. They employ particles relevant to See-Saw mechanism as messengers, and these particles are also charged under the SM gauge group, which can be seen in the Type-II or Type-III See-Saw. Therefore, gauge mediation and neutrino Dirac Yukawa mediation have a common messenger. In our case, in contrast, neutrino Dirac Yukawa messenger is the right-handed neutrino, the SM singlet.
If the right-handed neutrinos couple to the supersymmetry breaking field, neutrino Dirac Yukawa coupling generates the A term and soft scalar mass of lepton doublet and up-type Higgs at the See-Saw scale after integrating out the neutrino messengers. 125 GeV Higgs mass can be explained with stop lighter than 2 TeV in this setup. At the same time the stop mass gets an extra Yukawa mediation and maximal stop mixing can be easily realised.
As there is a neutrino Dirac Yukawa contribution to the soft parameters in addition to the ordinary gauge mediation, interesting new physics signature is expected. The mechanism of the charged lepton flavor violation is different from that in mSUGRA [@Borzumati:1986qx] or SUSY GUT [@Ciuchini:2007ha] in which the origin is the running of soft parameters above the See-Saw scale. Though the origin is different, the spectrum looks similar. The crucial difference is that here the flavor violation appearing in lepton doublet soft scalar mass is $16\\pi^2$ bigger than the one in mSUGRA. Therefore, the naive expectation is that order one neutrino Dirac Yukawa coupling would be incompatible with the current bounds of various charged lepton flavor violation constraints including $\\mu \\to e \\gamma$.
The computation of the charged lepton flavor violation needs a complete flavor model. Current observation of the charged lepton mass and lepton mixing matrix (PMNS) can be explained in a consistent way with the neutrino Dirac Yukawa matrix which is proportional to the identity matrix. This is not an ad hoc assumption but can be explained in the context of non-Abelian discrete flavor symmetry, e.g., tribimaximal PMNS[@Harrison:2002er] from $S_4$. Therefore, order one neutrino Dirac Yukawa coupling can generate order one A term at the messenger scale and at the same time can be consistent with the charged lepton flavor violation constraints as long as it is proportional to the identity matrix.
$S_4$ flavor symmetry is the most natural and/or simple if $\\theta_{13} =0$ as the tribimaximal mixing can be nicely realised. However, small but sizeable $\\theta_{13}$ ($\\sin \\theta_{13} \\sim 0.15$) can be accommodated with the extra complication[@Lin:2009bw; @Ishimori:2012fg; @Altarelli:2012bn; @King:2012vj]. If the origin of $\\theta_{13}$ is the modification of Majorana mass of the right-handed neutrino, there would be no off-diagonal element in the lepton doublet soft scalar masses as the neutrino Dirac Yukawa would be still proportional to the identity. In this case the model is free from the cLFV constraints. Nevertheless, the sparticle spectrum needed to explain the observed Higgs mass is heavy enough such that it is hard to explain the muon anomalous magnetic moment at the same time. If $\\theta_{13}$ is due to the deviation of the neutrino Dirac Yukawa matrix from the identity, sizeable charged lepton flavor violation is expected. We compute the charged lepton flavor violating processes in both cases and show that interesting parameter space exists if $\\theta_{13}$ is a combination of two contributions from neutrino Dirac Yukawa and Majorana mass matrix.
The contents of the paper are as follows. In section 2, we explain the setup for neutrino assisted gauge mediation, in which the right-handed neutrinos are added as messengers in addition to the ordinary SM charged messengers. We also discuss the implication for the Higgs mass. In section 3, we explain our $S_4$ flavor model as a representative example to discuss possible phenomenological implications. In section 4, we discuss charged lepton flavor violation in connection with the muon anomalous magnetic moment, the neutrino mixing angle $\\theta_{13}$ and the Higgs mass. Then we conclude.
Neutrino Assisted Gauge Mediation and The Higgs Mass
====================================================
Soft terms generated from right-handed neutrino messengers {#sec:softterms}
----------------------------------------------------------
The extremely small masses of neutrinos can be explained through the See-Saw mechanism[@Minkowski:1977sc; @Yanagida:1979as; @Yanagida:1980xy; @GellMann:1980vs; @Mohapatra:1979ia], in which lepton number is violated at around the Grand Unified Theory (GUT) scale. In this paper, we consider the simplest model, type-I See-Saw. For this, we extend the MSSM superpotential by including right-handed Majorana neutrinos, [$$\\begin{split}W&=\\epsilon_{ab}\\Big[(Y_U)_{ij}\\bar{U}_iQ^a_jH_u^b-(Y_D)_{ij}\\bar{D}_iQ^a_jH_d^b
-(Y_E)_{ij}\\bar{E}_iL^a_jH_d^b+(Y_\\nu)_{ij} {N}_iL^a_jH_u^b
\\\\
& + \\mu H_u^a H_d^b\\Big] + \\frac{1}{2} M_N^{ij} {N}_i {N}_j ,\\end{split}$$]{} where $\\epsilon_{ab}$ is a totally antisymmetric tensor with $\\epsilon_{12}=1$. The superfields in the superpotential represent right-handed neutrino-sneutrino pairs, in addition to the SM particles and their superpartners. They have the following SM gauge group SU(3)$_c \\times$SU(2)$_L \\times$U(1)$_Y$ quantum numbers: [$$\\begin{split}&Q: (3,2,\\frac16),~~\\bar{U}: (\\bar{3}, 1, -\\frac23),~~\\bar{D}: (\\bar{3}, 1, \\frac13)
\\\\
& L : (1, 2, -\\frac12),~~\\bar{E}: (1,1,1),~~N: (1,0,0)
\\\\
&H_u: (1,2,\\frac12),~~H_d: (1,2,-\\frac12).\\end{split}$$]{} Relative minus signs of Yukawa terms are given to make the sign of terms responsible for the fermion Dirac masses to be the same.
The relevant soft supersymmetry (SUSY) breaking terms are given by [$$\\begin{split}\\mathcal{L}_{\\mathrm{soft}} =& - (m_{N}^2)^i_j \\tilde{N}^\\dagger_i \\tilde{N}_j - (m_L^2)^j_i \\tilde{L}^{\\dagger i} \\tilde{L}_j - m_{H_u}^2 H_u^{\\dagger} H_u \\\\-& \\Big[ \\frac{1}{2} (B_N M)^{ij} \\tilde{N}_i \\tilde{N}_j + (\\tilde{A_U})_{ij} \\tilde{U}^{i} \\tilde{Q}^{j} H_u -(\\tilde{ A_D})_{ij} \\tilde{D}^{i}\\tilde{Q}^{j} H_d -(\\tilde{A_E})_{ij} \\tilde{E}^i\\tilde{L}^j H_d +B\\mu H_u H_d + h.c. \\Big] .\\end{split}$$]{}
We consider two origins of soft terms. The first one is gauge mediation. In gauge mediation, sfermions obtain soft masses given by [@Giudice:1998bp] [$$\\begin{split}m_{\\tilde{f}}^2=4 \\sum_a\\Big(\\frac{g_a^2}{16\\pi^2}\\Big)^2 C_a \\sum_i \\Big(\\frac{F}{M_i}\\Big)^2\\ T_a({\\cal R}_i) f(x_i)\\end{split}$$]{} at the messenger scales $M_i$, where $C_a$ is the quadratic Casimir $\\sum_\\alpha T^\\alpha T^\\alpha$ of the sfermion representation ${\\cal R}_i$ under the corresponding gauge group labeled by $a$, which is given by $(N^2-1)/(2N)$ for SU(N) and $Y^2$ for U(1)$_Y$, $T_a$ is defined by ${\\rm Tr}T^\\alpha T^\\beta = T_a({\\cal R}_i)\\delta^{\\alpha \\beta}$, and $f(x_i)$ is the loop function of $x_i=F/M_i^2$ which is close to one for small $x_i$. On the other hand, the 2-loop tri-linear A term is very small and can be neglected at the messenger scale.
For the second origin of soft terms, we introduce a SUSY breaking spurion $X$ which couples to the right-handed neutrinos. Majorana mass of the right-handed neutrino comes from the scalar vacuum expectation value (VEV) of the SUSY breaking spurion $X$, [$$\\begin{split} W \\supset \\lambda X N N.\\end{split}$$]{} Then $N$ acts as the messengers of supersymmetry breaking, and the neutrino Dirac Yukawa coupling, [$$\\begin{split}W \\supset Y_\\nu N L H_u,\\end{split}$$]{} is interpreted as the direct mixing term among the messengers, Higgs and matter (leptons).
The SUSY breaking effects at the See-Saw scale $M_N=\\lambda \\langle X \\rangle$ is studied in [@Giudice:2010zn]. When right-handed neutrinos couple to the SUSY breaking sector, Majorana mass matrix is analytically continued to be $M_N \\rightarrow (1+ \\theta^2 B_N)M_N$, as in the case of gauge mediation[@Giudice:1997ni; @ArkaniHamed:1998kj; @Chacko:2001km]. Here, we assume that the flavor structure of the right-handed neutrinos is fully determined by $M_N$, so $B_N=F_X/X$ is a constant.
Then, SUSY breaking is transferred to the visible sector through the neutrino Dirac Yukawa interaction. Wave function renormalization from the interaction with right-handed neutrinos is given by [$$\\begin{split} \\delta Z_L = \\frac{Y_{\\nu}^{R \\dagger} }{16 \\pi^2} \\Big( 1- \\ln \\frac{M^{R \\dagger}M^R}{\\Lambda^2} \\Big) Y_{\\nu}^{R}, ~~~~~ \\delta Z_{H_u}= \\mathrm{Tr} \\delta Z_L \\label{eq:wavere}\\end{split}$$]{} where [$$\\begin{split} \\lambda_N^R = [Z_N^{-1/2}]^T \\lambda_N Z_L^{-1/2 } Z_{H_u}^{-1/2},~~~~~ M^R = [Z_N^{-1/2 }]^T M_N Z_N^{-1/2},\\end{split}$$]{} then analytically continued Majorana masses give the soft masses. From field redefinitions [$$\\begin{split}& L \\rightarrow \\big( 1- \\frac{\\delta Z_L \\vert_0}{2} \\big)(1- \\theta^2 \\delta Z_L \\vert_{\\theta^2})L
\\\\
& H_u \\rightarrow \\big( 1- \\frac{\\delta Z_{H_u} \\vert_0}{2} \\big)(1- \\theta^2 \\delta Z_{H_u} \\vert_{\\theta^2})H_u,\\end{split}$$]{} supersymmetric kinetic terms can be written in the simple form, [$$\\begin{split}\\Phi^{\\dagger}(1+ \\delta Z_{\\Phi}) \\Phi \\rightarrow \\Phi^{\\dagger}(1+ \\theta^2 \\bar{\\theta}^2 \\delta Z_{\\Phi} \\vert_{\\theta^2 \\bar{\\theta}^2}) \\Phi \\end{split}$$]{} then we can read off the one-loop corrections to the soft masses [$$\\begin{split} \\delta m_L^2 = - \\delta Z_L \\vert _{\\theta^2 \\bar{\\theta}^2}~~~ \\mathrm{and}~~~ \\delta m_{H_u}^2 = - \\delta Z_{H_u} \\vert _{\\theta^2 \\bar{\\theta}^2} \\label{eq:smass}.\\end{split}$$]{} In the expression, $B_N$ is just a constant, not a matrix. So $\\ln(M_N^\\dagger M_N)$ in the wave function renormalization is separated into holomorphic and anti-holomorphic parts, respectively. Since $\\theta^2\\bar{\\theta}^2$ term is not generated, we do not have one-loop soft masses.
Hence, as in minimal gauge mediation, soft masses are generated at two loop level. In [@Kang:2012ra], it was shown that soft scalar masses of the fields which directly couple to messengers and those which do not are different. In our model, the slepton $\\tilde{L}$ and the up-type Higgs $H_u$ couple to messengers $N$ directly to give soft terms,
[$$\\begin{split}&\\delta m_{L}^2 = \\frac{B_N^2}{(4 \\pi)^4} \\Big[ \\Big({\\rm Tr} [Y_{\\nu} Y_{\\nu}^\\dagger]+3{\\rm Tr} [Y_U Y_U^\\dagger] - 3 g_2^2 - \\frac{1}{5} g_1^2 \\Big) Y_{\\nu}^\\dagger Y_{\\nu} + 3 Y_{\\nu}^\\dagger Y_{\\nu} Y_{\\nu}^\\dagger Y_{\\nu} \\Big]
\\\\
&\\delta m_{H_u}^2 = \\frac{B_N^2}{(4 \\pi)^4} \\Big[ 4{\\rm Tr} [Y_{\\nu} Y_{\\nu}^\\dagger Y_{\\nu}^\\dagger Y_{\\nu}] -\\Big( 3 g_2^2 + \\frac{1}{5} g_1^2 \\Big){\\rm Tr} [Y_\\nu Y_\\nu^\\dagger] \\Big]. \\label{eq:mlsqaure}\\end{split}$$]{}
On the other hand, $\\tilde{Q}$ and $\\tilde{U}$ obtain two-loop soft scalar masses through the wave function renormalization of $H_u$ and the corrections are given by [$$\\begin{split}&\\delta m_{Q}^2=-\\frac{B_N^2}{(4 \\pi)^4} {\\rm Tr}[Y_\\nu Y_\\nu^\\dagger]Y_U^\\dagger Y_U
\\\\
&\\delta m_{U}^2=-\\frac{B_N^2}{(4 \\pi)^4} {\\rm Tr}[Y_\\nu Y_\\nu^\\dagger]Y_U Y_U^\\dagger\\end{split}$$]{} while the soft masses of $\\tilde{E}$ and $H_d$ come out of the wave function renormalization of $L$ and the corrections are given by [$$\\begin{split}
&\\delta m_{E}^2=-\\frac{B_N^2}{(4 \\pi)^4} Y_EY_\\nu^\\dagger Y_\\nu Y_E^\\dagger
\\\\
&\\delta m_{H_d}^2=-\\frac{B_N^2}{(4 \\pi)^4} {\\rm Tr}[Y_EY_\\nu^\\dagger Y_\\nu Y_E^\\dagger].\\end{split}$$]{}
By replacing $Y_E \\to Y_E(1+\\delta A_E)$, $Y_U \\to Y_U(1+\\delta A_U)$, and $Y_D \\to Y_D(1+\\delta A_D)$, we have following soft terms at one loop level, [$$\\begin{split}&\\delta A_E = -\\delta Z_L \\vert_{\\theta^2},~~~\\delta A_U = - \\mathbb{I} \\delta Z_{H_u} \\vert_{\\theta^2},
\\\\
&\\delta A_D = 0,~~~\\delta B = - \\delta Z_{H_u} \\vert_{\\theta^2}.\\label{eq:msoft}\\end{split}$$]{} Unlike gauge mediation, right-handed neutrino mediation generates one-loop $A-$terms, [$$\\begin{split}&A_E = \\frac{B_N}{16 \\pi^2 } Y_{\\nu}^\\dagger Y_{\\nu}
\\\\
&A_U = - {\\rm Tr} A_E \\times \\mathbb{I}_{3 \\times 3}
\\\\
&B = {\\rm Tr} A_E.\\end{split}$$]{}
While gauge mediation contributions are flavor universal, See-Saw Yukawa mediation is flavor dependent and one of the virtue of the gauge mediaion would disappear. In the absence of See-Saw Yukawa mediation, cLFV can appear when the messenger scale is higher than the right-handed neutrino Majorana mass scale. See-Saw Yukawa contributes to the slepton soft mass through the renormalization group equation (RGE) , [$$\\begin{split}\\mu\\frac{d}{d\\mu}m_L^2=\\mu\\frac{d}{d\\mu}m_L^2\\Big|_{\\rm MGM}+\\frac{1}{16\\pi^2}\\Big[(m_L^2 Y_\\nu^\\dagger Y_\\nu+Y_\\nu^\\dagger Y_\\nu m_L^2)+2(Y_\\nu^\\dagger m_N^2 Y_\\nu +m_{H_u}^2Y_\\nu^\\dagger Y_\\nu +\\tilde{A}_\\nu^\\dagger \\tilde{A}_\\nu)\\Big]\\end{split}$$]{} which should be restricted by cLFV constraints [@Grossman:2011fz]. Here $\\tilde{A}_\\nu=A_\\nu Y_\\nu$ is used. Since $m_L^2$ is two-loop generated, cLFV effects are further loop suppressed (at three loop level). Unlike mSUGRA, this effect is known to be small in gauge mediation as the messenger scale is at most comparable to the See-Saw scale and the running can be made in a very short interval. This is not the cLFV that we are interested in.
In neutrino assisted gauge mediation, neutrino Dirac Yukawa couplings can introduce two-loop generated cLFV effects on $m_L^2$ as a result of gauge-Yukawa or Yukawa mediation, [$$\\begin{split}\\delta m_{L}^2 = \\frac{B_N^2}{(4 \\pi)^4} \\Big[ \\Big({\\rm Tr} [Y_{\\nu} Y_{\\nu}^\\dagger]+3{\\rm Tr} [Y_U Y_U^\\dagger] - 3 g_2^2 - \\frac{1}{5} g_1^2 \\Big) Y_{\\nu}^\\dagger Y_{\\nu} + 3 Y_{\\nu}^\\dagger Y_{\\nu} Y_{\\nu}^\\dagger Y_{\\nu} \\Big]\\end{split}$$]{} in the charged lepton mass basis. If the two-loop generated slepton mass squared has a nonzero off-diagonal element, it would generate cLFV. Parametrically, this effect is much larger than the expected cLFV in mSUGRA or similar scenarios in which the effect comes from the running above the See-Saw scale. We simply assume that both messengers ${\\bf 5}, {\\bf \\bar{5}}$ and ${\\bf 1}$ have the same masses at the See-Saw scale. In principle these two masses can be different and cLFV can arise if the singlet messenger is lighter than ${\\bf 5}, {\\bf \\bar{5}}$. However, this effect is loop suppressed compared to the Yukawa mediation, so we do not consider it in this paper.
Further discussion on cLFV is possible only when there is an explicit flavor model providing the neutrino Dirac Yukawa and charged lepton Yukawa matrices. As a simple and illustrative example of the explicit model, we consider $S_4$ flavor symmetry in Sec. \\[sec:FlavorModel\\]. It will be shown that various types of See-Saw Yukawa $Y_\\nu$ would predict different sizes of effects on cLFV. Before moving onto the flavor discussion, let us consider the implication on the Higgs mass first.
Higgs mass and superparticle spectrum {#sec:higgs}
-------------------------------------
![Higgs mass with respect to $X_t$ for $\\tan\\beta=10$, stop mass $M_{\\tilde{t}} \\sim 2 {\\,\\textrm{TeV}}$. []{data-label="fig:xHiggs"}](Xthiggsmass.eps){width="45.00000%"}
Minimal gauge mediation does not generate $A_t$ at one loop and the weak scale $A_t$ is radiatively generated by the gluino loop. However, the same gluino contribution appears in stop soft scalar mass and the relative ratio of $|A_t|$ and $m_{\\tilde{t}}$ can not be large. On the other hand, the physical light CP even Higgs mass in the MSSM is affected by $\\hat{X_t} \\equiv (A_t -\\mu /\\tan \\beta)/m_{\\tilde{t}}$ and $\\hat{X_t} \\sim 2$ (or $\\sqrt{6}$ more precisely) gives the maximum finite threshold correction as shown in Fig. \\[fig:xHiggs\\].
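For orientation, recall the standard one-loop approximation for the lighter CP-even Higgs mass in the decoupling limit (with $v \\simeq 174 {\\,\\textrm{GeV}}$), which makes the role of $\\hat{X_t}$ explicit, [$$\\begin{split}m_h^2 \\simeq m_Z^2 \\cos^2 2\\beta + \\frac{3 m_t^4}{4 \\pi^2 v^2} \\Big[ \\ln \\frac{m_{\\tilde{t}}^2}{m_t^2} + \\hat{X_t}^2 \\Big( 1- \\frac{\\hat{X_t}^2}{12} \\Big) \\Big],\\end{split}$$]{} so the stop threshold correction is maximized at $\\hat{X_t}^2 = 6$, i.e. $|\\hat{X_t}| = \\sqrt{6} \\approx 2.45$.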
One way to make $|\\hat{X}_t| > 1$ at the weak scale is to start from a tachyonic stop boundary condition [@Dermisek:2006ey], which is explicitly realised in the gauge messenger model [@Dermisek:2006qj]. However, this option is not available in minimal gauge mediation. The other possibility is to couple the messengers directly to the visible sector fields such that a large negative $A$ term is generated at the messenger scale. If the $A$ term at the messenger scale is positive, the gluino contribution from the running cancels it. Matter-messenger mixing [@Shadmi:2011hs; @Albaid:2012qk; @Abdullah:2012tq; @Evans:2012hg; @Evans:2011bea] has also been considered recently. A messenger-matter-matter Yukawa coupling would generate the needed $A_t$ term at the messenger scale. However, the full Yukawa couplings are $3\\times 3$ matrices, and why all other dangerous Yukawa couplings between matter and messengers are absent except for the $33$ component remains a puzzle. One way out is to consider Higgs-messenger mass mixing [@Chacko:2001km] and to generate all the wanted Yukawa couplings between matter and messengers from the ordinary Yukawa couplings of matter with the Higgs. There would be a direct modification of the squark spectrum if the squarks couple directly to the messengers.
![ Phase diagrams indicating stop tachyonic and no EWSB region for $\\tan \\beta = 10$, $\\tan \\beta = 30$, respectively. $B_N$ is set to be $5 \\times 10^5 {\\,\\textrm{GeV}}$. []{data-label="fig:phase"}](phase10.eps "fig:"){width="70.00000%"} ![ Phase diagrams indicating stop tachyonic and no EWSB region for $\\tan \\beta = 10$, $\\tan \\beta = 30$, respectively. $B_N$ is set to be $5 \\times 10^5 {\\,\\textrm{GeV}}$. []{data-label="fig:phase"}](phase30.eps "fig:"){width="70.00000%"}
Higgs-messenger mixing through a Higgs-messenger-messenger coupling or a Higgs-Higgs-messenger coupling has been considered in [@Kang:2012ra; @Craig:2012xp]. In this case, we often encounter the $A/m^2$ problem. To understand this, it is worth emphasizing that the two-loop soft mass squared of the Higgs field $H_u$, which has a direct coupling to the messenger $\\Phi$, has the structure $m_{H_u}^2 \\sim c \\lambda^4-c^\\prime \\lambda^2 g^2$, where $\\lambda$ is the coupling constant of the Higgs and messenger fields and $g$ denotes the gauge coupling(s). On the other hand, the two-loop soft mass squared of the fields $Q,\\bar{U}$, which do not have a direct coupling with the messenger, has the form $m_{Q_3,\\bar{U}_3}^2 \\sim -c_3\\lambda^2 y_t^2$. This fact was extensively studied in [@Kang:2012ra]. For sufficiently large $\\lambda$, large one-loop $A$ terms are generated. At the same time, $m_{H_u}^2$ becomes positive, so the soft mass of $H_u$ can be much larger than that in pure gauge mediation. Moreover, the soft masses of $Q_3,\\bar{U}_3$ can be much smaller. If the Higgs $H_u$ superfield directly couples to the messengers whereas the top superfields do not, a relatively light stop as in natural SUSY can easily be obtained, since the stop soft mass is reduced by the effect explained above while the large LR mixing comes from the large $A$ term. The $A/m^2$ problem appears in the $H_u$ soft terms: a large $A$ term is accompanied by a large $m_{H_u}^2$ at the messenger scale, which can make electroweak symmetry breaking difficult. It is analogous to the famous $\\mu/B\\mu$ problem in gauge mediation. To avoid this while allowing the large $\\lambda$ needed for maximal mixing, a large $-c^\\prime \\lambda^2 g^2$ part in $m_{H_u}^2$ is required. This can be achieved by introducing new gauge bosons or by involving a strong interaction [@Kang:2012ra]. Alternatively, a negative one-loop contribution to $m_{H_u}^2$ can be considered if the messenger scale is low, as analysed in detail in [@Craig:2012xp].
![ Higgs mass as a function of $y_{\\nu}$ for $B_N = 5 \\times 10^{5} {\\,\\textrm{GeV}}$, $\\rho = 0.1$. A $125 {\\,\\textrm{GeV}}$ Higgs mass can be achieved with the help of Yukawa mediation in the large $\\tan \\beta$ region. At $y_{\\nu} \\sim 0.7$, the stop mass is approximately $1 {\\,\\textrm{TeV}}$. []{data-label="fig:Higgsmass"}](YHiggsMass.eps){width="70.00000%"}
Neutrino assisted gauge mediation uses the Yukawa coupling among the messengers (right-handed neutrinos), the Higgs and the lepton doublets. As a result, the Higgs and lepton doublet soft scalar masses get extra contributions from Yukawa mediation. The same $A/m^2$ problem applies here, and in principle the neutrino Dirac Yukawa coupling cannot be taken large if electroweak symmetry breaking is to succeed. On the other hand, a too large $m_{H_u}^2$ and a too large $A$ term may drive the stop tachyonic through renormalization group running with the top Yukawa. The problem becomes worse as the stop soft scalar mass squared at the messenger scale gets a negative contribution from Yukawa mediation. The situation is shown in Fig. \\[fig:phase\\]. For a running top quark mass of 160 GeV (the central value), the tachyonic stop appears before the actual $A/m^2$ problem prevents successful electroweak symmetry breaking as we increase $y_\\nu$. The crucial difference compared to the previous works in which the $A/m^2$ problem is emphasized [@Kang:2012ra; @Craig:2012xp] comes from the number of messengers. In neutrino assisted gauge mediation, the number of messengers is three, $N=3$. The $\\lambda^2$ contribution of the single-messenger case is effectively replaced by $N y_\\nu^2$. Large $N$ effectively reduces the $A/m^2$ problem by $1/N$; at the same time, a smaller $y_\\nu$ can provide the same impact with the aid of $N > 1$. Since the stop becomes tachyonic as $y_\\nu$ gets larger, it is easy to realise maximal stop mixing by making the stop soft scalar mass sufficiently small.
![ Stop mass $M_{\\tilde t}$ as a function of $y_{\\nu}$ for $B_N = 5 \\times 10^{5} {\\,\\textrm{GeV}}$, $\\rho = 0.1$, $\\tan \\beta = 10$. []{data-label="fig:stopmass"}](LambdaMt.eps){width="50.00000%"}
![ $X_t / M_{\\tilde t}$ as a function of $y_{\\nu}$ for $B_N = 5 \\times 10^{5} {\\,\\textrm{GeV}}$, $\\rho = 0.1$, $\\tan \\beta = 10$. []{data-label="fig:XtMt"}](LambdaXtMt.eps){width="50.00000%"}
Fig. \\[fig:Higgsmass\\] shows the Higgs mass with the assistance of the neutrino messengers, compared to minimal gauge mediation, which corresponds to $y_\\nu=0$, with a stop mass at around 1 TeV. In minimal gauge mediation, the Higgs mass is computed to be around $121 \\sim 122$ GeV for $\\tan \\beta = 10 \\sim 30$. For $y_\\nu = 0.7$, the Higgs mass can be as large as $125 \\sim 126$ GeV, so a 4 to 5 GeV gain in the Higgs mass is obtained in neutrino assisted gauge mediation. The gain does not look impressive, but it has a large impact on the allowed superparticle spectrum. In the absence of $A_t$ at the messenger scale, as is the case in minimal gauge mediation, this extra gain can only be achieved by making the logarithmic contribution large, and the stop mass should then be as heavy as 5 to 10 TeV rather than 2 TeV.
Note that the plot stops at $y_\\nu=0.72$. Neutrino assisted gauge mediation is classified as a Higgs-messenger mixing scenario, as the right-handed neutrino is the messenger and the neutrino Dirac Yukawa coupling connects the Higgs, the lepton doublet and the messenger (right-handed neutrino). The stop soft scalar mass squared gets smaller and eventually becomes tachyonic as the neutrino Dirac Yukawa coupling is increased, as shown in Fig. \\[fig:stopmass\\]. The logarithmic correction to the Higgs mass also drops rapidly beyond $y_\\nu \\sim 0.7$ as the stop becomes too light (and eventually tachyonic), as shown in Fig. \\[fig:Higgsmass\\]. Maximal mixing is realised around this point, as shown in Fig. \\[fig:XtMt\\]. This also corresponds to the corner of parameter space next to the critical point, as in [@Giudice:2006sn].
![ $\\tilde{A}_t \\equiv A_t Y_t$ as a function of $y_\\nu$ for $\\tan \\beta = 10$, $B_N = 5 \\times 10^5$GeV. Without Yukawa mediation, one can obtain $\\tilde{A}_{t} \\sim -2700{\\,\\textrm{GeV}}$ at the weak scale by RG running effects. With help of neutrino mediation at the messenger scale, one can obtain $\\tilde{A}_{t} \\sim -4500{\\,\\textrm{GeV}}$ at weak scale. This drives more stop mixing, which helps $125 {\\,\\textrm{GeV}}$ Higgs mass. []{data-label="fig:At"}](lambdaA.eps){width="60.00000%"}
Fig. \\[fig:At\\] compares $A_t$ in minimal gauge mediation and in neutrino assisted gauge mediation, both at the messenger scale and at the weak scale. Note that $A_t$ itself is enhanced by a factor of about 1.5 at the weak scale with the help of the messenger scale $A_t$.
![Maximum value of the Higgs mass as a function of $B_N$. For $\\tan \\beta = 10$, at least $B_N = 360 {\\,\\textrm{TeV}}$ is required to obtain a $125{\\,\\textrm{GeV}}$ Higgs mass; for $\\tan \\beta = 30$, $B_N = 300 {\\,\\textrm{TeV}}$ is required. For the two points $( \\tan \\beta = 10 , B_N = 360 {\\,\\textrm{TeV}})$ and $( \\tan \\beta = 30 , B_N = 300 {\\,\\textrm{TeV}})$, we display the sparticle spectra in Table \\[table:Spectrum125\\]. Spectra with a $123 {\\,\\textrm{GeV}}$ Higgs mass, for $(\\tan \\beta = 10, B_N = 240 {\\,\\textrm{TeV}})$ and $(\\tan \\beta = 30 , B_N = 200 {\\,\\textrm{TeV}})$, are displayed in Table \\[table:Spectrum123\\].[]{data-label="fig:BNHiggs"}](maxhiggs.eps){width="80.00000%"}
Fig. \\[fig:BNHiggs\\] shows the relation between $B_N$ and the Higgs mass. The neutrino Dirac Yukawa coupling $y_\\nu$ is chosen to be close to $0.72$ which can maximize the Higgs mass for given $B_N$.
In summary, minimal gauge mediation needs a stop mass at around 5 to 10 TeV to raise the Higgs mass up to 125 GeV. If the right-handed neutrinos are the messengers of supersymmetry breaking, the so-called ‘neutrino assisted gauge mediation’, we can explain the 125 GeV Higgs mass with a stop lighter than 2 TeV.
Flavor Model {#sec:FlavorModel}
=============
In this section, we consider models which can successfully explain neutrino oscillations. Since the SUSY breaking mediation through the neutrino Dirac Yukawa coupling is flavor dependent in general, sizable cLFV could be generated. To avoid this, the neutrino Dirac Yukawa coupling is set to be proportional to the identity. In the right-handed neutrino mass basis, it is then proportional to a unitary matrix, so the soft mass $m_L^2$, which depends on the combination $Y_\\nu^\\dagger Y_\\nu$, is flavor universal. This is easily achieved by employing a non-abelian discrete symmetry that yields tri-bi-maximal mixing of the PMNS matrix. Since tri-bi-maximal mixing must be modified to make $\\theta_{13}$ nonzero, as reported by several experiments [@Abe:2011fz; @Hartz:2012np; @Adamson:2012rm; @An:2012eh; @Ahn:2012nd], small corrections should be added. When the neutrino Dirac Yukawa coupling receives such corrections and deviates from the identity, cLFV is generated. We explore several ways to suppress cLFV, at least below the experimental bounds.
--------------------- -------------------- ------- ---------- ----------
Superfield $S_4$ $Z_4$ U(1)$_L$ U(1)$_R$
\\[0.2em\\]
\\[-1.1em\\] $L$ ${\\bf 3}$ 1 1 1
\\[0.4em\\] $\\bar{E}$ ${\\bf 2+1}$ 2 -1 0
\\[0.4em\\] $N$ ${\\bf 3}$ 3 -1 0
\\[0.4em\\] $\\Phi$ ${\\bf 3+3^\\prime}$ 1 0 0
\\[0.4em\\] $\\chi$ ${\\bf 1+2+3}$ 2 2 0
\\[0.4em\\] $H_u$ ${\\bf 1}$ 0 0 1
\\[0.4em\\] $H_d$ ${\\bf 1}$ 0 0 1
\\[0.4em\\] $X$ ${\\bf 1}$ 0 0 2
\\[0.4em\\]
--------------------- -------------------- ------- ---------- ----------
: Charge assignments under $S_4 \\times Z_4 \\times {\\rm U(1)}_L \\times {\\rm U(1)}_R $ for leptons, flavons, Higgs, and SUSY breaking spurions. []{data-label="table:charges"}
To make the PMNS matrix tri-bi-maximal, we use the $S_4$ discrete symmetry, since it is closely related to the permutation structure of the Yukawa couplings. Other discrete symmetries, such as $A_4$, the even-permutation subgroup of $S_4$, could be used. The main difference is that the first and second generations of the right-handed leptons belong to the two-dimensional representation ${\\bf 2}$ in $S_4$, while they correspond to the distinct one-dimensional representations ${\\bf 1^\\prime}$ and ${\\bf 1^{\\prime\\prime}}$ in $A_4$. In [@He:2006dk; @He:2006qd; @He:2011gb; @BenTov:2012tg], the structure we use is obtained from $A_4$ symmetry, and the discussion of deviations from tri-bi-maximal mixing proceeds in parallel. Model building with the $S_4$ symmetry is reviewed in [@Bazzocchi:2012st]. In Appendix A, we summarise the representations and tensor products of the $S_4$ group.
In the quark sector, the CKM matrix is close to the identity. The deviation from the identity has a hierarchical structure parametrized by powers of the Cabibbo angle, $\\lambda=\\sin\\theta_C$. On the other hand, the PMNS matrix, the mixing matrix in the lepton sector, has large mixing angles. Even the smallest mixing angle, $\\theta_{13}$, is of order $\\lambda$. To explain this, it is natural to assume that the $u$- and $d$-quark sectors have almost the same structure under the discrete flavor symmetry whereas the charged lepton and the right-handed neutrino sectors do not. This picture can be realised by introducing appropriate ‘flavons’ charged under the discrete symmetry group, and more symmetries can be introduced to forbid unwanted couplings. Here, we consider the symmetry group $S_4 \\times Z_4 \\times {\\rm U(1)}_L$, where U(1)$_L$ represents a lepton number, which may be discretized. In this paper, we consider the superpotential for the See-Saw mechanism with flavons $\\Phi$ and $\\chi$, [$$\\begin{split}W= -\\lambda_{1ij} \\bar{E}_i \\Phi L_j H_d + \\lambda_{2ij} N_i L_j H_u+\\frac12 \\lambda_{3ij} X N_i \\chi N_j,\\end{split}$$]{} where $i,j=1,2,3$ are the generation indices and $X$ is a SUSY breaking spurion. The $S_4$, $Z_4$, U(1)$_L$ and U(1) R-symmetry quantum numbers are given in Table \\[table:charges\\].
The charged lepton Yukawa couplings can be constructed from $\\bar{E}\\Phi L H_d$, the neutrino Dirac Yukawa coupling from $N L H_u$, and the Majorana mass of the heavy neutrinos from $X N\\chi N$. On the other hand, $\\Phi^2$, $\\chi^2$, and $\\Phi\\chi$ cannot couple to the combinations $\\bar{E} L H_d$, $NL$, and $XNN$ to make singlets. Note that U(1)$_R$ is introduced to forbid the unwanted coupling $N \\chi N$, which would make $B_N$ a matrix rather than a universal constant.
The discrete symmetry quantum numbers can be extended to the quark sector, for instance $Q: ({\\bf 3},1,0,1)$, $\\bar{U} : ({\\bf 2+1},2,0, 0)$, and $\\bar{D} : ({\\bf 2+1},2,0, 0)$ under $S_4\\times Z_4\\times {\\rm U(1)}_L \\times$U(1)$_R$. The flavons $\\Phi : ({\\bf 3+3^\\prime}, 1,0,0)$ make the singlet combinations ${\\bar U}\\Phi Q H_u+{\\bar D}\\Phi Q H_d$, and the Yukawa couplings $Y_U$ and $Y_D$ have the same form as the charged lepton Yukawa coupling. They are diagonalized by the same unitary matrix, so the CKM matrix is the identity at leading order. If another type of flavon couples to either the up or the down quark sector to give subleading corrections of order $\\lambda$, it would explain the Cabibbo angle.
The lepton doublets $L_i$ are in the [**3**]{} and $\\bar{E}_j$ in the [**1+2**]{} representation, decomposed as $(\\bar{E}_1)_{\\bf 1}+(\\bar{E}_2,\\bar{E}_3)_{\\bf 2}$. There are also the SM singlet flavons $\\Phi_{\\bf 3}$ and $\\Phi_{\\bf 3^\\prime}$ in the ${\\bf 3}$ and ${\\bf 3^\\prime}$ representations. We do not provide a complete vacuum alignment analysis in this setup; instead, in Appendix B we show a few simple examples in which the aligned vacuum is realised. If, for instance, the VEVs are arranged to be $\\langle\\Phi_{\\bf 3}\\rangle =v_2 (1,1,1)$ and $\\langle \\Phi_{\\bf 3^\\prime}\\rangle =v_3 (1,1,1)$, we have the following Yukawa structure [$$\\begin{split} Y_E=\\lambda_E \\frac{1}{\\sqrt3}
\\left(
\\begin{array}{ccc}
c &c& c\\\\
a & a\\omega & a\\omega^2 \\\\
b & b\\omega^2 & b\\omega
\\end{array}\\right) \\label{eq:leptonyukawa}
\\end{split}$$]{} where $a=(\\lambda v_2+ \\lambda^\\prime v_3)/\\Lambda$, $b=(\\lambda v_2-\\lambda^\\prime v_3)/\\Lambda$, $c=\\lambda^{\\prime \\prime}v_2/\\Lambda$, and $\\lambda, \\lambda^\\prime, \\lambda^{\\prime \\prime}$ are coupling constants of $\\bar{E}_{\\bf 2} L_{\\bf 3}\\Phi_{\\bf 3}$, $\\bar{E}_{\\bf 2} L_{\\bf 3}\\Phi_{\\bf 3^\\prime}$, and $\\bar{E}_{\\bf 1} L_{\\bf 3}\\Phi_{\\bf 3}$, respectively. In this case, $Y_E^\\dagger Y_E$ has the form of [$$\\begin{split} Y_E^\\dagger Y_E=|\\lambda_E|^2
\\left(
\\begin{array}{ccc}
a^2+b^2+c^2 & c^2+a^2\\omega+b^2\\omega^2 & c^2+b^2\\omega+a^2\\omega^2 \\\\
c^2+a^2\\omega^2+b^2\\omega & a^2+b^2+c^2 & c^2+a^2\\omega+ b^2\\omega^2 \\\\
c^2+b^2\\omega^2+a^2\\omega & c^2+a^2\\omega^2+b^2\\omega & a^2+b^2+c^2
\\end{array}\\right)
,\\end{split}$$]{} which will be diagonalized to $|\\lambda_E|^2((\\epsilon^3)^2, (\\epsilon)^2, 1)$ by the unitary matrix, [$$\\begin{split} V_L^l=\\frac{1}{\\sqrt3}
\\left(
\\begin{array}{ccc}
1&1&1\\\\
1& \\omega^2 & \\omega \\\\
1&\\omega &\\omega^2
\\end{array}\\right). \\label{leptonunit}
\\end{split}$$]{} Here we use $\\epsilon \\simeq m_\\mu/m_\\tau$ as the order parameter. Then, $c=\\epsilon^3$, $a=\\epsilon$ and $b=1$.
On the other hand, let heavy neutrinos $N_i$ be in the triplet ${\\bf 3}$. $\\Phi_{\\bf 3}$ and $\\Phi_{\\bf 3^\\prime}$ cannot couple to the combination $L_i N_j$ by $Z_4$ and U(1)$_L$ symmetries as well as the SM gauge symmetry. Since the combination $L_1N_1+L_2N_2+L_3N_3$ is a singlet, we naturally have the neutrino Dirac Yukawa coupling $Y_\\nu$ proportional to the identity. Finally, $XN_iN_j$ has again the form of ${\\bf 3} + {\\bf 3^\\prime}+{\\bf 1}+{\\bf 2}$. $\\Phi$’s cannot couple to it while singlet $\\chi_{\\bf 1}$ and triplet $\\chi_{\\bf 3}$ in the singlet and triplet representation can do, so we have the following Majorana mass term: [$$\\begin{split} M_N=
\\left(
\\begin{array}{ccc}
w_1 & 0&w_2 \\\\
0 &w_1 & 0 \\\\
w_2&0& w_1
\\end{array}\\right)
\\end{split}$$]{} where $\\langle \\chi_{\\bf 1} \\rangle=w_1$ and $\\langle \\chi_{\\bf 3} \\rangle=w_2(0,1,0)$, respectively. Therefore, the neutrino mass matrix $M_\\nu=-v_u^2 Y_\\nu^T M_N^{-1} Y_\\nu$ is diagonalized by [$$\\begin{split} V_L^\\nu=
\\left(
\\begin{array}{ccc}
\\frac{1}{\\sqrt2} & 0&-\\frac{1}{\\sqrt2}\\\\
0 &1 & 0 \\\\
\\frac{1}{\\sqrt2}&0& \\frac{1}{\\sqrt2}
\\end{array}\\right)
\\end{split}$$]{} so we obtain the PMNS matrix in the tri-bi maximal mixing, [$$\\begin{split} V_{\\rm PMNS}\\equiv (V_L^l)^\\dagger V_L^\\nu=
\\left(
\\begin{array}{ccc}
\\sqrt{\\frac{2}{3}} & \\frac{1}{\\sqrt3}&0\\\\
-\\omega\\frac{1}{\\sqrt6} &\\omega\\frac{1}{\\sqrt3} & e^{-i5\\pi/6}\\frac{1}{\\sqrt2}\\\\
-\\omega^2\\frac{1}{\\sqrt6}&\\omega^2\\frac{1}{\\sqrt3}& e^{i5\\pi/6}\\frac{1}{\\sqrt2}
\\end{array}\\right).
\\end{split}$$]{} In this construction, the $S_4$ triplet flavons have VEVs in the direction of $(1,1,1)$ or $(0,1,0)$. These directions are easily stabilized compared to other directions, such as $(1,1,0)$, as argued in Appendix B. Note that a $Y_\\nu$ proportional to the identity does not give rise to LFV: from Eq. (\\[eq:mlsqaure\\]), $m_L^2$ from the neutrino Dirac Yukawa mediation is then flavor universal. In the right-handed neutrino and charged lepton mass basis, $Y_\\nu$ becomes $V_L^\\nu Y_\\nu$ and $m_L^2$ becomes $(V_L^l)^\\dagger m_L^2 V_L^l$. As a result, the PMNS matrix is multiplied in and could change the $m_L^2$ matrix. However, if the neutrino Dirac Yukawa matrix is proportional to the identity matrix, the property $V \\mathbb{I} V^\\dagger = \\mathbb{I} V V^\\dagger = \\mathbb{I}$ cancels such effects.
There are various ways to introduce corrections that generate a non-zero $\\theta_{13}$. Moreover, the corrected neutrino mass matrix should remain consistent with the measurements of $\\theta_{12}$ and $\\theta_{23}$ as well as with the neutrino mass squared differences, $\\Delta m^2_{\\rm sol} \\equiv m_2^2-m_1^2$ and $|\\Delta m^2_{\\rm atm}| \\equiv |m_3^2 -m_2^2|$. Since the overall neutrino mass scale is not known, the important quantity is the ratio of neutrino mass squared differences, as described in [@BenTov:2012tg], [$$\\begin{split}\\sqrt{|R|} \\equiv \\sqrt{\\frac{\\Delta m^2_{\\rm atm}}{\\Delta m^2_{\\rm sol}}}.\\end{split}$$]{} The measured values adopted from [@Beringer:1900zz] are [$$\\begin{split}&\\Delta m^2_{\\rm sol}=(7.50\\pm 0.20)\\times 10^{-5} {\\rm eV}^2
\\\\
&\\Delta m^2_{\\rm atm}=(0.00232)^{+0.00012}_{-0.00008}{\\rm eV}^2 \\nonumber\\end{split}$$]{} [$$\\begin{split}&\\sin^2(2\\theta_{12})=0.857 \\pm 0.024
\\\\
&\\sin^2(2\\theta_{23})>0.95
\\\\
&\\sin^2(2\\theta_{13})=0.098 \\pm 0.013\\end{split}$$]{} at 90% C.L. Global analyses of these quantities can be found in [@Fogli:2012ua; @GonzalezGarcia:2012sz].
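For orientation, the central values quoted above translate into the following numbers (a trivial numerical check, not an independent fit):

```python
import numpy as np

dm2_sol = 7.50e-5                     # eV^2, central value quoted above
dm2_atm = 2.32e-3                     # eV^2, central value quoted above
print("sqrt|R|  =", np.sqrt(dm2_atm / dm2_sol))          # ~ 5.6
print("theta_13 =", 0.5 * np.arcsin(np.sqrt(0.098)))     # ~ 0.16 rad
```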
Suppose, for simplicity, we leave the charged lepton sector untouched and correct neutrino sector only. Moreover, we keep the mixings of $\\nu_2$ with $\\nu_{1, 3}$ forbidden, so that $V_L^\\nu$ is modified to [$$\\begin{split} V_L^\\nu=
\\left(
\\begin{array}{ccc}
\\cos(\\frac{\\pi}{4}+\\delta) & 0&-\\sin(\\frac{\\pi}{4}+\\delta) \\\\
0 &1 & 0 \\\\
\\sin(\\frac{\\pi}{4}+\\delta)&0& \\cos(\\frac{\\pi}{4}+\\delta)
\\end{array}\\right).
\\end{split}$$]{} For small $\\delta$, $\\cos(\\frac{\\pi}{4}+\\delta) \\simeq (1/\\sqrt2)(1-\\delta)$ and $\\sin(\\frac{\\pi}{4}+\\delta) \\simeq (1/\\sqrt2)(1+\\delta)$. From [$$\\begin{split}&V_{\\rm PMNS}=(V_L^l)^\\dagger V_L^\\nu
\\\\
&=\\frac{1}{\\sqrt3}
\\left(
\\begin{array}{ccc}
1&1&1\\\\
1& \\omega & \\omega^2 \\\\
1&\\omega^2 &\\omega
\\end{array}\\right)
\\frac{1}{\\sqrt2}
\\left(
\\begin{array}{ccc}
(1-\\delta)&0&-(1+\\delta)\\\\
0& 1 & 0 \\\\
(1+\\delta)&0 &(1-\\delta)
\\end{array}\\right),\\end{split}$$]{} we see that the (13) element of the PMNS matrix is given by [$$\\begin{split}|V_{e3}|=\\Big|\\frac{2\\delta}{\\sqrt6}\\Big|.\\end{split}$$]{} If such corrections are entirely contained in the right-handed neutrino Majorana mass term while $Y_\\nu$ is untouched, there would be no observable charged lepton flavor violating process. For example, let us introduce a doublet flavon $\\chi_{\\bf 2}$. Its VEV modifies the diagonal elements of the Majorana mass matrix: with $\\langle \\chi_2 \\rangle =x^2 (1,1)$, the diagonal terms receive a correction $x^2[2N_1N_1-N_2N_2-N_3N_3]$. In principle, by introducing several doublets with different VEVs, each diagonal term can be different.
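A quick numerical cross-check of the $|V_{e3}|$ relation above, using a hypothetical $\\delta = 0.05$ (any small value works):

```python
import numpy as np

# V_L^l from Eq. (leptonunit) and the delta-perturbed V_L^nu quoted above.
w = np.exp(2j * np.pi / 3)
VLl = np.array([[1, 1, 1], [1, w**2, w], [1, w, w**2]]) / np.sqrt(3)
delta = 0.05                                     # hypothetical small deviation
c, s = np.cos(np.pi / 4 + delta), np.sin(np.pi / 4 + delta)
VLnu = np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
Vpmns = VLl.conj().T @ VLnu
print(abs(Vpmns[0, 2]), 2 * delta / np.sqrt(6))  # agree up to O(delta^2)
```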
Model I {#Model I}
-------
Besides putting corrections into $M_N$, one can find $S_4$ doublet VEVs giving corrections to $Y_\\nu$ that make a sizable $\\theta_{13}$ while the dangerous charged lepton flavor violation is suppressed. To see this, consider the general $S_4$ doublet VEV $(a, b)$, where $a$ and $b$ are complex numbers. With this VEV and coupling $\\lambda_1$, $Y_\\nu$ is modified as [$$\\begin{split}\\left(
\\begin{array}{ccc}
1+\\lambda_1(a+b)& 0&0 \\\\
0 &1+\\lambda_1(b\\omega+a\\omega^2)& 0 \\\\
0&0& 1+\\lambda_1(b\\omega^2+a\\omega)
\\end{array}\\right)
\\end{split}$$]{} In this case, $Y_\\nu^\\dagger Y_\\nu$ in the charged lepton mass basis is given by [$$\\begin{split}\\left(
\\begin{array}{ccc}
1+\\lambda_1^2(|a|^2+|b|^2)& \\lambda_1(a^*+b)+\\lambda_1^2ab^*&\\lambda_1(a+b^*)+\\lambda_1^2a^*b \\\\
\\lambda_1(a+b^*)+\\lambda_1^2a^*b &1+\\lambda_1^2(|a|^2+|b|^2)& \\lambda_1(a^*+b)+\\lambda_1^2ab^* \\\\
\\lambda_1(a^*+b)+\\lambda_1^2ab^*&\\lambda_1(a+b^*)+\\lambda_1^2a^*b& 1+\\lambda_1^2(|a|^2+|b|^2)
\\end{array}\\right).
\\end{split}$$]{} If $\\lambda_1(a^*+b)+\\lambda_1^2ab^*=0$, all the off-diagonal elements vanish; for example, $\\lambda_1=1$ and $a=b=\\omega$ is such a case. This condition also implies that the off-diagonal terms of $Y_\\nu^\\dagger Y_\\nu Y_\\nu^\\dagger Y_\\nu$ vanish, so we do not expect any sizeable cLFV. However, this condition requires a cancellation between two different flavon contributions and is regarded as a serious fine tuning, distinct from vacuum alignment. We do not pursue this possibility further in this paper.
If $a^*=-b$ and both $|a|$ and $|b|$ are smaller than one, the (12) element of $Y_\\nu^\\dagger Y_\\nu$ in the charged lepton mass basis is given by $-\\lambda_1^2a^2$. The (23) element is the same and the (13) element is its complex conjugate, $-\\lambda_1^2(a^*)^2$. In this way, LFV is suppressed quadratically even though it does not vanish. For the $Y_\\nu^\\dagger Y_\\nu Y_\\nu^\\dagger Y_\\nu$ term, the (12) element is $2[1+\\lambda_1^2(|a|^2+|b|^2)][\\lambda_1(a^*+b)+\\lambda_1^2ab^*]+[\\lambda_1(a+b^*)+\\lambda_1^2a^*b]^2$; the (23) element is the same and the (13) element is its complex conjugate. When $a^*=-b$, it is $-2\\lambda_1^2a^2(1+2\\lambda_1^2|a|^2)+\\lambda_1^4(a^*)^4$, which is quadratically suppressed for small $a$. For illustration, suppose $\\lambda_1a=\\lambda_1b=i\\rho$. The stabilization of such a doublet VEV is discussed in Appendix B. The neutrino Dirac Yukawa then has the form [$$\\begin{split}Y_\\nu=y_\\nu\\left(
\\begin{array}{ccc}
1+2i\\rho& 0&0 \\\\
0 &1-i\\rho& 0 \\\\
0&0& 1-i\\rho
\\end{array}\\right)
\\label{eq:modYnu}\\end{split}$$]{} and the off-diagonal terms of $Y_\\nu^\\dagger Y_\\nu$ in the charged lepton mass basis are suppressed to ${\\cal O}(\\rho^2)$, as expected, [$$\\begin{split}(V_L^l)^\\dagger (Y_\\nu^\\dagger Y_\\nu) V_L^l=|y_\\nu|^2\\left(
\\begin{array}{ccc}
1+2\\rho^2& \\rho^2&\\rho^2 \\\\
\\rho^2 &1+2\\rho^2& \\rho^2 \\\\
\\rho^2&\\rho^2& 1+2\\rho^2
\\end{array}\\right).
\\end{split}$$]{} With this $Y_\\nu$, neutrino mass matrix is given by [$$\\begin{split}M_\\nu=-|y_\\nu|^2 \\frac{v^2\\sin^2\\beta}{2w_1}\\frac{1}{1-x^2}\\left(
\\begin{array}{ccc}
1+4i\\rho-4\\rho^2& 0&-x(1+i\\rho+2\\rho^2)\\\\
0 &(1-x^2)(1-2i\\rho-\\rho^2)& 0 \\\\
-x(1+i\\rho+2\\rho^2)&0& 1-2i\\rho-\\rho^2
\\end{array}\\right)
\\end{split}$$]{} and the deviation of mixing from $\\pi/4$ is given by [$$\\begin{split}\\delta=\\Big|\\frac{-6i\\rho+3\\rho^2}{4x(1
+i\\rho+2\\rho^2)}\\Big|\\simeq \\frac{3\\rho}{2x}\\end{split}$$]{} such that [$$\\begin{split}|V_{e3}|\\simeq \\frac{3\\rho}{\\sqrt6 x}.\\end{split}$$]{}
To first order in $\\rho$, the mass eigenvalues are given by [$$\\begin{split}-|y_\\nu|^2 \\frac{v^2\\sin^2 \\beta}{2w_1}\\Big(\\frac{1+i\\rho}{1+x}, 1-2i\\rho, \\frac{1+i\\rho}{1-x}\\Big).\\end{split}$$]{} Taking the absolute values of these eigenvalues, we obtain the neutrino masses $[|y_\\nu|^2 v^2\\sin ^2 \\beta/(2w_1)]\\,(1/(1+x), 1, 1/(1-x))+{\\cal O}(\\rho^2)$.
In summary, we expect that even though the charged lepton flavor violating effects are generated in the $A_E$ term at one loop and in the $m_L^2$ term at two loops, they can be suppressed by the extra small expansion parameter $\\rho$, which is proportional to $\\theta_{13}$. With the vacuum alignment of the doublet flavon along $i(v,v)$, the first-order correction in $\\rho$ cancels, and as a result the off-diagonal elements of the slepton mass squared are suppressed by $\\rho^2$. Fig. \\[fig:theta13\\] shows how the measured $\\theta_{13}$ can be explained by choices of the $\\rho$ and $x$ parameters satisfying the observed neutrino mass squared ratio $\\sqrt{|R|}$. The observed $\\theta_{13} \\sim 0.15$ can be accommodated for $\\rho \\sim 0.1$.
![ $\\theta_{13}$ with respect to $\\rho$ and $x$ parameters. All points in the colored region satisfy neutrino oscillation experiments. Neutrino $\\theta_{13}$, indicated on contour label in radian, is measured as $0.144 < \\theta_{13} < 0.160$ in $1 \\sigma$ level, $0.127 < \\theta_{13} < 0.174$ in $3 \\sigma$ level. []{data-label="fig:theta13"}](nuoscrpx.eps){width="65.00000%"}
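As a numerical illustration of the leading-order relation $|V_{e3}| \\simeq 3\\rho/(\\sqrt6 x)$, with $\\rho = 0.1$ as in the figure and a hypothetical value of $x$ chosen only to land inside the measured band:

```python
import numpy as np

rho, x = 0.1, 0.8     # rho as in Fig. [fig:theta13]; x is a hypothetical choice
Ve3 = 3 * rho / (np.sqrt(6) * x)
print("theta_13 ~", np.arcsin(Ve3))   # ~ 0.154, inside the 1-sigma band 0.144-0.160
```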
Model II {#Model II}
--------
Of course, $\\theta_{13}$ can come from both Majorana mass correction and neutrino Dirac Yukawa correction. Only the neutrino Dirac Yukawa coupling can affect the cLFV. To see the two-parameter case, consider [$$\\begin{split}Y_\\nu=y_\\nu \\left(
\\begin{array}{ccc}
1+2i\\rho & 0&0\\\\
0 &1 -i\\rho& 0 \\\\
0&0& 1-i\\rho
\\end{array}\\right)\\end{split}$$]{} and [$$\\begin{split} M_N=
\\left(
\\begin{array}{ccc}
w_1 & 0&w_2 \\\\
0 &w_1 & 0 \\\\
w_2&0& w_1(1-\\zeta)
\\end{array}\\right)
.\\end{split}$$]{}
The neutrino mass is given by [$$\\begin{split} M_\\nu=-|y_\\nu|^2 \\frac{v^2\\sin^2\\beta}{2w_1}\\frac{1}{1-x^2-\\zeta}
\\left(
\\begin{array}{ccc}
1+4i\\rho-\\zeta& 0&-x(1+i\\rho) \\\\
0 &(1-x^2)(1-2i\\rho)-\\zeta & 0 \\\\
-x(1+i\\rho)&0& 1-2i\\rho
\\end{array}\\right) +{\\cal O}(\\rho^2, \\zeta^2)
\\end{split}$$]{} where $x=w_2/w_1$ again. Then, three neutrino mass eigenvalues are given by [$$\\begin{split}-|y_\\nu|^2 \\frac{v^2\\sin^2\\beta}{2w_1}\\Big( \\frac{1+i\\rho}{1+x}+\\frac{\\zeta}{2(1+x)^2}, 1-2i\\rho, \\frac{1+i\\rho}{1-x}+\\frac{\\zeta}{2(1-x)^2}\\Big)\\end{split}$$]{} and [$$\\begin{split}\\delta = \\frac{\\sqrt{36 \\rho^2+\\zeta^2}}{4x}.\\end{split}$$]{} Hence, we see the (13) element of the PMNS matrix is given by [$$\\begin{split}|V_{e3}|=\\Big|\\frac{2\\delta}{\\sqrt6}\\Big|=\\Big|\\frac{\\sqrt{36 \\rho^2+\\zeta^2}}{2\\sqrt6 x}\\Big|.\\end{split}$$]{} Moreover, $m_L^2$ from Yukawa mediation is controlled by the parameter $\\rho$ only and $(V_L^l)^\\dagger (Y_\\nu^\\dagger Y_\\nu) V_L^l$ is the same as the previous case, [$$\\begin{split}(V_L^l)^\\dagger (Y_\\nu^\\dagger Y_\\nu) V_L^l=|y_\\nu|^2\\left(
\\begin{array}{ccc}
1+2\\rho^2& \\rho^2&\\rho^2 \\\\
\\rho^2 &1+2\\rho^2& \\rho^2 \\\\
\\rho^2&\\rho^2& 1+2\\rho^2
\\end{array}\\right).
\\end{split}$$]{}
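The trade-off between the two sources of $\\theta_{13}$ can be made explicit with the relation above; the value of $x$ below is illustrative only.

```python
import numpy as np

x = 0.8                                       # illustrative, as before
def Ve3(rho, zeta):
    # |V_e3| = sqrt(36 rho^2 + zeta^2) / (2 sqrt(6) x), as quoted above
    return np.sqrt(36 * rho**2 + zeta**2) / (2 * np.sqrt(6) * x)

print(Ve3(0.10, 0.00))   # zeta = 0: Dirac-Yukawa only (Model I), ~0.153
print(Ve3(0.00, 0.59))   # rho = 0: Majorana only, same theta_13, no cLFV
print(Ve3(0.01, 0.58))   # hybrid: small rho, cLFV suppressed by rho^2
```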
In the limit $\\zeta \\to 0$, both $\\theta_{13}$ and cLFV come from the neutrino Dirac Yukawa, which corresponds to Model I. In the opposite limit, $\\rho \\to 0$, $\\theta_{13}$ is entirely obtained from the Majorana mass term and cLFV does not appear. In addition, we can constrain the absolute mass scale of the light neutrinos. The most stringent constraint comes from the CMB data of the WMAP experiment, combined with supernova data and data on galaxy clustering, $\\Sigma_{j} m_{j} \\lesssim 0.68$ eV at 95% C.L. Conservatively, we set the bound $2.6 \\times 10^{14 } {\\,\\textrm{GeV}}\\lesssim M_N$. Throughout this paper, we use $M_N = 5 \\times 10^{14}$ GeV for the heaviest right-handed neutrino mass.
Charged Lepton Flavor Violation {#sec:LFV}
===============================
Since the flavor structures of the supersymmetric particles can differ from those of their SM partners, lepton flavor is easily violated in SUSY. In general, the structure of the slepton mass matrix induces dangerous cLFV. Such cLFV in SUSY is studied in [@Hisano:1995cp; @Arganda:2005ji]. In our model, once the identity structure of the neutrino Yukawa coupling $Y_\\nu$ is broken, cLFV is produced. As a possible modification, one may put off-diagonal terms into $Y_\\nu$. Alternatively, when the degeneracy of $Y_\\nu$ is broken, the combination of $Y_\\nu$s in the charged lepton mass basis, $(V_L^l)^\\dagger(Y_\\nu^\\dagger Y_\\nu)V_L^l$, has off-diagonal terms, as shown in Sec. \\[sec:FlavorModel\\]. The slepton mass squared gets an extra contribution from the neutrino Dirac Yukawa interactions, [$$\\begin{split}\\delta m_{L}^2 = \\frac{B_N^2}{(4 \\pi)^4} \\Big[ \\Big({\\rm Tr} [Y_{\\nu} Y_{\\nu}^\\dagger]+3{\\rm Tr} [Y_U Y_U^\\dagger] - 3 g_2^2 - \\frac{1}{5} g_1^2 \\Big) Y_{\\nu}^\\dagger Y_{\\nu} + 3 Y_{\\nu}^\\dagger Y_{\\nu} Y_{\\nu}^\\dagger Y_{\\nu} \\Big].\\end{split}$$]{} In the charged lepton mass basis, $(V_L^l)^\\dagger m_L^2 V_L^l$ has off-diagonal elements and cLFV appears. Even though this is a general feature, it is also possible to find some parameter space in which charged lepton flavor is conserved. For example, in Sec. \\[Model I\\], the off-diagonal terms of the slepton soft mass squared, $(m_L^2)_{12}$, can vanish for a specific value of $y_\\nu$. The corresponding condition is [$$\\begin{split}(\\delta m_L^2)_{12}&\\propto \\Big[{\\rm Tr}[Y_\\nu^\\dagger Y_\\nu]+3{\\rm Tr} [Y_U Y_U^\\dagger] -3g_2^2-\\frac15 g_1^2\\Big](Y_\\nu^\\dagger Y_\\nu)_{12}+3(Y_\\nu^\\dagger Y_\\nu Y_\\nu^\\dagger Y_\\nu)_{12}=0,\\end{split}$$]{} which is equivalent to [$$\\begin{split}(\\delta m_L^2)_{12}&\\propto \\Big[3(1+2\\rho^2)y_\\nu^2+3y_t^2-3g_2^2-\\frac15 g_1^2\\Big]y_\\nu^2 \\rho^2+3y_\\nu^4 2\\rho^2
\\\\
& \\simeq y_\\nu^2 \\Big[3y_t^2-3g_2^2-\\frac15 g_1^2\\Big]\\rho^2+9y_\\nu^4 \\rho^2+{\\cal O}(\\rho^4)=0.\\end{split}$$]{} Near the GUT scale, $g_1^2 \\simeq g_2^2 \\simeq 4\\pi/28$, and $y_t \\simeq 0.5$ so off diagonal term vanishes for $y_\\nu \\simeq 0.28$. For this value of $Y_\\nu$, there would be no unwanted cLFV. This is different from the condition that diagonal contribution involving $Y_\\nu$ vanishes, [$$\\begin{split}(\\delta m_L^2)_{ii}&\\propto\\Big[3(1+2\\rho^2)y_\\nu^2+3y_t^2-3g_2^2-\\frac15 g_1^2\\Big]y_\\nu^2(1+2\\rho^2)+3y_\\nu^4(1+8\\rho^2)
\\\\
& \\simeq y_\\nu^2 \\Big[3y_t^2-3g_2^2-\\frac15 g_1^2\\Big]+6y_\\nu^4 +{\\cal O}(\\rho^2)=0\\end{split}$$]{} which is satisfied for $y_\\nu \\simeq 0.34$.
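These two values follow directly from the quoted GUT-scale inputs; a short numerical check (assuming exactly $g_1^2 = g_2^2 = 4\\pi/28$ and $y_t = 0.5$) gives:

```python
import numpy as np

g1sq = g2sq = 4 * np.pi / 28      # GUT-scale gauge couplings quoted in the text
yt = 0.5                          # GUT-scale top Yukawa quoted in the text
y_offdiag = np.sqrt((3 * g2sq + g1sq / 5 - 3 * yt**2) / 9)   # off-diagonal cancellation
y_diag = np.sqrt((3 * g2sq + g1sq / 5 - 3 * yt**2) / 6)      # diagonal cancellation
print(y_offdiag, y_diag)          # ~ 0.28 and ~ 0.34
```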
Of course, this does not mean that $y_\\nu$ should take the lepton flavor conserving value. We have many constraints on $y_\\nu$ from various observations. In this paper, we try to explain the $125$ GeV Higgs mass with the large $A$ term generated from $y_\\nu$. One may also try to explain the deviation of the muon $g-2$ from the SM prediction. Moreover, the degeneracy breaking parameter $\\rho$ is used to explain the sizable $\\theta_{13}$. However, it is difficult to find a value of $y_\\nu$ which satisfies all of them. In this section, we present the cLFV rates for parameters explaining the $125$ GeV Higgs mass with a large $A$ term and $\\theta_{13}$. Thereafter, we discuss the muon $g-2$ constraints and the relation among $\\theta_{13}$, cLFV, and the Higgs mass.
Experimental status {#subsec:status}
-------------------
------------------------------------------------------------------------------------------- ------------------------------ -----------------------------
Observables Experimental bound Future sensitivity
\\[0.2em\\]
\\[-1.1em\\] ${\\rm Br}(\\mu \\to e \\gamma)$ $2.4 \\times 10^{-12} [63]$ ${\\cal O}(10^{-13})$ \\[63\\]
\\[0.4em\\] ${\\rm Br}(\\tau \\to \\mu \\gamma)$ $4.4 \\times 10^{-8}$\\[64\\] $2.4\\times10^{-9}$\\[69\\]
\\[0.4em\\] ${\\rm Br}(\\tau \\to e \\gamma)$ $3.3 \\times 10^{-8}$\\[64\\] $3.0\\times10^{-9}$ \\[69\\]
\\[0.4em\\] ${\\rm Br}(\\mu \\to 3e)$ $1.0 \\times 10^{-12}$ \\[65\\] ${\\cal O}(10^{-16})$ \\[70\\]
\\[0.4em\\] ${\\rm Br}(\\tau \\to 3e)$ $2.7 \\times 10^{-8}$\\[66\\] $2.3\\times10^{-10}$ \\[69\\]
\\[0.4cm\\] ${\\rm Br}(\\tau \\to 3 \\mu)$ $2.1 \\times 10^{-8}$\\[66\\] $8.2\\times10^{-10}$\\[69\\]
\\[0.4em\\] $\\frac{\\Gamma(\\mu{\\rm Ti}\\to e{\\rm Ti})}{\\Gamma(\\mu{\\rm Ti}\\to {\\rm capture})}$ $4.3 \\times 10^{-12}$\\[67\\] ${\\cal O}(10^{-18})$\\[71\\]
\\[0.4em\\] $\\frac{\\Gamma(\\mu{\\rm Au}\\to e{\\rm Au})}{\\Gamma(\\mu{\\rm Au}\\to {\\rm capture})}$ $7.0 \\times 10^{-13}$\\[68\\]
\\[0.4em\\]
------------------------------------------------------------------------------------------- ------------------------------ -----------------------------
: Various LFV experimental bounds and future sensitivities. The table is adopted from [@Abada:2012cq].[]{data-label="table:LFVexp"}
The current experimental bounds and future sensitivities for various cLFV processes at 90% C.L. are summarised in Table \\[table:LFVexp\\] [@Beringer:1900zz; @Hewett:2012ns].
![Feynman diagrams for $l_j \\to l_i \\gamma$ process with neutralino-charged slepton internal lines in the mass insertion scheme. []{data-label="fig:LFVn"}](LFV1a.eps "fig:"){width="45.00000%"} [![Feynman diagrams for $l_j \\to l_i \\gamma$ process with neutralino-charged slepton internal lines in the mass insertion scheme. []{data-label="fig:LFVn"}](LFV1b.eps "fig:"){width="45.00000%"}]{} 1.0cm [![Feynman diagrams for $l_j \\to l_i \\gamma$ process with neutralino-charged slepton internal lines in the mass insertion scheme. []{data-label="fig:LFVn"}](LFV1c.eps "fig:"){width="45.00000%"}]{}
![Feynman diagrams for $l_j \\to l_i \\gamma$ process with chargino-sneutrino internal lines in the mass insertion scheme. []{data-label="fig:LFVc"}](LFV2a.eps "fig:"){width="45.00000%"} [![Feynman diagrams for $l_j \\to l_i \\gamma$ process with chargino-sneutrino internal lines in the mass insertion scheme. []{data-label="fig:LFVc"}](LFV2b.eps "fig:"){width="45.00000%"}]{}
![ Branching ratios of $\\mu \\to e\\gamma$, $\\tau \\to \\mu\\gamma$ and $\\tau \\to e\\gamma $ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ljligamma"}](meg.eps "fig:"){width="70.00000%"} 0.5cm ![ Branching ratios of $\\mu \\to e\\gamma$, $\\tau \\to \\mu\\gamma$ and $\\tau \\to e\\gamma $ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ljligamma"}](tmg.eps "fig:"){width="50.00000%"} 0.5cm ![ Branching ratios of $\\mu \\to e\\gamma$, $\\tau \\to \\mu\\gamma$ and $\\tau \\to e\\gamma $ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ljligamma"}](teg.eps "fig:"){width="50.00000%"}
$l_j \\to l_i \\gamma$
--------------------
The amplitude for $l_j\\to l_i\\gamma$ is written as [$$\\begin{split}T=e\\epsilon^{\\mu *}\\overline{u_i}(p-q)\\Big[q^2\\gamma_\\mu(A^L_1P_L+A_1^RP_R)
+m_{l_j}i\\sigma_{\\mu\\nu}q^\\nu(A_2^LP_L+A_2^RP_R)\\Big]u_j(p).\\end{split}$$]{} On the mass shell ($q^2\\to0$), gauge invariance implies that the chirality-preserving part does not contribute to the $l_j\\to l_i\\gamma$ process. Hence, a chirality flip must take place in the on-shell $l_j\\to l_i\\gamma$ process. The decay rate is given by [$$\\begin{split}\\Gamma(l_j\\to l_i\\gamma) =\\frac{e^2}{16\\pi}m_{l_j}^5(|A^L_2|^2+|A^R_2|^2)\\end{split}$$]{} and the branching ratio is approximately [$$\\begin{split}{\\rm Br}(l_j \\to l_i \\gamma)\\sim \\frac{\\alpha^3}{G_F^2}\\frac{1}{m_{\\rm SUSY}^4} \\Big(\\frac{(m_L^2)_{ij}}{m_{\\rm SUSY}^2}\\Big)^2.\\end{split}$$]{}
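For orientation, the order-of-magnitude formula above can be evaluated directly; the mass insertion $(m_L^2)_{12}/m_{\\rm SUSY}^2$ below is a hypothetical value, and ${\\cal O}(1)$ and $\\tan\\beta$-dependent factors are ignored.

```python
alpha, GF = 1 / 137.036, 1.16638e-5     # G_F in GeV^-2
m_susy = 1000.0                         # GeV, illustrative SUSY scale
delta_12 = 1.0e-3                       # hypothetical (m_L^2)_12 / m_SUSY^2
Br = alpha**3 / GF**2 / m_susy**4 * delta_12**2
print(f"Br(mu -> e gamma) ~ {Br:.1e}")  # ~ 3e-15 for these inputs
```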
In the mass insertion scheme, the chirality flip can be easily analyzed. Consider first the case of the neutralino-charged slepton internal loop, as shown in Fig. \\[fig:LFVn\\]. Fig. \\[fig:LFVn\\] (a) shows the chirality flip from a fermion mass insertion on the external lepton line. In Fig. \\[fig:LFVn\\] (b), the chirality flip takes place in the internal slepton line through the LR mixing insertion $m_j(A-\\mu\\tan\\beta)$. This term contains the flavor-universal part $-m_j\\mu\\tan\\beta$, which can be enhanced in the limit of large $\\tan\\beta$ and large $\\mu$. The chirality flip in Fig. \\[fig:LFVn\\] (c) is given by the Yukawa coupling of the lepton-slepton-Higgsino vertex. This vertex contains a $1/\\cos\\beta$ factor which combines with a $\\sin\\beta$ from the Higgsino-gaugino mixing to give a $\\tan\\beta$ dependence to the diagram. Therefore, this diagram is enhanced in the large $\\tan\\beta$ limit. Note that it is inversely proportional to $\\mu$, the Higgsino mass. Since this diagram involves the SUSY mass scale only, unlike the other diagrams which are proportional to the Higgs VEV $v$ through $m_j$, it dominates over all other diagrams with the neutralino-charged slepton internal loop in many cases. However, since the Higgsino-bino mass insertion $M_Z\\sin\\beta\\sin\\theta_W$ and the Higgsino-wino mass insertion $-M_Z\\sin\\beta\\cos\\theta_W$ have opposite signs, a slight destructive interference occurs.
Next, the case of the chargino-sneutrino internal loop is shown in Fig. \\[fig:LFVc\\]. The diagrams are similar to those of the neutralino-charged slepton internal loop, except for the absence of LR mixing in the sneutrino line, since the right-handed neutrinos are already integrated out. The chirality flip can occur either on the external lepton line (Fig. \\[fig:LFVc\\] (a)) or at the lepton-sneutrino-Higgsino vertex (Fig. \\[fig:LFVc\\] (b)). The latter diagram dominates over the former and, since it does not suffer from destructive interference, it becomes the leading contribution among all diagrams in many cases. A similar argument applies to the muon $g-2$, whose SUSY contribution comes from the same diagram with flavor conservation. From this diagram, SUSY enhances the muon $g-2$ for positive $\\mu$ [@Lopez:1993vi; @Chattopadhyay:1995ae].
In Fig. \\[fig:ljligamma\\], we show branching ratios of various $l_j\\to l_i \\gamma$ processes for the model of Sec. \\[Model I\\]. In the plots, the neutrino Dirac Yukawa parameters are fixed to $y_\\nu=0.65$ and $\\rho=0.1$, while $\\tan\\beta$ and the SUSY breaking scale are varied. Since the off-diagonal terms of $m_L^2$ in the charged lepton mass basis are all the same, the normalized rates $\\Gamma(l_j \\to l_i\\gamma)/m_j^5$ are almost identical. Therefore, the branching ratios are essentially set by the total decay rate of the mother particle. For example, since the total decay rate of the tau normalized by $m_\\tau^5$ is about $5.3$ times larger than that of the muon normalized by $m_\\mu^5$, Br$(\\mu \\to e \\gamma)$ is about $5.3$ times larger than Br$(\\tau \\to e \\gamma)$ and Br$(\\tau \\to \\mu \\gamma)$, which are almost the same.
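This factor can be checked from the measured lifetimes and masses: with identical off-diagonal entries, the branching ratios scale as the total widths normalised by $m_{l_j}^5$.

```python
# (tau_mu / tau_tau) * (m_mu / m_tau)^5 = [Gamma_tot(tau)/m_tau^5] / [Gamma_tot(mu)/m_mu^5]
tau_mu, tau_tau = 2.197e-6, 2.903e-13   # lifetimes in seconds (PDG)
m_mu, m_tau = 0.10566, 1.77686          # masses in GeV (PDG)
print((tau_mu / tau_tau) * (m_mu / m_tau)**5)   # ~ 5.6, close to the factor quoted above
```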
$l_j^- \\to l_i^-l_i^-l_i^+$
---------------------------
In many cases, the dominant contribution comes from the photon penguin. The $Z$ boson penguin is suppressed in general because of an accidental cancellation when the neutralino or chargino is a pure gaugino or a pure Higgsino [@Hirsch:2012ax]. This accidental cancellation is broken by introducing TeV scale physics which couples to the sneutrino with a sizable coupling. This can be realised in R-parity violating models or in the TeV-scale inverse seesaw, for example [@Hirsch:2012ax; @Abada:2012cq].
In our case, the photon penguin is the leading contribution, so we have a simple relation between Br$(l_j\\to3l_i)$ and Br$(l_j \\to l_i \\gamma)$, [$$\\begin{split}\\frac{{\\rm Br}(l_j\\to3l_i)}{{\\rm Br}(l_j \\to l_i \\gamma)}=\\frac{\\alpha}{3\\pi}\\Big(\\ln\\frac{m_{l_j}^2}{m_{l_i}^2}-\\frac{11}{4}\\Big).\\label{eq:l3lllg}\\end{split}$$]{} The box diagram is suppressed in general, except in some special cases, such as SUSY with Dirac gauginos [@Fok:2010vk].
In Fig. \\[fig:ltolll\\], we show branching ratios of various $l_j\\to 3l_i$ processes for the model of Sec. \\[Model I\\]. The fixed parameters are the same as for the $l_j\\to l_i \\gamma$ processes. We see that ${\\rm Br}(\\mu \\to 3e)$ is suppressed by a factor of about 0.018 relative to ${\\rm Br}(\\mu \\to e \\gamma)$, so Eq. (\\[eq:l3lllg\\]) is satisfied. The photon penguin is the leading contribution for the $\\mu \\to 3e$ process. In the absence of any special feature which can overcome this natural size of the branching ratio, Br($l_j^- \\to l_i^-l_i^-l_i^+$) is $\\alpha/\\pi$ suppressed compared to Br($l_j^- \\to l_i^- \\gamma$).
![ Branching Ratios of $\\mu \\to 3e$, $\\tau \\to 3e$ and $\\tau \\to 3\\mu$ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ltolll"}](meee.eps "fig:"){width="70.00000%"} 0.5cm ![ Branching Ratios of $\\mu \\to 3e$, $\\tau \\to 3e$ and $\\tau \\to 3\\mu$ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ltolll"}](teee.eps "fig:"){width="50.00000%"} 0.5cm ![ Branching Ratios of $\\mu \\to 3e$, $\\tau \\to 3e$ and $\\tau \\to 3\\mu$ with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ltolll"}](tmmm.eps "fig:"){width="50.00000%"}
$\\mu-e$ conversion
------------------
Conversion of stopped muons into electrons in nuclei is a promising channel to look for charged lepton flavor violation. In principle there are many different operators, including scalar, photon-mediated vector and $Z$-boson-mediated vector operators, in addition to the dipole operator. The muon-to-electron conversion rate is conventionally normalised by the muon capture rate, [$$\\begin{split}
B_{\\mu \\to e} (Z) = \\frac{\\Gamma_{\\rm conv} (Z,A)}{\\Gamma_{\\rm capt}(Z,A)}.
\\end{split}$$]{} Here $Z$ is the atomic number of the target atom. Different targets give different $B_{\\mu \\to e}(Z)$, and the relative ratio of $B_{\\mu \\to e}(Z)$ for at least two different targets can provide information on the possible types of operators, since different operators predict different ratios. In supersymmetric models [@Kitano:2002mt], the dominant contribution to $\\mu-e$ conversion comes from the dipole operator. As a result, $B_{\\mu \\to e}(Z)$ is predicted to be suppressed by $\\alpha/\\pi$ compared to ${\\rm Br}(\\mu \\to e \\gamma)$. Depending on the choice of $Z$, the suppression is $10^{-3} \\sim 5 \\times 10^{-3}$. The current limit on the conversion rate is comparable to the $\\mu \\to e \\gamma$ branching ratio, but future $\\mu-e$ conversion experiments will have much better sensitivity. We plot the $\\mu-e$ conversion rate together with the expected sensitivities of planned experiments in Fig. \\[fig:ueconv\\].
![ $\\mu-e$ conversion rate with respect to the lightest selectron mass for $\\tan\\beta=3, 10, 30$, $y_\\nu=0.65$ and $\\rho=0.1$. []{data-label="fig:ueconv"}](mec.eps){width="70.00000%"}
Correlation between Muon $g-2$, $\\theta_{13}$, cLFV and the Higgs {#sec:mug-2}
------------------------------------------------------------------
![ Branching ratio of $\\mu \\to e \\gamma$ as a function of $\\theta_{13}$ for $\\tan \\beta = 10$, $y_{\\nu}= 0.62$, $\\rho = 0.1$. The expected future MEG bound is ${\\cal O}(10^{-13})$; we take the value $2 \\times 10^{-13}$. The observed muon $g-2$ discrepancy is about $(2.25 \\pm 1) \\times 10^{-9}$; we draw Br($\\mu \\to e \\gamma$) for each assumed muon $g-2$ contribution. The green and yellow bands indicate the $1 \\sigma$ and $3 \\sigma$ ranges of the neutrino $\\theta_{13}$, respectively. In the upper figure, $\\theta_{13}$ is obtained purely from the neutrino Dirac Yukawa splitting. In the lower figure, only a $1/15$ portion of $\\theta_{13}$ is obtained from the neutrino Dirac Yukawa. []{data-label="fig:connection"}](theta13.eps "fig:"){width="70.00000%"} ![ Branching ratio of $\\mu \\to e \\gamma$ as a function of $\\theta_{13}$ for $\\tan \\beta = 10$, $y_{\\nu}= 0.62$, $\\rho = 0.1$. The expected future MEG bound is ${\\cal O}(10^{-13})$; we take the value $2 \\times 10^{-13}$. The observed muon $g-2$ discrepancy is about $(2.25 \\pm 1) \\times 10^{-9}$; we draw Br($\\mu \\to e \\gamma$) for each assumed muon $g-2$ contribution. The green and yellow bands indicate the $1 \\sigma$ and $3 \\sigma$ ranges of the neutrino $\\theta_{13}$, respectively. In the upper figure, $\\theta_{13}$ is obtained purely from the neutrino Dirac Yukawa splitting. In the lower figure, only a $1/15$ portion of $\\theta_{13}$ is obtained from the neutrino Dirac Yukawa. []{data-label="fig:connection"}](theta13partial.eps "fig:"){width="70.00000%"}
![ Contour plot of Higgs mass(red solid line), cLFV(black dashed line), and muon $g-2$(blue dashed line) in $B_N$ - $y_\\nu$ plane for $\\rho = 0.1$, $\\tan \\beta = 30$. []{data-label="fig:BNYN"}](BNYN.eps){width="70.00000%"}
The anomalous magnetic moment of the muon (muon $g-2$) has a long-standing, sizable deviation from the SM prediction. The observed value is [@Bennett:2006fi] [$$\\begin{split}a_\\mu({\\rm Exp})=11 659 208.9(6.3)\\times10^{-10}, \\end{split}$$]{} whereas the SM prediction [@Hagiwara:2011af] is [$$\\begin{split}a_\\mu({\\rm SM})=11 659 182.8(4.9)\\times10^{-10}\\end{split}$$]{} so a new physics contribution may explain the 3.3 $\\sigma$ discrepancy, [$$\\begin{split}\\delta a_\\mu\\equiv a_\\mu({\\rm Exp})-a_\\mu({\\rm SM})=(26.1\\pm 8.0)\\times10^{-10}.\\end{split}$$]{} In the context of SUSY [@Moroi:1995yh; @Cho:2011rk], the muon $g-2$ has the same Feynman diagram structure as the cLFV process $\\mu \\to e\\gamma$. The crucial difference is that the muon $g-2$ is a flavor-conserving quantity, while $\\mu \\to e\\gamma$ violates the lepton flavors $L_\\mu$ and $L_e$. Therefore, $\\delta a_\\mu$ and ${\\rm Br}(\\mu \\to e\\gamma)$ have a strong correlation [@Hisano:2001qz], [$$\\begin{split} {\\rm Br}(\\mu \\to e\\gamma)\\simeq 3\\times10^{-5}\\Big(\\frac{\\delta a_\\mu^{\\rm SUSY}}{10^{-9}}\\Big)^2\\Big(\\frac{(m_L^2)_{12}}{m_{\\rm SUSY}^2}\\Big)^2.\\end{split}$$]{} Moreover, the neutrino Dirac Yukawa $Y_\\nu$ contains information on the neutrino oscillation observables. Since we consider a model where the parameters of $Y_\\nu$ are related to $\\theta_{13}$ and cLFV, we have a strong correlation between Br$(\\mu \\to e \\gamma)$, $\\theta_{13}$, and the muon $g-2$, as discussed in [@Hisano:2001qz].
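A direct reading of the correlation formula above, with $\\delta a_\\mu^{\\rm SUSY}$ fixed to the central value of the discrepancy and a few hypothetical mass insertions, already displays the tension discussed below:

```python
da_mu = 2.61e-9                              # take the full discrepancy from SUSY
for delta_12 in (1e-2, 1e-3, 1e-4):          # hypothetical (m_L^2)_12 / m_SUSY^2
    Br = 3e-5 * (da_mu / 1e-9)**2 * delta_12**2
    print(f"delta_12 = {delta_12:.0e}  ->  Br(mu -> e gamma) ~ {Br:.1e}")
# the current MEG bound Br < 2.4e-12 then requires delta_12 of order 1e-4 or below
```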
Fig. \\[fig:connection\\] summarises the result. Both the muon anomalous magnetic moment and cLFV are functions of $\\tan \\beta/M^2$, where $M$ is the typical supersymmetry breaking scale. The cLFV has an extra suppression proportional to $(m_{L}^2)_{12}$. The $S_4$ flavor model discussed here is constructed so that the neutrino Dirac Yukawa matrix is proportional to the identity matrix and does not provide any off-diagonal entry in the slepton mass squared matrix if $\\theta_{13}$ vanishes. The recently measured $\\theta_{13} \\sim 0.15$ provides extra information, depending on the origin of the modification responsible for the nonzero $\\theta_{13}$.
If the full $\\theta_{13}$ is explained by the degeneracy lifting of the neutrino Dirac Yukawa matrix and the entire discrepancy of the muon anomalous magnetic moment is to be explained by light sleptons, the current MEG bound requires $\\theta_{13}$ to be smaller than 0.01, which is incompatible with the recent observation. The parameter space consistent with both the $\\mu \\to e \\gamma$ bound and the measured $\\theta_{13}$ predicts a muon anomalous magnetic moment at least 20 times smaller than what is needed.
If the nonzero $\\theta_{13}$ is entirely generated by modifying the neutrino Majorana mass matrix, there would be no cLFV even for sizeable $\\theta_{13}$. In reality, the subleading corrections in the simple flavor model would appear in both sectors, and the observed $\\theta_{13}$ would be a combined result of various sources. The bottom plot of Fig. \\[fig:connection\\] shows the hybrid case in which only $1/15$ of $\\theta_{13}$ comes from the neutrino Dirac Yukawa modification. In this case, $\\theta_{13}$ and the muon anomalous magnetic moment can be explained at the same time. The $\\mu \\to e \\gamma$ bound is satisfied, and the consistent region can be probed by the planned future MEG experiment, as it predicts a $\\mu \\to e \\gamma$ branching ratio larger than the expected sensitivity.
Fig. \\[fig:BNYN\\] shows the tension between the muon $g-2$ and the Higgs mass. Even if we take the model in which the neutrino Dirac Yukawa remains proportional to the identity matrix, such that no cLFV constraints apply, a 125 GeV Higgs mass needs $B_N$ much larger than 300 TeV. Then the sleptons are too heavy and the muon $g-2$ contribution is much smaller than $10^{-9}$. The figure also shows the interesting feature that the off-diagonal elements of $m_L^2$ vanish at around $y_\\nu = 0.3$, so the cLFV bounds are very weak there.
Conclusion
==========
We considered the right-handed neutrinos as messengers of supersymmetry breaking in minimal gauge mediation. The direct coupling of the neutrino messengers with the Higgs field $H_u$ and the lepton doublets $L_i$ provides a soft trilinear $A$ term for the top Yukawa and can help increase the light Higgs mass by realising the maximal stop mixing scenario. We call this setup ‘neutrino assisted gauge mediation’. The Yukawa mediation from the neutrino messengers also appears in the soft scalar masses of the Higgs $H_u$ and the lepton doublets $L_i$. At the same time it affects, at two loops, the soft scalar masses of the fields which couple to $H_u$ and $L_i$. Among those, the stop mass squared gets the largest correction, as the top Yukawa coupling is of order one. For $y_{\\nu}$ slightly larger than $0.7$, the correction is big enough to make the stop tachyonic. Therefore, this setup realises a natural supersymmetry spectrum. At the same time, maximal mixing is achieved by two effects, a large $A_t$ and a small $m_{\\tilde{t}}^2$, at around $y_\\nu \\sim 0.7$. In general this effect allows us to explain the observed Higgs mass at around 125 GeV using a stop mass of around 1 TeV. Compared to the case when the neutrino assistance is turned off ($y_\\nu=0$), the Higgs mass is enhanced by about 5 GeV.
In general, off-diagonal entries of the slepton mass squared $m_{L}^2$ appear at the messenger scale and can induce charged lepton flavor violating processes. The detailed quantitative prediction of cLFV depends strongly on the flavor model building. We provided a representative model based on the $S_4$ flavor symmetry in which the neutrino Dirac Yukawa can be proportional to the identity if $\\theta_{13}=0$. For nonzero $\\theta_{13}$, two options were considered. First, the total $\\theta_{13}$ can be explained by a modification of the neutrino Dirac Yukawa matrix. Second, $\\theta_{13}$ can be explained by modifying the Majorana mass matrix of the neutrinos. For the former, a very stringent bound on the slepton mass comes from $\\mu \\to e \\gamma$, and the sleptons should be heavier than $2 \\sim 4$ TeV, depending on $\\tan \\beta$. For a slepton mass at around 2 TeV with $\\tan \\beta = 10$, $\\mu \\to e \\gamma$ is just below the current experimental bound, and we expect $\\mu \\to e \\gamma$ to be observed in the near future.
Even in the second case, in which we can safely avoid the cLFV constraints, neutrino assisted gauge mediation (in its minimal form with one copy of ${\\bf 5}$ and ${\\bf \\bar{5}}$ messengers) sets a lower bound on the slepton mass needed to explain the Higgs mass. A 1 $\\sim$ 2 TeV slepton mass in turn sets an upper bound on the possible contribution to the muon anomalous magnetic moment, $a_\\mu \\sim 10^{-10}$.
In this paper we proposed neutrino assisted gauge mediation and showed a possible way to avoid the strong cLFV constraints. Even then, the current scheme is in tension with the muon anomalous magnetic moment, which favours a lighter slepton. The extension of the minimal neutrino assisted gauge mediation to multiple messengers might ameliorate the tension between the spectrum needed to explain the Higgs mass and the muon anomalous magnetic moment.
This work is supported by the NRF of Korea No. 2011-0017051.
Appendix 0: Sparticle Spectrum Sample Point {#sec:App0 .unnumbered}
===========================================
------------------------------------------------------------------- -------------------------------------------------- --------------------------------------------------
$(\\tan \\beta = 10 , B_N = 360 {\\,\\textrm{TeV}})$ $(\\tan \\beta = 30 , B_N = 300 {\\,\\textrm{TeV}})$
\\[0.2em\\]
\\[-1.1em\\] $\\tilde{\\nu}_e, \\tilde{\\nu}_{\\mu}, \\tilde{\\nu}_{\\tau}$ 2957, 2961, 3013 2429, 2465, 2502
\\[0.2em\\] $\\tilde{e}_1, \\tilde{\\mu}_1, \\tilde{\\tau}_1$ 1364, 1364, 1333 1139, 1138, 880
\\[0.2em\\] $\\tilde{e}_2, \\tilde{\\mu}_2, \\tilde{\\tau}_2$ 3013, 2962, 2954 2503, 2467, 2427
\\[0.2em\\] $\\tilde{u}_1 ,\\tilde{c}_1, \\tilde{t}_1$ 2827, 2827, 634 2384, 2384, 637
\\[0.2em\\] $\\tilde{d}_1, \\tilde{s}_1, \\tilde{b}_1$ 2853, 2853, 2820 2406, 2406, 2283
\\[0.2em\\] $\\tilde{u}_2, \\tilde{c}_2, \\tilde{t}_2$ 3177, 3177, 2252 2675, 2675, 1868
\\[0.2em\\] $\\tilde{d}_2, \\tilde{s}_2 ,\\tilde{b}_2$ 3178, 3178, 2297 2676, 2676, 1893
\\[0.2em\\] $h_0, A, H_0 ,H_{\\pm} $ 125, 1705, 1705, 1707 125, 1031, 1031, 1034
\\[0.2em\\] $\\chi_1, \\chi_2, \\chi_3 ,\\chi_4 $ 487, 850, -892, 980 405, 713, -758, 829
\\[0.2em\\] $\\chi_{+}, \\chi_{-}$ 849, 980 712, 829
\\[0.2em\\] $\\tilde{g} $ 2514 2126
\\[0.2em\\]
------------------------------------------------------------------- -------------------------------------------------- --------------------------------------------------
: Sparticle spectrum at the point giving $125 {\\,\\textrm{GeV}}$ Higgs mass with the lowest $B_N$[]{data-label="table:Spectrum125"}
------------------------------------------------------------------- ------------------------------------------------- --------------------------------------------------
                                                                      $(\\tan \\beta = 10, B_N = 240 {\\,\\textrm{TeV}})$    $(\\tan \\beta = 30 , B_N = 200 {\\,\\textrm{TeV}})$
$\\tilde{\\nu}_e, \\tilde{\\nu}_{\\mu}, \\tilde{\\nu}_{\\tau}$               1971, 1974, 2009                                    1657, 1682, 1707
$\\tilde{e}_1, \\tilde{\\mu}_1, \\tilde{\\tau}_1$                         915, 915, 894                                       770, 769, 590
$\\tilde{e}_2, \\tilde{\\mu}_2, \\tilde{\\tau}_2$                         2010, 1976, 1970                                    1709, 1684, 1657
$\\tilde{u}_1, \\tilde{c}_1, \\tilde{t}_1$                              1937, 1937, 521                                     1633, 1633, 404
$\\tilde{d}_1, \\tilde{s}_1, \\tilde{b}_1$                              1954, 1954, 1931                                    1650, 1650, 1564
$\\tilde{u}_2, \\tilde{c}_2, \\tilde{t}_2$                              2169, 2169, 1569                                    1828, 1828, 1286
$\\tilde{d}_2, \\tilde{s}_2, \\tilde{b}_2$                              2170, 2170, 1586                                    1829, 1829, 1291
$h_0, A, H_0, H_{\\pm}$                                                123, 1220, 1220, 1223                               123, 679, 679, 684
$\\chi_1, \\chi_2, \\chi_3, \\chi_4$                                     322, 600, -729, 757                                 267, 466, -520, 579
$\\chi_{+}, \\chi_{-}$                                                  600, 757                                            465, 579
$\\tilde{g}$                                                           1737                                                1470
------------------------------------------------------------------- ------------------------------------------------- --------------------------------------------------
: Sparticle spectrum (masses in GeV) at the point giving $123 {\\,\\textrm{GeV}}$ Higgs mass with the lowest $B_N$[]{data-label="table:Spectrum123"}
Appendix A: representations of $S_4$ symmetry and tensor products {#sec:AppA .unnumbered}
=================================================================
$S_4$ is a non-abelian discrete symmetry group consisting of all permutations of four objects. For a review, see [@Ishimori:2010au]. The irreducible representations of $S_4$ are two singlets ${\\bf 1}, {\\bf 1^\\prime}$, one doublet ${\\bf 2}$, and two triplets ${\\bf 3}, {\\bf 3^\\prime}$. The tensor products among them are given as follows: [$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2 \\\\
x_3
\\end{array}\\right)_{\\bf 3} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2 \\\\
y_3
\\end{array}\\right)_{\\bf 3}&= (x_1y_1+x_2y_2+x_3y_3)_{\\bf 1}+
\\left(
\\begin{array}{c}
x_1 y_1+\\omega x_2y_2 +\\omega^2 x_3y_3\\\\
x_1 y_1+\\omega^2 x_2y_2 +\\omega x_3y_3
\\end{array}\\right)_{\\bf 2}
\\\\
&+
\\left(
\\begin{array}{c}
x_2y_3 + x_3y_2\\\\
x_3 y_1+ x_1y_3 \\\\
x_1y_2+x_2y_1
\\end{array}\\right)_{\\bf 3}
+
\\left(
\\begin{array}{c}
x_2y_3 - x_3y_2\\\\
x_3 y_1- x_1y_3 \\\\
x_1y_2-x_2y_1
\\end{array}\\right)_{\\bf 3^\\prime} \\end{split}$$]{}
[$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2 \\\\
x_3
\\end{array}\\right)_{\\bf 3^\\prime} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2 \\\\
y_3
\\end{array}\\right)_{\\bf 3^\\prime}&= (x_1y_1+x_2y_2+x_3y_3)_{\\bf 1}+
\\left(
\\begin{array}{c}
x_1 y_1+\\omega x_2y_2 +\\omega^2 x_3y_3\\\\
x_1 y_1+\\omega^2 x_2y_2 +\\omega x_3y_3
\\end{array}\\right)_{\\bf 2}
\\\\
&+
\\left(
\\begin{array}{c}
x_2y_3 + x_3y_2\\\\
x_3 y_1+ x_1y_3 \\\\
x_1y_2+x_2y_1
\\end{array}\\right)_{\\bf 3}
+
\\left(
\\begin{array}{c}
x_2y_3 - x_3y_2\\\\
x_3 y_1- x_1y_3 \\\\
x_1y_2-x_2y_1
\\end{array}\\right)_{\\bf 3^\\prime} \\end{split}$$]{}
[$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2 \\\\
x_3
\\end{array}\\right)_{\\bf 3} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2 \\\\
y_3
\\end{array}\\right)_{\\bf 3^\\prime}&= (x_1y_1+x_2y_2+x_3y_3)_{\\bf 1^\\prime}+
\\left(
\\begin{array}{c}
x_1 y_1+\\omega x_2y_2 +\\omega^2 x_3y_3\\\\
-(x_1 y_1+\\omega^2 x_2y_2 +\\omega x_3y_3)
\\end{array}\\right)_{\\bf 2}
\\\\
&+
\\left(
\\begin{array}{c}
x_2y_3 + x_3y_2\\\\
x_3 y_1+ x_1y_3 \\\\
x_1y_2+x_2y_1
\\end{array}\\right)_{\\bf 3^\\prime}
+
\\left(
\\begin{array}{c}
x_2y_3 - x_3y_2\\\\
x_3 y_1- x_1y_3 \\\\
x_1y_2-x_2y_1
\\end{array}\\right)_{\\bf 3} \\end{split}$$]{}
[$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2
\\end{array}\\right)_{\\bf 2} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2
\\end{array}\\right)_{\\bf 2}&= (x_1y_2+x_2y_1)_{\\bf 1}+ (x_1y_2-x_2y_1)_{\\bf 1^\\prime}+
\\left(
\\begin{array}{c}
x_2y_2 \\\\
x_1 y_1
\\end{array}\\right)_{\\bf 2}
\\end{split}$$]{}
[$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2
\\end{array}\\right)_{\\bf 2} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2 \\\\
y_3
\\end{array}\\right)_{\\bf 3}=
\\left(
\\begin{array}{c}
(x_1+x_2)y_1\\\\
(\\omega^2 x_1+\\omega x_2) y_2 \\\\
(\\omega x_1 + \\omega^2 x_2)y_3
\\end{array}\\right)_{\\bf 3} +
\\left(
\\begin{array}{c}
(x_1-x_2)y_1\\\\
(\\omega^2 x_1-\\omega x_2) y_2 \\\\
(\\omega x_1 - \\omega^2 x_2)y_3
\\end{array}\\right)_{\\bf 3^\\prime} \\end{split}$$]{}
[$$\\begin{split}\\left(
\\begin{array}{c}
x_1 \\\\
x_2
\\end{array}\\right)_{\\bf 2} \\times
\\left(
\\begin{array}{c}
y_1 \\\\
y_2 \\\\
y_3
\\end{array}\\right)_{\\bf 3^\\prime}=
\\left(
\\begin{array}{c}
(x_1+x_2)y_1\\\\
(\\omega^2 x_1+\\omega x_2) y_2 \\\\
(\\omega x_1 + \\omega^2 x_2)y_3
\\end{array}\\right)_{\\bf 3^\\prime} +
\\left(
\\begin{array}{c}
(x_1-x_2)y_1\\\\
(\\omega^2 x_1-\\omega x_2) y_2 \\\\
(\\omega x_1 - \\omega^2 x_2)y_3
\\end{array}\\right)_{\\bf 3} \\end{split}$$]{} and trivially, we have ${\\bf 3} \\times {\\bf 1^\\prime} ={\\bf 3^\\prime}$, ${\\bf 3^\\prime} \\times {\\bf 1^\\prime} ={\\bf 3}$, and ${\\bf 2} \\times {\\bf 1^\\prime} ={\\bf 2}$.
Appendix B: Remarks on the flavon vacuum stability {#sec:AppB .unnumbered}
==================================================
In [@BenTov:2012tg], it was shown that the $A_4$ triplet flavon vacuum in the directions $(1,1,1)$ and $(1,0,0)$ (the directions $(0,1,0)$ and $(0,0,1)$ are equivalent) is favored over other directions such as $(1,1,0)$. Since $A_4$ is the subgroup of $S_4$ composed of even permutations, similar arguments hold here. In this Appendix, we argue that the triplet flavon directions favored in the $A_4$ model are also favored in the $S_4$ model, and that the $S_4$ doublet vacuum favors the $(1,1)$ direction.
Rigid SUSY simplifies the discussion, because the potential $V$ is minimized at $\\langle V \\rangle=0$. On the other hand, extra symmetries like $Z_4$ and U(1)$_L$ further restrict the possible terms in the superpotential. Suppose that the U(1)$_L$ symmetry is discretized to, for example, a $Z_8$ symmetry. In this case, only the quartic terms $\\Phi^4$ and $\\chi^4$ are allowed. Let us assume that the breaking of the extra symmetries introduces quadratic terms, like $m_1\\Phi^2$ or $m_2 \\chi^2$. To achieve this, let us consider ‘$Z_4$ breaking singlets’ $\\psi_1$, $\\bar{\\psi}_1$ and ‘lepton number breaking singlets’ $\\psi_2$, $\\bar{\\psi}_2$ with $S_4 \\times Z_4 \\times{\\rm U(1)}_L $ quantum numbers [$$\\begin{split}&\\psi_1: ({\\bf 1},3,0),~~~\\bar{\\psi}_1 : ({\\bf 1},1,0),
\\\\
& \\psi_2: ({\\bf 1},0,2),~~~\\bar{\\psi}_2 : ({\\bf 1},0,6).\\end{split}$$]{} They do not combine with $\\bar{E}LH_d$, $NLH_u$ and $NN$ to make singlets under all symmetries imposed. Then, they can couple to $\\Phi^2$ or $\\chi^2$ such that a superpotential is given by [$$\\begin{split}W(\\psi_1, \\bar{\\psi}_1, \\psi_2, \\bar{\\psi}_2)=&\\frac{1}{\\Lambda}[\\Phi^2\\bar{\\psi}_1\\psi_1 + \\chi^2\\bar{\\psi}_2\\psi_2]
\\\\
&-M_1 \\bar{\\psi}_1\\psi_1 +\\frac{1}{\\Lambda}[\\kappa_1(\\bar{\\psi}_1\\psi_1)^2+\\kappa_2 (\\psi_1)^4 + \\kappa_3 (\\bar{\\psi}_1)^4]
\\\\
&-M_2 \\bar{\\psi}_2\\psi_2 +\\frac{1}{\\Lambda}[\\kappa_1^\\prime(\\bar{\\psi}_2\\psi_2)^2+\\kappa_2^\\prime (\\psi_2)^4 + \\kappa_3^\\prime (\\bar{\\psi}_2)^4] .\\end{split}$$]{} In this superpotential, $\\bar{\\psi_1}{\\psi}_1$ and $\\bar{\\psi_2}{\\psi}_2$ pairs have VEVs and they provide $m_1 \\Phi^2+m_2 \\chi^2$ terms. With this setup, the triplet superpotential has the form of [$$\\begin{split}W&=mS^2+\\frac{\\lambda_1}{\\Lambda} (x^2+y^2+z^2)^2+\\frac{\\lambda_2}{\\Lambda}(x^2+\\omega y^2+\\omega^2 z^2)(x^2+\\omega^2 y^2+\\omega z^2)
\\\\
&+\\frac{\\lambda_3}{\\Lambda}(xy+yz+zx)^2\\end{split}$$]{} where $S=(x, y, z)$ represents the generic $S_4$ triplet such as $\\Phi$ or $\\chi$. Note also that the superpotential has an accidental $Z_2$ symmetry under which $\\psi_{1,2}$ and $\\bar{\\psi}_{1,2}$ are odd whereas other fields are even. If this $Z_2$ symmetry is imposed, $(\\Phi^2\\psi_1/\\Lambda^3)\\bar{E}LH_d$ and $(\\Phi^2\\psi_2/\\Lambda^2)NN$ terms, which change the flavor structure in the subleading orders are forbidden. In this case, charged lepton Yukawa coupling structure in dimension-4 operator is preserved up to dimension-6 operator whereas Majorana mass structure in dimension-3 operator is preserved up to dimension-5 operator so corrections to them are highly suppressed.
Each term of the F-term potential $V=|\\partial W/\\partial x|^2+|\\partial W/\\partial y|^2+|\\partial W/\\partial z|^2$ is given by [$$\\begin{split}&\\frac{\\partial W}{\\partial x}=mx+\\frac{4\\lambda_1}{\\Lambda}x(x^2+y^2+z^2)+\\frac{2\\lambda_2}{\\Lambda}x(2x^2-y^2-z^2)+\\frac{2\\lambda_3}{\\Lambda}(y+z)(xy+yz+zx)
\\\\
&\\frac{\\partial W}{\\partial y}=my+\\frac{4\\lambda_1}{\\Lambda}y(x^2+y^2+z^2)+\\frac{2\\lambda_2}{\\Lambda}y(2y^2-z^2-x^2)+\\frac{2\\lambda_3}{\\Lambda}(z+x)(xy+yz+zx)
\\\\
&\\frac{\\partial W}{\\partial z}=mz+\\frac{4\\lambda_1}{\\Lambda}z(x^2+y^2+z^2)+\\frac{2\\lambda_2}{\\Lambda}z(2z^2-x^2-y^2)+\\frac{2\\lambda_3}{\\Lambda}(x+y)(xy+yz+zx). \\end{split}$$]{} A stable vacuum requires that these three terms vanish simultaneously. For the vacuum $\\langle S \\rangle =v(1,1,1)$, the three terms give the same condition, [$$\\begin{split}12(\\lambda_1+\\lambda_3)\\Big(\\frac{v^3}{\\Lambda}\\Big)+mv=0\\end{split}$$]{} so the vacuum is stabilized at $v^2=-m\\Lambda/[12(\\lambda_1+\\lambda_3)]$. For the vacuum $\\langle S \\rangle =v(1,0,0)$, the second and third terms vanish trivially and the first term gives [$$\\begin{split}4(\\lambda_1+\\lambda_2)\\Big(\\frac{v^3}{\\Lambda}\\Big)+mv=0\\end{split}$$]{} so the vacuum is stabilized at $v^2=-m\\Lambda/[4(\\lambda_1+\\lambda_2)]$. The vacua in the directions $(0,1,0)$ and $(0,0,1)$ give the same result by the permutation property of $S_4$. On the other hand, the vacuum $\\langle S \\rangle =v(1,1,0)$ gives two conditions, [$$\\begin{split}&\\frac{v^3}{\\Lambda}(8\\lambda_1+2\\lambda_2+2\\lambda_3)+mv=0
\\\\
&\\lambda_3 v^3=0.\\end{split}$$]{} If $\\lambda_3$ is not forbidden by another symmetry, $v=0$ is the only solution and a nontrivial vacuum cannot develop.
$S_4$ doublet stabilization can be discussed in the same way. The renormalizable superpotential for the doublet $(x,y)$ is written as [$$\\begin{split}W=m (xy)+\\lambda (x^3+y^3)\\end{split}$$]{} and the stabilization condition [$$\\begin{split}&\\frac{\\partial W}{\\partial x}=2my+3\\lambda x^2=0
\\\\
&\\frac{\\partial W}{\\partial y}=2mx+3\\lambda y^2=0\\end{split}$$]{} requires that $x=y$. So the vacuum choice for Eq. (\\[eq:modYnu\\]) is stable.
Appendix C: Comment on Kähler potential corrections {#sec:AppC .unnumbered}
===================================================
In our setup, Yukawa couplings are constructed from non-renormalizable dimension-4 superpotential with several flavons. These flavons also appear in the non-renormalizable Kähler potential and kinetic terms are written in the form of [$$\\begin{split}K_{i\\bar{j}}\\partial_\\mu \\phi^{\\bar{j} \\dagger} \\partial^\\mu \\phi^i
-iK_{i\\bar{j}}\\bar{\\psi}^{\\bar j}\\bar{\\sigma}_\\mu \\partial^\\mu \\psi^i\\end{split}$$]{} where $\\phi$ and $\\psi$ represent bosonic and fermionic fields, respectively. The Kähler potential of the charged lepton supermultiplet $L$ is given by [$$\\begin{split}K=\\Big[1+a_1 \\frac{\\Phi^\\dagger \\Phi}{\\Lambda^2} +a_2 \\frac{\\chi^\\dagger \\chi}{\\Lambda^2} \\Big] L^\\dagger L \\Big|_{S_4 ~{\\rm singlets}}+\\cdots\\end{split}$$]{} and similar terms can be written for other fields, $\\bar{E}^\\dagger \\bar{E}$, $N^\\dagger N$, $H_{u,d}^\\dagger H_{u,d}$, etc. This generates rather complicated terms. For example, from $(\\Phi_{\\bf 3}^\\dagger \\Phi_{\\bf 3}/\\Lambda^2) L^\\dagger L$, where the $\\Phi_{\\bf 3}$ vacuum is given by $v_2(1,1,1)$, we have [$$\\begin{split}a_1\\frac{\\Phi_{\\bf 3}^\\dagger \\Phi_{\\bf 3}}{\\Lambda^2} L^\\dagger L \\Big|_{S_4 ~{\\rm singlets}}=& a_{1,1}\\frac{v_2^2}{\\Lambda^2} (L_1^\\dagger L_1 + L_2^\\dagger L_2 + L_3^\\dagger L_3)
\\\\
&+a_{1,2}\\frac{v_2^2}{\\Lambda^2}\\Big[ L_2^\\dagger L_3 + L_3^\\dagger L_2
+ L_3^\\dagger L_1+ L_1^\\dagger L_3 + L_1^\\dagger L_2 + L_2^\\dagger L_1 \\Big].\\end{split}$$]{} Since $\\langle \\Phi \\rangle/\\Lambda=v_2/\\Lambda$ is responsible for charged lepton Yukawa couplings, we see $4\\pi v_2/\\Lambda \\gtrsim Y_\\tau=m_\\tau/[(v/\\sqrt2)\\cos\\beta] \\sim 0.1$ for $\\tan\\beta =10$. On the other hand, $\\chi_{\\bf 3}$ has another vacuum direction, $w_2(0,1,0)$. Then [$$\\begin{split}a_2\\frac{\\chi^\\dagger \\chi}{\\Lambda^2}L^\\dagger L\\Big|_{S_4 ~{\\rm singlets}}=&a_{2,1}\\frac{w_2^2}{\\Lambda^2}(L_1^\\dagger L_1 + L_2^\\dagger L_2 + L_3^\\dagger L_3)
\\\\
&+a_{2,2}\\frac{w_2^2}{\\Lambda^2}(-L_1^\\dagger L_1 + L_2^\\dagger L_2 - L_3^\\dagger L_3)\\end{split}$$]{} so it just rescales the fields. Moreover, since the see-saw scale is about $10^{14}{\\,\\textrm{GeV}}$, the effect is suppressed, $4\\pi\\chi/\\Lambda \\sim 0.01$, with $\\Lambda$ the GUT scale. In the same way, the doublet and singlet flavons in the Kähler potential just contribute to field rescalings.
Physical fields are defined with canonical kinetic terms, so field redefinitions are required, and these affect the flavor structures in principle. In our work, however, such effects are neglected by assuming small coefficients $a_{1,2}$. For example, the diagonalization of $Y_E$ demonstrated above is not affected if $a_1 ( v_2^2/\\Lambda^2) \\lesssim (m_e/m_\\tau) \\sim 3 \\times 10^{-4}$, [*i.e.*]{} $a_1 \\lesssim 3 $.
On the other hand, mixings between flavons in the Kähler potential can be dangerous. For example, a kinetic mixing between flavons such as $\\bar{\\psi}_1^\\dagger \\psi_2^\\dagger \\Phi_{\\bf 3}^\\dagger \\chi_{\\bf 3}/\\Lambda^2$ can introduce a small correction to $Y_E$ or $M_N$ with an unwanted $S_4$ triplet vacuum direction. Such an effect is suppressed by $\\bar{\\psi}_1^\\dagger \\psi_2^\\dagger/\\Lambda^2$ and can be further suppressed with a tiny coefficient.
[99]{}
G. Aad [*et al.*]{} \\[ATLAS Collaboration\\], Phys. Lett. B [**716**]{}, 1 (2012) \\[arXiv:1207.7214 \\[hep-ex\\]\\]. S. Chatrchyan [*et al.*]{} \\[CMS Collaboration\\], Phys. Lett. B [**716**]{}, 30 (2012) \\[arXiv:1207.7235 \\[hep-ex\\]\\]. M. Dine and A. E. Nelson, Phys. Rev. D [**48**]{}, 1277 (1993) \\[hep-ph/9303230\\].
M. Dine, A. E. Nelson and Y. Shirman, Phys. Rev. D [**51**]{}, 1362 (1995) \\[hep-ph/9408384\\]. M. Dine, A. E. Nelson, Y. Nir and Y. Shirman, Phys. Rev. D [**53**]{}, 2658 (1996) \\[hep-ph/9507378\\]. G. F. Giudice and R. Rattazzi, Phys. Rept. [**322**]{}, 419 (1999) \\[hep-ph/9801271\\]. M. A. Ajaib, I. Gogoladze, F. Nasir and Q. Shafi, Phys. Lett. B [**713**]{}, 462 (2012) \\[arXiv:1204.2856 \\[hep-ph\\]\\]. J. L. Feng, Z. ’e. Surujon and H. -B. Yu, Phys. Rev. D [**86**]{}, 035003 (2012) \\[arXiv:1205.6480 \\[hep-ph\\]\\]. K. J. Bae, K. Choi, E. J. Chun, S. H. Im, C. B. Park and C. S. Shin, arXiv:1208.2555 \\[hep-ph\\]. S. P. Martin, Phys. Rev. D [**81**]{}, 035004 (2010) \\[arXiv:0910.2732 \\[hep-ph\\]\\].
S. P. Martin and J. D. Wells, Phys. Rev. D [**86**]{}, 035017 (2012) \\[arXiv:1206.2956 \\[hep-ph\\]\\]. K. J. Bae, T. H. Jung and H. D. Kim, arXiv:1208.3748 \\[hep-ph\\].
Z. Kang, T. Li, T. Liu, C. Tong and J. M. Yang, arXiv:1203.2336 \\[hep-ph\\].
N. Craig, S. Knapen, D. Shih and Y. Zhao, arXiv:1206.4086 \\[hep-ph\\]. Y. Shadmi and P. Z. Szabo, JHEP [**1206**]{}, 124 (2012) \\[arXiv:1103.0292 \\[hep-ph\\]\\]. A. Albaid, K. S. Babu and K. S. Babu, arXiv:1207.1014 \\[hep-ph\\]. M. Abdullah, I. Galon, Y. Shadmi and Y. Shirman, arXiv:1209.4904 \\[hep-ph\\]. J. L. Evans, M. Ibe, S. Shirai and T. T. Yanagida, Phys. Rev. D [**85**]{}, 095004 (2012) \\[arXiv:1201.2611 \\[hep-ph\\]\\]. J. L. Evans, M. Ibe and T. T. Yanagida, Phys. Lett. B [**705**]{}, 342 (2011) \\[arXiv:1107.3006 \\[hep-ph\\]\\]. M. Buican, P. Meade, N. Seiberg and D. Shih, JHEP [**0903**]{}, 016 (2009) \\[arXiv:0812.3668 \\[hep-ph\\]\\]. R. Dermisek and H. D. Kim, Phys. Rev. Lett. [**96**]{}, 211803 (2006) \\[hep-ph/0601036\\]. R. Dermisek, H. D. Kim and I. -W. Kim, JHEP [**0610**]{}, 001 (2006) \\[hep-ph/0607169\\].
K. Choi, E. J. Chun, H. D. Kim, W. I. Park and C. S. Shin, Phys. Rev. D [**83**]{}, 123503 (2011) \\[arXiv:1102.2900 \\[hep-ph\\]\\]. G. R. Dvali, G. F. Giudice and A. Pomarol, Nucl. Phys. B [**478**]{}, 31 (1996) \\[hep-ph/9603238\\]. G. F. Giudice, H. D. Kim and R. Rattazzi, Phys. Lett. B [**660**]{}, 545 (2008) \\[arXiv:0711.4448 \\[hep-ph\\]\\]. F. R. Joaquim and A. Rossi, Phys. Rev. Lett. [**97**]{}, 181801 (2006) \\[hep-ph/0604083\\]. R. N. Mohapatra, N. Okada and H. -B. Yu, Phys. Rev. D [**78**]{}, 075011 (2008) \\[arXiv:0807.4524 \\[hep-ph\\]\\]. P. Fileviez Perez, H. Iminniyaz, G. Rodrigo and S. Spinner, Phys. Rev. D [**81**]{}, 095013 (2010) \\[arXiv:0911.1360 \\[hep-ph\\]\\].
F. Borzumati and A. Masiero, Phys. Rev. Lett. [**57**]{}, 961 (1986). M. Ciuchini, A. Masiero, P. Paradisi, L. Silvestrini, S. K. Vempati and O. Vives, Nucl. Phys. B [**783**]{}, 112 (2007) \\[hep-ph/0702144 \\[HEP-PH\\]\\].
P. F. Harrison, D. H. Perkins and W. G. Scott, Phys. Lett. B [**530**]{}, 167 (2002) \\[hep-ph/0202074\\]. Y. Lin, Nucl. Phys. B [**824**]{}, 95 (2010) \\[arXiv:0905.3534 \\[hep-ph\\]\\]. H. Ishimori and E. Ma, Phys. Rev. D [**86**]{}, 045030 (2012) \\[arXiv:1205.0075 \\[hep-ph\\]\\]. G. Altarelli, F. Feruglio, L. Merlo and E. Stamou, JHEP [**1208**]{}, 021 (2012) \\[arXiv:1205.4670 \\[hep-ph\\]\\]. S. F. King, Phys. Lett. B [**718**]{}, 136 (2012) \\[arXiv:1205.0506 \\[hep-ph\\]\\]. P. Minkowski, Phys. Lett. B [**67**]{}, 421 (1977). T. Yanagida, Conf. Proc. C [**7902131**]{}, 95 (1979). T. Yanagida, Prog. Theor. Phys. [**64**]{}, 1103 (1980). M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C [**790927**]{}, 315 (1979). R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. [**44**]{}, 912 (1980). G. F. Giudice, P. Paradisi and A. Strumia, Phys. Lett. B [**694**]{}, 26 (2010) \\[arXiv:1003.2388 \\[hep-ph\\]\\]. G. F. Giudice and R. Rattazzi, Nucl. Phys. B [**511**]{}, 25 (1998) \\[hep-ph/9706540\\]. N. Arkani-Hamed, G. F. Giudice, M. A. Luty and R. Rattazzi, Phys. Rev. D [**58**]{}, 115005 (1998) \\[hep-ph/9803290\\]. Z. Chacko and E. Ponton, Phys. Rev. D [**66**]{}, 095004 (2002) \\[hep-ph/0112190\\]. D. Grossman and Y. Nir, Phys. Rev. D [**85**]{}, 055004 (2012) \\[arXiv:1111.5751 \\[hep-ph\\]\\]. G. F. Giudice and R. Rattazzi, Nucl. Phys. B [**757**]{}, 19 (2006) \\[hep-ph/0606105\\]. Y. Abe [*et al.*]{} \\[DOUBLE-CHOOZ Collaboration\\], Phys. Rev. Lett. [**108**]{}, 131801 (2012) \\[arXiv:1112.6353 \\[hep-ex\\]\\]. M. Hartz \\[T2K Collaboration\\], arXiv:1201.1846 \\[hep-ex\\]. P. Adamson [*et al.*]{} \\[MINOS Collaboration\\], Phys. Rev. Lett. [**108**]{}, 191801 (2012) \\[arXiv:1202.2772 \\[hep-ex\\]\\]. F. P. An [*et al.*]{} \\[DAYA-BAY Collaboration\\], Phys. Rev. Lett. [**108**]{}, 171803 (2012) \\[arXiv:1203.1669 \\[hep-ex\\]\\]. J. K. Ahn [*et al.*]{} \\[RENO Collaboration\\], Phys. Rev. Lett. [**108**]{}, 191802 (2012) \\[arXiv:1204.0626 \\[hep-ex\\]\\]. X. -G. He, Y. -Y. Keum and R. R. Volkas, JHEP [**0604**]{}, 039 (2006) \\[hep-ph/0601001\\]. X. -G. He and A. Zee, Phys. Lett. B [**645**]{}, 427 (2007) \\[hep-ph/0607163\\]. X. -G. He and A. Zee, Phys. Rev. D [**84**]{}, 053004 (2011) \\[arXiv:1106.4359 \\[hep-ph\\]\\]. Y. BenTov, X. -G. He and A. Zee, arXiv:1208.1062 \\[hep-ph\\].
F. Bazzocchi and L. Merlo, arXiv:1205.5135 \\[hep-ph\\]. J. Beringer [*et al.*]{} \\[Particle Data Group Collaboration\\], Phys. Rev. D [**86**]{}, 010001 (2012). G. L. Fogli, E. Lisi, A. Marrone, D. Montanino, A. Palazzo and A. M. Rotunno, Phys. Rev. D [**86**]{}, 013012 (2012) \\[arXiv:1205.5254 \\[hep-ph\\]\\]. M. C. Gonzalez-Garcia, M. Maltoni, J. Salvado and T. Schwetz, arXiv:1209.3023 \\[hep-ph\\].
J. Hisano, T. Moroi, K. Tobe and M. Yamaguchi, Phys. Rev. D [**53**]{}, 2442 (1996) \\[hep-ph/9510309\\]. E. Arganda and M. J. Herrero, Phys. Rev. D [**73**]{}, 055003 (2006) \\[hep-ph/0510405\\]. J. L. Hewett, H. Weerts, R. Brock, J. N. Butler, B. C. K. Casey, J. Collar, A. de Govea and R. Essig [*et al.*]{}, arXiv:1205.2671 \\[hep-ex\\]. J. Adam [*et al.*]{} \\[MEG Collaboration\\], Phys. Rev. Lett. [**107**]{}, 171801 (2011) \\[arXiv:1107.5547 \\[hep-ex\\]\\]. B. Aubert [*et al.*]{} \\[BABAR Collaboration\\], Phys. Rev. Lett. [**104**]{}, 021802 (2010) \\[arXiv:0908.2381 \\[hep-ex\\]\\]. U. Bellgardt [*et al.*]{} \\[SINDRUM Collaboration\\], Nucl. Phys. B [**299**]{}, 1 (1988). K. Hayasaka, K. Inami, Y. Miyazaki, K. Arinstein, V. Aulchenko, T. Aushev, A. M. Bakich and A. Bay [*et al.*]{}, Phys. Lett. B [**687**]{}, 139 (2010) \\[arXiv:1001.3221 \\[hep-ex\\]\\]. C. Dohmen [*et al.*]{} \\[SINDRUM II. Collaboration\\], Phys. Lett. B [**317**]{}, 631 (1993). W. H. Bertl [*et al.*]{} \\[SINDRUM II Collaboration\\], Eur. Phys. J. C [**47**]{}, 337 (2006). B. O’Leary [*et al.*]{} \\[SuperB Collaboration\\], arXiv:1008.1541 \\[hep-ex\\]. A. Blondel [*et al.*]{}, http://www.psi.ch/mu3e/DocumentsEN/LOI\\_Mu3e\\_PSI.pdf
The PRIME working group, unpublished; LOI to J-PARC 50-GeV PS, LOI-25, http://www-ps.kek.jp/jhf-np/LOIlist/pdf/L25.pdf
A. Abada, D. Das, A. Vicente and C. Weiland, JHEP [**1209**]{}, 015 (2012) \\[arXiv:1206.6497 \\[hep-ph\\]\\]. J. L. Lopez, D. V. Nanopoulos and X. Wang, Phys. Rev. D [**49**]{}, 366 (1994) \\[hep-ph/9308336\\]. U. Chattopadhyay and P. Nath, Phys. Rev. D [**53**]{}, 1648 (1996) \\[hep-ph/9507386\\]. M. Hirsch, F. Staub and A. Vicente, Phys. Rev. D [**85**]{}, 113013 (2012) \\[arXiv:1202.1825 \\[hep-ph\\]\\]. R. Fok and G. D. Kribs, Phys. Rev. D [**82**]{}, 035010 (2010) \\[arXiv:1004.0556 \\[hep-ph\\]\\].
R. Kitano, M. Koike and Y. Okada, Phys. Rev. D [**66**]{}, 096002 (2002) \\[Erratum-ibid. D [**76**]{}, 059902 (2007)\\] \\[hep-ph/0203110\\]. G. W. Bennett [*et al.*]{} \\[Muon G-2 Collaboration\\], Phys. Rev. D [**73**]{}, 072003 (2006) \\[hep-ex/0602035\\].
K. Hagiwara, R. Liao, A. D. Martin, D. Nomura and T. Teubner, J. Phys. G [**38**]{}, 085003 (2011) \\[arXiv:1105.3149 \\[hep-ph\\]\\].
T. Moroi, Phys. Rev. D [**53**]{}, 6565 (1996) \\[Erratum-ibid. D [**56**]{}, 4424 (1997)\\] \\[hep-ph/9512396\\]. G. -C. Cho, K. Hagiwara, Y. Matsumoto and D. Nomura, JHEP [**1111**]{}, 068 (2011) \\[arXiv:1104.1769 \\[hep-ph\\]\\]. J. Hisano and K. Tobe, Phys. Lett. B [**510**]{}, 197 (2001) \\[hep-ph/0102315\\].
H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada and M. Tanimoto, Prog. Theor. Phys. Suppl. [**183**]{}, 1 (2010) \\[arXiv:1003.3552 \\[hep-th\\]\\].
|
Q:
Why do I get a syntax error when I use "given"?
I'm trying to run the following code:
foreach my $k (keys %rec) {
#switch for watchlist figures
given ($k) { #line 93
# Code omitted
when ("p") { #positive breakout
if ($row{cls} > $rec{$k}) {
$email .= "positive";
} # line 104
}
when ("n") { #negative breakout
if ($row{cls} < $rec{$k}) { #line 107
But I get a syntax error:
syntax error at check_watch.pl line 93, near ") {"
syntax error at check_watch.pl line 104, near "}"
Unterminated <> operator at check_watch.pl line 107.
Why?
A:
given and when are only available when either use feature "switch"; or use v5.10; (or some later value) is in effect. Place one of those lines at the top of your source file.
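For example, a minimal self-contained sketch along the lines of the loop in the question (the hash contents are made up purely for illustration, and this assumes a Perl where given/when is still available, roughly 5.10 through 5.36):
use strict;
use warnings;
use v5.10;    # enables say() and the given/when ("switch") feature

my %rec = ( p => 100, n => 50 );    # hypothetical thresholds, illustration only
my %row = ( cls => 120 );

foreach my $k ( keys %rec ) {
    given ($k) {                    # no syntax error once the feature is enabled
        when ("p") { say "positive breakout" if $row{cls} > $rec{$k} }
        when ("n") { say "negative breakout" if $row{cls} < $rec{$k} }
    }
}
On 5.18 and later this still runs but emits experimental warnings for given and when; see the longer answer below for how to silence them.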
A:
It is because the feature is not enabled by default. Add the line:
use 5.10.1;
to the code
A:
Because pre-5.10 versions of Perl 5 don't have the given or when keywords, you're allowed to define custom functions with those names, which of course would have different syntax. To avoid breaking backward-compatibility with programs that do that, given and when are only enabled if you specifically ask for them, by putting either
use 5.010;
or
use feature 'switch';
at the top of a lexical scope you want to use the keywords in. In addition, the semantics keep changing. For example, given was originally designed to use lexical $_ by default, but lexical $_ turns out to be a really poor fit for Perl 5, an issue they're still revising. At some point, given stopped lexicalizing $_, but of course that's a backward-incompatible change. when is (mainly) designed to use ~~, but that operator had very complicated semantics in 5.10; they've been revised once and there are plans to revise them again. (This is why the Perl 5 developers have decided to just mark all new features as 'experimental' when they're first included in a development release).
Because they are experimental, to use them without warnings you also need to include:
no if $] >= 5.017011, warnings => 'experimental::smartmatch';
or add a 5.18 dependency and use:
no warnings 'experimental::smartmatch';
or add a dependency on the experimental CPAN module and use:
use experimental 'smartmatch';
Now, on the other hand, the Perl 5 developers (of course) need people to use given, when, and ~~ in non-production-critical (or thoroughly unit-tested!) code and give them feedback. So definitely do use them if you can.
|
Emiliano Garré
Emiliano Garré (born November 10, 1981 in Buenos Aires, Argentina) is an Argentine footballer currently playing for Luján of the Primera C in Argentina.
Personal life
He is the second son of the Argentine coach Oscar Garré and is the brother of Argentine footballer Ezequiel Garré.
Teams
Huracán 1996-1998
Campomaiorense 1999
Huachipato 2000-2002
Audax Italiano 2003
Chacarita Juniors 2004-2005
Luján 2006–present
References
Profile at BDFA
Profile at Futbol XXI
Category:1981 births
Category:Living people
Category:Argentine footballers
Category:Argentine expatriate footballers
Category:Chacarita Juniors footballers
Category:Club Atlético Huracán footballers
Category:Huachipato footballers
Category:Audax Italiano footballers
Category:S.C. Campomaiorense players
Category:Chilean Primera División players
Category:Argentine Primera División players
Category:Expatriate footballers in Chile
Category:Expatriate footballers in Portugal
Category:Association footballers not categorized by position |
Q:
If I install node.js on my server do I need to install Ubuntu or other server OS?
Question If I install node.js on my server do I need to install Ubuntu or another server OS?
Background: I'm creating a Droplet on Digital Ocean. I am going to use the droplet to host a website that will have a Discourse-powered forum. When I create the droplet I have several options: Ubuntu, FreeBSD, Fedora, Debian, CoreOS and CentOS. I also have the option to install none of these. Should I install one of these OSes? Or, alternatively, does node.js do a good enough job by itself? Being new to node.js, do I need something like NPM?
Bonus Point: Although I know HTML/CSS/Javascript I've always used GoDaddy in the past and I've never had to set up a server before.
A:
node.js is not an operating system; it is a JavaScript runtime that runs on top of one. You need something, be it Linux (Fedora, Ubuntu, etc.), Windows or BSD, but you probably just want to start with Ubuntu. As for NPM: yes, you will want it, and it comes bundled with current Node.js installs.
See this post for a good list of resources on getting started: https://stackoverflow.com/questions/2353818/how-do-i-get-started-with-node-js
|
---
abstract: |
The holographic Weyl anomalies for GJMS operators (or conformal powers of the Laplacian) are obtained in four and six dimensions. In the context of the AdS/CFT correspondence, free conformal scalars with higher-derivative kinetic operators are induced by an ordinary second-derivative massive bulk scalar. At the one-loop quantum level, the duality dictionary for partition functions entails an equality between the functional determinants of the corresponding kinetic operators and, in particular, it provides a holographic route to their Weyl anomalies. The heat kernel of a single bulk massive scalar field encodes the Weyl anomaly (type-A and type-B) coefficients for the whole tower of GJMS operators whenever they exist, as in the case of Einstein manifolds where they factorize into products of Laplacians.\\
While a holographic derivation of the type-A Weyl anomaly was already worked out some years back, in this note we compute holographically (for the first time, to the best of our knowledge) the type-B Weyl anomaly for the whole family of GJMS operators in four and six dimensions. There are two key ingredients that enable this novel holographic derivation, which would otherwise be quite a daunting task: (i) a simple prescription for obtaining the holographic Weyl anomaly for higher-curvature gravities, previously found by the authors, that allows one to read off the anomaly coefficients directly from the bulk action; and (ii) an implied WKB-exactness, after resummation, of the heat kernel for the massive scalar on a Poincaré-Einstein bulk metric with an Einstein metric on its conformal infinity.\\
The holographically computed Weyl anomaly coefficients are explicitly verified on the boundary by exploiting the factorization of GJMS operators on Einstein manifolds and working out the relevant heat kernel coefficient.\\
address:
- '${\\S} $ Departamento de Matemática y Física Aplicadas, Universidad Católica de la Santísima Concepción, Alonso de Ribera 2850, Concepción, Chile'
- '${\\dag} $ Departamento de Ciencias Fisicas, Universidad Andres Bello, Autopista Concepcion-Talcahuano 7100, Talcahuano, Chile'
author:
- 'F. Bugini $^{\\S}$ and D.E. Diaz $^{\\dag}$'
title: 'Holographic Weyl anomaly for GJMS operators: one Laplacian to rule them all'
---
\\
Introduction
============
\\
Conformal powers of the Laplacian $P_{2k}$ (or GJMS operators for short [@GJMS92]) are higher-derivative generalizations of the conformal Laplacian or Yamabe operator of the form $$P_{2k}=\\Delta^k+LOT$$ with principal part given by an integer power of the Laplacian and complemented by lower order (in derivative) terms (LOT) built up out of the Ricci tensor and covariant derivatives. They first arose within the general Fefferman-Graham program [@FG85] induced by the $k$-th power of the ambient Laplacian $\\tilde{\\Delta}^k$ and allowed Branson’s characterization of the Q-curvature in general even dimensions as given by their zeroth order term[^1] [@Bra93; @Bra95].\\
In the alternative Fefferman-Graham formulation, where the ambient metric is traded for a Poincaré-Einstein metric in one dimension lower, the conformal structures are realized on the conformal boundary at infinity. This latter approach, which provides geometric roots for the celebrated AdS/CFT correspondence in physics [@Malda; @GKP98; @Wit98], leads to a description of GJMS operators as residues of the scattering operator (aka the two-point correlation function in CFT phraseology), as established by Graham and Zworski [@GZ03]. The (critical) Q-curvature also arises in this context in connection with the volume asymptotics of the Poincaré-Einstein metric. When the dimensionality of the conformal boundary is odd, the renormalized volume is related to the bulk integral of the Q-curvature via the Chern-Gauss-Bonnet formula [@A01; @Albin:2005qka; @Chang:2005ska]; when the dimensionality of the conformal boundary is even, in turn, the boundary integral of the Q-curvature is the volume anomaly or, equivalently, the renormalized volume is the conformal primitive of the Q-curvature [@GZ03; @HS98; @Gra99].
Now, it was in the study of functional determinants of conformally invariant differential operators, such as the GJMS operators, that the Q-curvature made its first appearance [@BO91]. The infinitesimal variation of the determinant under a conformal (or Weyl) rescaling of the metric reveals the conformal (or Weyl, or trace) anomaly; whereas the corresponding finite variation, i.e. its conformal primitive, leads to generalized Polyakov formulas [@Pol81]. The Q-curvature arose in this context as a particular combination of local curvature invariants with a linear transformation law under conformal rescaling of the metric, playing the analog role of the Gaussian curvature on surfaces. Graham [@Gra99] already noticed that the conformal invariance properties of the renormalized volume of a Poincaré-Einstein metric are reminiscent of those of the functional determinants of conformally invariant differential operators, e.g. the conformal Laplacian and higher-order GJMS operators, being conformally invariant in odd dimensions but having an anomaly in even dimensions. On the other hand, the properties of the volume anomaly are similar to those of the constant term in the expansion of the integrated heat kernel for the conformally invariant differential operator, which vanishes in odd dimensions but in even dimensions is a conformal invariant obtained by integrating a local expression in the curvature, namely the conformal anomaly.\\
Remarkably, a ‘holographic formula’ stemming from AdS/CFT heuristics[^2] provided a direct link between the renormalized volume of the (d+1)-dimensional bulk Poincaré-Einstein metric and functional determinants on the d-dimensional conformal boundary $$\\frac{\\det_{-}[-\\nabla^2+m^2]}{\\det_{+}[-\\nabla^2+m^2]}\\bigg{|}_{bulk}
=\\det\\,\\langle O_{\\lambda} O_{\\lambda}\\rangle\\bigg{|}_{bndry}~$$\\
The bulk side contains the one-loop effective action for a massive scalar computed with the resolvent and spectral parameter $\\lambda_+=\\frac{d}{2}+\\nu$ and its analytic continuation to $\\lambda_-=\\frac{d}{2}-\\nu$. The boundary counterpart contains the functional determinant of the two-point function of the dual boundary operator $O_{\\lambda}$, a nonlocal integral kernel corresponding to the scattering operator for the radial propagation in the bulk interior. The relation between bulk mass of the scalar field and boundary scaling dimension is, according to the AdS/CFT dictionary, given by $m^2=-\\frac{d^2}{4}+\\nu^2$. The formula originated in an attempt to compute an $O(1)$ correction to the partition function under the renormalization group (RG) flow triggered by a boundary double-trace deformation [@Gubser:2002zh; @Gubser:2002vv; @Hartman:2006dy; @Diaz:2007an]. The residues of the scattering operator at its poles become conformally invariant differential operators that in the case of the bulk massive scalar field[^3] ($\\nu\\rightarrow k$, $k=1,2,3,...$) correspond to the family of GJMS operators $P_{2k}$ $$\\frac{\\det_{-}[-\\nabla^2-\\frac{d^2}{4}+k^2]}{\\det_{+}[-\\nabla^2-\\frac{d^2}{4}+k^2]}\\bigg{|}_{bulk}
=\\det\\, P_{2k}\\bigg{|}_{bndry}~$$\\
In the conformal class of round metrics on the spheres, the similarities noticed before get promoted to a full-fledged equality because on the bulk side the volume of Euclidean AdS (or hyperbolic space) factorizes in the effective action due to its homogeneity[^4]. In this way, for even d, a Polyakov formula for the determinant of the GJMS operators was ‘holographically’ obtained [@Diaz:2008hy] and, perhaps more importantly, the two chief roles of the Q-curvature were directly connected. In particular, a compact formula for the type-A Weyl anomaly coefficient was obtained[^5] from the bulk Green’s function (or resolvent) at coincident points.
A subsequent extension of this clean entry of the AdS/CFT dictionary beyond conformal flatness has remained stalled ever since. Two main obstacles become readily apparent. One is the absence of a viable holographic route to compute the type-B Weyl anomaly in higher-derivative gravities; this is to be contrasted with the simple prescription of evaluating the bulk action at the AdS background to obtain the type-A Weyl anomaly [@ISTY99]. Second, powers of the Weyl tensor and its derivatives will appear in the heat kernel coefficients to all orders; this is again to be contrasted with the well-known WKB-exactness of the heat kernel in the AdS background [@Camporesi90; @Grigorian98; @Gopakumar:2011qs] that leaves only the first few terms after resummation.
It is the aim of this note to show how these difficulties can be overcome and to present a holographic derivation of both type-A and type-B Weyl anomaly coefficients for the whole family of GJMS operators in four and six dimensions. We start in Section 2 by first going to a generic compact Einstein manifold on the boundary, exploiting the factorization of GJMS operators into Laplacians, and computing the constant term of their heat kernel expansion in four and six dimensions so as to have the Weyl anomaly beforehand. Section 3 is devoted to the main contribution of this paper, namely the holographic derivation of the Weyl anomaly by considering the heat kernel of the bulk scalar in the corresponding bulk Poincaré-Einstein metric and the resummation that must occur in order to meet the (by now expected) central charges. In the conclusion, Section 4, we summarize and discuss our results. In Appendix A we provide more details about the WKB-exactness and the resummation properties of the bulk scalar heat kernel on the relevant Poincaré-Einstein metric.\\
Weyl anomaly for GJMS: take I
=============================
\\
Let us start by examining the GJMS operators on an even d-dimensional compact manifold, where the very existence of the “supercritical” ones, i.e. $P_{2k}$ with $k>d/2$, is not guaranteed in general. Even if they exist, as in the case of Einstein manifolds, their higher-derivative nature precludes the use of standard heat kernel methods. In the conformal class of round spheres, nevertheless, Branson’s factorization of GJMS operators into products of Laplacians [@Bra95] comes to the rescue, and the type-A Weyl anomaly coefficient can be worked out either by adding the constant terms of the heat expansion for the individual Laplacians or by zeta function regularization [@Dowker:2010qy].\\
In going beyond the conformally flat class of round metrics on the spheres, as required to access the type-B Weyl anomaly, the leap forward we need is facilitated by Gover’s remarkable extension of the factorization of GJMS operators to the more general case of Einstein manifolds [@Gover06]\\
\\
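In a general dimension $d$, the factorization on an Einstein manifold takes the form (quoted here in the normalization that reproduces the four- and six-dimensional products written out below)
$$P_{2k}=\\prod_{i=0}^{k-1}\\left[-\\nabla^2+\\frac{\\left(\\frac{d}{2}+i\\right)\\left(\\frac{d}{2}-1-i\\right)}{d(d-1)}\\,R\\right]$$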
starting ($i=0$) with the conformal Laplacian or Yamabe operator $$Y=-\\nabla^2 + \\frac{d-2}{4(d-1)}R$$ The contribution of each Laplacian to the functional determinant, and to the anomaly, can then be computed with standard heat kernel techniques. In addition, as has already been noticed and successfully put to use [@Bugini:2016nvn; @Beccaria:2017dmw; @Acevedo:2017vkk], although the Einstein condition brings in many simplifications, the curvature invariants that enter the type-B Weyl anomaly remain independent and their coefficients can be efficiently obtained by this shortcut route.
\\
Factorization and heat kernel at 4D: two birds, one stone
---------------------------------------------------------
\\
As explained before, a direct way to work out the Weyl anomaly for the GJMS operators is to exploit their factorization on a generic compact Einstein manifold, look for the relevant heat kernel coefficient for each individual factor and then add them all up. We will then need the $b_4$ heat coefficient for each of the “shifted Laplacians” in the product
$$P_{2k}=\\prod_{i=0}^{k-1}\\left[-\\nabla^2+\\frac{(2+i)(1-i)}{12}R\\right]$$
\\
Each shifted Laplacian has the form $-\\nabla^2-E$, where $E$ is an endomorphism (see e.g. [@BFT00] for details) and it is straightforward to get the heat coefficient restricted to the Einstein metric
$$b_4^{(i)}=\\left(\\frac{i^2(i+1)^2}{288}-\\frac{1}{2160}\\right)R^2+\\frac{1}{180}W^2$$
\\
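For the reader’s convenience, here is a quick sketch of how this follows from the standard $b_4$ coefficient of a Laplace-type operator $-\\nabla^2-E$ (with $E=-\\frac{(2+i)(1-i)}{12}R$ in the present case), using the 4D Einstein-metric relations $Ric^2=R^2/4$ and $Riem^2=W^2+R^2/6$:
$$b_4=\\frac{1}{2}E^2+\\frac{1}{6}RE+\\frac{1}{180}Riem^2-\\frac{1}{180}Ric^2+\\frac{1}{72}R^2+\\textrm{total derivatives}$$
which, upon substitution, reproduces the quoted $b_4^{(i)}$.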
Now we simply have to add up the contributions of the individual Laplacians to get the Weyl anomaly for the 4D GJMS operators
$$\\label{boundary-4D}
\\mathcal{A}_4[P_{2k}]=\\sum_{i=0}^{k-1}b_{4}^{(i)}=\\left(\\frac{k^5}{240}-\\frac{k^3}{144}\\right)\\frac{R ^2}{6}+\\frac{k}{180}W^2$$
\\
Then, regarding the Weyl anomaly basis in 4D, one can trade the Euler density $E_4$ for the Q-curvature ${\\mathcal Q}_4$ (type-A) and keep the Weyl tensor squared $W^2\\equiv W_{abcd}W^{abcd}$, which is the obvious independent Weyl-invariant local curvature combination (type-B). The full information on $a$ and $c$ can be gained in one go[^6] by considering the generic Einstein metric $g_{_E}$, since then the Q-curvature reduces to a multiple of the Ricci scalar squared, ${\\mathcal Q}_4=R^2/24$, and the Weyl tensor squared remains unchanged; therefore we have the following rewriting [@Bugini:2016nvn]
$$\\begin{aligned}
\\mathcal{A}_4 &=&-a\\,E_4 \\,+\\,c\\,W^2\\\\\\nonumber
\\\\
\\nonumber
&=&-4 a\\,{\\mathcal Q}_4 \\,+\\,(c-a)\\,W^2 \\nonumber\\\\\\nonumber
\\\\
\\nonumber
&=&- a\\,R^2/6 \\,+\\,(c-a)\\,W^2 \\nonumber\\end{aligned}$$
\\
Comparing the above relation with the accumulated heat coefficient of the “shifted Laplacians”, we finally obtain the Weyl anomaly coefficients for the whole GJMS family in 4D
\\
\\
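Reading off the coefficients by comparison of the last two displays (a reconstruction of the result, quoted here for convenience),
$$a_k=\\frac{k^3}{144}-\\frac{k^5}{240}~, \\qquad\\qquad c_k-a_k=\\frac{k}{180}$$
For $k=1$ this gives $a_1=1/360$ and $c_1=1/120$, the familiar values for a single conformally coupled scalar, as it should.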
Two remarks are worth mentioning here. First, the quintic polynomial $a_k$ follows as well from the generic expression found in [@Diaz:2008hy] and corroborated by explicit zeta regularization in [@Dowker:2010qy]. Second, only the shifted type-B anomaly coefficient turns out to be linear in $k$ and, in consequence, meets the holographic expectation of [@Mansfield:1999kk; @Mansfield:2003gs] on Ricci-flat backgrounds.\\
Factorization and heat kernel at 6D: four birds, one stone
----------------------------------------------------------
\\
In 6D, we follow the same procedure as in 4D. The factorization of the GJMS operators in terms of “shifted Laplacians” is now given by
$$P_{2k}=\\prod_{i=0}^{k-1}\\left(-\\nabla^2+\\frac{(3+i)(2-i)}{30}R\\right)$$
The endomorphism term is $E=-\\frac{(3+i)(2-i)}{30}R$ and we denote $d_i=\\frac{(3+i)(2-i)}{30}$. The relevant heat-kernel coefficient of the individual Laplacians can be worked out (see e.g. [@BFT00]) and the raw result on a 6D Einstein metric, modulo trivial total derivatives, reads [^7]
$$\\begin{aligned}
&b^{(i)}_{6}=& -\\frac{d_i^3}{6}R ^3+\\frac{d_i^2}{12}R ^3-d_i\\left(\\frac{1}{180}RRiem^2-\\frac{1}{180}RRic^2+\\frac{1}{72}R^3\\right)\\\\\\nonumber
\\\\
\\nonumber
&&+\\frac{1}{7!}\\left(-3|\\nabla Riem|^2+\\frac{44}{9}Riem ^3 - \\frac{80}{9}Riem'^3-\\frac{16}{3}RicRiem^2\\right. \\\\\\nonumber
\\\\
\\nonumber
&&\\left.+\\frac{14}{3}RRiem^2-\\frac{8}{3}RiemRic^2+\\frac{8}{9}Ric^3-\\frac{14}{3}RRic^2+\\frac{35}{9}R^3\\right)\\\\\\nonumber\\end{aligned}$$
\\
On the Einstein metric there are many simplifications: the Cotton tensor, the Bach tensor and the traceless part of the Ricci tensor all vanish. Nonetheless, the type-A and the three type-B terms remain independent [@Bugini:2016nvn]. We keep a generic 6D Einstein boundary metric $g_{_E}$ so that the Einstein condition reduces the Q-curvature to a multiple of the Ricci scalar cubed, ${\\mathcal Q}_6=R^3/225$; the two cubic contractions of the Weyl tensor, denoted by $I_1=W'^{\\,3}$ and $I_2=W^3$, remain unchanged; while the third Weyl invariant reduces to $I_3=W\\nabla^2W - \\frac{8}{15} R\\,W^2$ modulo the trivial total derivative $\\frac{3}{2}\\nabla^2W^2$ (see e.g. [@Osborn:2015rna]) that we omit in what follows. The 6D Weyl anomaly can then be cast in the following convenient form
$$\\begin{aligned}
{\\mathcal A}_6&=&-{a}\\,E_6\\,+\\,{c_1}\\,I_1\\,+\\,{c_2}\\,I_2\\,+\\,{c_3}\\,I_3 \\\\
\\nonumber\\\\\\nonumber
\\qquad\\quad&=&-48\\,a\\,{\\mathcal Q}_6+(c_1-96a)I_1+(c_2-24a)I_2+(c_3+8a)I_3\\\\
\\nonumber\\\\\\nonumber
\\qquad\\quad&=&-16\\,a\\,R^3/75+(c_1-96a)I_1+(c_2-24a)I_2+(c_3+8a)I_3\\end{aligned}$$
$$\\begin{array}{|r c| c| c| c| c| c|} \\hline
& \\mbox{Curvature invariant } & {\\mathcal Q}_6=R^3/225 & I_1 & I_2 & I_3 \\\\
\\hline {A}_{10}\\quad\\vline & {R}^{\\,3} &225 & -&- &- \\\\
\\hline {A}_{11}\\quad\\vline & {R}{R}ic^{\\,2} & 75/2 & -&- &- \\\\
\\hline {A}_{12}\\quad\\vline & {R}{R}iem^{\\,2} &15 &20 &-5 &-5 \\\\
\\hline {A}_{13}\\quad\\vline & {R}ic^{\\,3} & 25/4& -& -& -\\\\
\\hline {A}_{14}\\quad\\vline & {R}iem \\, {R}ic^{\\,2} &25/4 & -&- &- \\\\
\\hline {A}_{15}\\quad\\vline & {R}ic \\, {R}iem^{\\,2} & 5/2 &10/3 &-5/6 &-5/6 \\\\
\\hline {A}_{16}\\quad\\vline & {R}iem^{\\,3} & 1 & 4 & 0& -1 \\\\
\\hline {A}_{17}\\quad\\vline & -{R}iem'^{\\,3} &1 & -2& 1/4&1/4 \\\\
\\hline {A}_{5}\\quad\\vline & |{\\nabla}{R}iem|^{2} &- &-32/3 &8/3 &5/3 \\\\
\\hline
\\end{array}$$\\
Making use of the table above to go to the standard anomaly basis and adding up the heat coefficients of the individual Laplacians (tedious but straightforward) we end up with $$\\begin{aligned}
\\label{boundary-6D}
\\qquad
7! \\; {\\mathcal A}_6[P_{2k}]&=&7! \\sum_{i=0}^{k-1} b_6^{(i)}\\\\
\\nonumber\\\\
&=& -\\frac{16}{75} \\left(\\frac{-3k^7+21k^5-28k^3}{144}\\right)R^3\\nonumber\\\\\\nonumber\\\\
&&+\\frac{14(k^3-k)}{9}\\left(4I_{1}-I_{2}-I_{3}\\right)- \\frac{k}{9}\\left(24I_{1}-30I_2-13I_{3}\\right)\\nonumber\\end{aligned}$$\\
From this expression for the accumulated heat coefficients for the shifted Laplacians we finally read off the 6D Weyl anomaly for the whole GJMS tower
\\
\\
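Reading off the coefficients (again a reconstruction, obtained by direct comparison with the anomaly basis above),
$$7!\\;a_k=\\frac{-3k^7+21k^5-28k^3}{144}~,\\qquad 7!\\,(c_1-96a)_k=\\frac{56k^3-80k}{9}~,$$
$$7!\\,(c_2-24a)_k=\\frac{-14k^3+44k}{9}~,\\qquad 7!\\,(c_3+8a)_k=\\frac{-14k^3+27k}{9}$$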
Again, two remarks are in order here. First, the polynomial $a_k$ also follows from the generic expression found in [@Diaz:2008hy; @Dowker:2010qy]. Second, on Ricci-flat backgrounds the Q-curvature vanishes and $I_3=4\\,I_1-I_2$, so that the combined coefficients in front of the two independent Weyl invariants, say $I_1$ and $I_2$, turn out to be linear in $k$, as can be readily verified, and therefore agree with the holographic expectation of [@Mansfield:2003bg] (see also [@Liu:2017ruz]).\\
Weyl anomaly for GJMS: take II
==============================
\\
Let us now turn to our main thrust and try to elucidate the way in which the information on the Weyl anomaly is encoded in the “hologram”, namely the bulk massive scalar. We proceed in two steps. First, we consider the holographic formula for a bulk Poincaré-Einstein metric with the Einstein metric from before in the boundary conformal class, following the prescription put forward in [@Bugini:2016nvn] that allows one to read off the Weyl anomaly coefficients in higher-curvature gravities.
$$\\nonumber
\\hat{g}_{_{PE}}=\\frac{dx^2+(1-\\lambda x^2)^2g_{_E}}{x^2}$$
with $\\lambda=\\frac{R}{4d(d-1)}$ proportional to the boundary Ricci scalar.\\
At first sight this seems to be of little help because the heat kernel coefficients, in particular those depending on the nonvanishing Weyl tensor, will be present to all orders so that there will be infinitely many higher-curvature terms in the bulk one-loop effective action.\\
In a second step, and despite the above caveat, we compute the Weyl content of the first few heat coefficients. With this partial information at hand and under the crucial assumption of WKB-exactness after resummation, we are able to correctly reproduce the Weyl anomaly coefficients for the whole tower of GJMS in four and in six dimensions, as explained in what follows.\\
Holographic derivation from 5 to 4 dims
---------------------------------------
\\
We consider therefore the holographic formula in the above Poincaré-Einstein metric on the bulk and the corresponding generic compact Einstein metric on the boundary $$\\frac{Z^{^{(-)}}_{_{\\text{MS}}}}{Z^{^{(+)}}_{_{\\text{MS}}}}\\bigg{|}_{_{PE}}\\,=\\,Z_{_{\\text{GJMS}}}\\bigg{|}_{E}$$ with the bulk one-loop effective action given by the functional determinants of the massive scalar field[^8] $$\\begin{aligned}
Z^{^{(+)}}_{_{\\text{MS}}}\\bigg{|}_{_{PE}}\\,=\\,\\left[ \\det\\left\\{-\\hat{\\nabla}^{2}+m_k^2\\right\\}\\right]^{-1/2}\\end{aligned}$$\\
We first recall the WKB-exact heat expansion in $AdS_5$ [@Camporesi90; @Grigorian98; @Gopakumar:2011qs]. Although there are infinitely many heat kernel coefficients, after factorization of the exponential factor $e^{-4t}$ only the first two remain in five dimensions
$$\\begin{aligned}
\\mbox{massive scalar $m_k^2=k^2-4$: \\qquad tr}\\,e^{\\{\\hat{\\nabla}^{2}-k^2 +4\\}t}\\bigg{|}_{_{AdS_5}}\\,=\\, \\frac{1+\\frac{2}{3}t }{(4\\pi t)^{5/2}}~e^{-k^2t}\\end{aligned}$$
\\
We need now to depart from $AdS_5$ and determine the pure-Weyl content of the heat kernel on the Poincaré-Einstein metric. The first contribution arises with $\\hat{b}_4$
$$\\begin{aligned}
\\hat{b}_4&\\sim \\frac{1}{180} \\,\\hat{W}^2\\end{aligned}$$
\\
The relevant terms in the next heat coefficient $\\hat{b}_6$ are the following
$$\\begin{aligned}
&\\hat{b}_6\\sim &\\frac{1}{7!}\\left(-3|\\hat{\\nabla}\\hat{R}iem|^2+\\frac{44}{9}\\hat{R}iem ^3 - \\frac{80}{9}\\hat{R}iem'^3-\\frac{16}{3}\\hat{R}ic\\hat{R}iem^2\\right. \\\\\\nonumber
\\\\
\\nonumber
&&\\left.+\\frac{14}{3}\\hat{R}\\hat{R}iem^2-\\frac{8}{3}\\hat{R}iem\\hat{R}ic^2+\\frac{8}{9}\\hat{R}ic^3-\\frac{14}{3}\\hat{R}\\hat{R}ic^2+\\frac{35}{9}\\hat{R}^3\\right)\\end{aligned}$$
\\
We now follow the prescription of [@Bugini:2016nvn] and go to the particular basis of Weyl invariants given by two independent cubic contractions, $\\hat{W}^3$ and $\\hat{W}'^3$, and the third one given by the 5D Fefferman-Graham invariant $\\hat{\\Phi}_5=|\\nabla \\hat{W}|^2-8\\hat{W}^2$
$$\\begin{aligned}
&\\hat{b}_6\\sim & -\\frac{1}{45} \\,\\hat{W}^2 - \\frac{1}{7!}\\left(\\, \\frac{80}{9}\\,\\hat{W}'^3 -\\, \\frac{44}{9}\\,\\hat{W}^3+\\,3\\,\\hat{\\Phi}_5\\right)\\end{aligned}$$
\\
We tabulate the dictionary below for convenience[^9].
$$\\begin{array}{|r c| c| c| c| c|} \\hline
& \\mbox{Curvature invariant } & \\hat{\\mathit{W}}^2 & \\hat{W}'^{\\,3}& \\hat{W}^{3} & \\hat{\\Phi}_5 \\\\
\\hline \\widehat{A}_{10}\\quad\\vline & \\widehat{R}^{\\,3} & -& -&- &- \\\\
\\hline \\widehat{A}_{11}\\quad\\vline & \\widehat{R}\\widehat{R}ic^{\\,2} & -& -&- &- \\\\
\\hline \\widehat{A}_{12}\\quad\\vline & \\widehat{R}\\widehat{R}iem^{\\,2} & -20 & -& -& -\\\\
\\hline \\widehat{A}_{13}\\quad\\vline & \\widehat{R}ic^{\\,3} & -& -& -& -\\\\
\\hline \\widehat{A}_{14}\\quad\\vline & \\widehat{R}iem \\, \\widehat{R}ic^{\\,2} & -&- &- &- \\\\
\\hline \\widehat{A}_{15}\\quad\\vline & \\widehat{R}ic \\, \\widehat{R}iem^{\\,2} & -4 & -& -& -\\\\
\\hline \\widehat{A}_{16}\\quad\\vline & \\widehat{R}iem^{\\,3} & -6 & -&1 &- \\\\
\\hline \\widehat{A}_{17}\\quad\\vline & -\\widehat{R}iem'^{\\,3} & 3/2& -1 & - &- \\\\
\\hline \\widehat{A}_{5}\\quad\\vline & |\\hat{\\nabla}\\widehat{R}iem|^{2} &8 &- &- &1 \\\\
\\hline
\\end{array}$$\\
After the dust has settled, we realize that the $-1/45 \\hat{W}^2$ in $\\hat{b}_6$ can be absorbed by the $e^{-4t}$ factor that resums the pure-Ricci terms and results in the well-known WKB-exactness of the heat kernel expansion in odd-dimensional hyperbolic space: expanding $e^{-4t}$ against the $\\frac{1}{180}\\hat{W}^2\\,t^2$ term of $\\hat{b}_4$ produces exactly $-4\\times\\frac{1}{180}\\,\\hat{W}^2=-\\frac{1}{45}\\,\\hat{W}^2$ at order $t^3$. The remaining Weyl invariant terms in $\\hat{b}_6$ do not contribute to the holographic anomaly. Assuming that this WKB-exactness extends to the $\\hat{W}^2$ term, the contribution of the one-loop effective Lagrangian of the massive bulk scalar to the holographic Weyl anomaly comes exclusively from the following combination of pure-Ricci (numbers, since we set the radius of the asymptotic hyperbolic metric to unity) and pure-Weyl pieces\\
$$\\begin{aligned}
\\int_{0}^{\\infty}\\frac{dt}{t^{7/2}}e^{-k^2t}\\left\\{1 + \\frac{2}{3}t + \\frac{1}{180}\\hat{W}^2 t^2 + ...\\right\\}\\end{aligned}$$\\
where the ellipsis stands for higher curvature pure-Weyl invariants that do not contribute to the 4D holographic Weyl anomaly. After proper time integration we obtain for the one-loop effective Lagrangian (modulo an overall normalization factor that can be easily worked out)\\
$$\\begin{aligned}
\\mathcal{L}^{^{(\\text{GJMS})}}_{\\text{1-loop}}=\\,& \\frac{4}{3}\\left(\\frac{k^5}{5}-\\frac{k^3}{3}\\right)\\cdot\\hat{1} + \\frac{k}{180}\\cdot\\hat{W}^2 + ...\\end{aligned}$$ The holographic recipe [@Bugini:2016nvn] tells us then how to read the anomaly: the volume part (pure-Ricci) $\\hat{1}$ ‘descends’ to the 4D Q-curvature and the pure-Weyl quadratic contraction of the 5D Weyl tensor ‘descends’ to the analog contraction of the 4D Weyl tensor. In all, the holographic Weyl anomaly one reads off is simply given by $$\\begin{aligned}
{\\mathcal A}_4[P_{2k}]=& -4 \\left(\\frac{k^3}{144}-\\frac{k^5}{240}\\right)\\,{\\mathcal Q}_4 + \\frac{k}{180}\\,W^2\\end{aligned}$$\\
in perfect and remarkable agreement with the boundary computation (eqn.\\[boundary-4D\\]).\\
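For completeness, the proper-time integrations used above amount to the single identity (understood in the sense of analytic continuation in the spectral parameter)
$$\\int_{0}^{\\infty}\\frac{dt}{t^{1+s}}\\,e^{-k^2t}\\,=\\,\\Gamma(-s)\\,k^{2s}$$
so that, with $\\Gamma(-\\tfrac{5}{2})=-\\tfrac{8\\sqrt{\\pi}}{15}$, $\\Gamma(-\\tfrac{3}{2})=\\tfrac{4\\sqrt{\\pi}}{3}$ and $\\Gamma(-\\tfrac{1}{2})=-2\\sqrt{\\pi}$, the three terms retained in the proper-time integral above add up to $-2\\sqrt{\\pi}\\left[\\frac{4}{3}\\left(\\frac{k^5}{5}-\\frac{k^3}{3}\\right)+\\frac{k}{180}\\,\\hat{W}^2\\right]$, i.e. precisely the combination quoted, with overall normalization $-2\\sqrt{\\pi}$.\\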
Holographic derivation from 7 to 6 dims
---------------------------------------
\\
We move on now to seven dimensions. The WKB-exact heat expansion in $AdS_7$ [@Camporesi90; @Grigorian98; @Gopakumar:2011qs] requires factorization of the exponential factor $e^{-9t}$ so that only the first three terms remain in seven dimensions
$$\\begin{aligned}
\\mbox{massive scalar $m_k^2=k^2-9$: \\qquad tr}\\,e^{\\{\\hat{\\nabla}^{2}-k^2 +9\\}t}\\bigg{|}_{_{AdS_7}}\\,=\\, \\frac{1+2t+\\frac{16}{15}t^2 }{(4\\pi t)^{7/2}}~e^{-k^2t}\\end{aligned}$$
\\
To depart from $AdS_7$ and the conformally flat class of bulk and boundary metrics, we consider the pure-Weyl content of the heat kernel on the bulk Poincaré-Einstein metric. The first nontrivial contribution arises again with $\\hat{b}_4$
$$\\begin{aligned}
\\hat{b}_4&\\sim \\frac{1}{180} \\,\\hat{W}^2\\end{aligned}$$
The next contribution comes from the next heat coefficient $\\hat{b}_6$
$$\\begin{aligned}
&\\hat{b}_6\\sim &\\frac{1}{7!}\\left(-3|\\hat{\\nabla}\\hat{R}iem|^2+\\frac{44}{9}\\hat{R}iem ^3 - \\frac{80}{9}\\hat{R}iem'^3-\\frac{16}{3}\\hat{R}ic\\hat{R}iem^2\\right. \\\\\\nonumber
\\\\
\\nonumber
&&\\left.+\\frac{14}{3}\\hat{R}\\hat{R}iem^2-\\frac{8}{3}\\hat{R}iem\\hat{R}ic^2+\\frac{8}{9}\\hat{R}ic^3-\\frac{14}{3}\\hat{R}\\hat{R}ic^2+\\frac{35}{9}\\hat{R}^3\\right)\\end{aligned}$$
\\
The heat coefficients for the scalar Laplacian are universal in the sense that the number in front of each curvature invariant is independent of the dimensionality of the manifold. However, when following the prescription of [@Bugini:2016nvn] and going to the particular basis of Weyl invariants (see table below) given by two independent cubic contractions, $\\hat{W}^3$ and $\\hat{W}'^3$, and the third one given now by the 7D Fefferman-Graham invariant $\\hat{\\Phi}_7=|\\nabla \\hat{W}|^2-8\\hat{W}^2$, we obtain a different result
$$\\begin{aligned}
&\\hat{b}_6\\sim &\\frac{1}{7!}\\left(-\\, \\frac{1916}{9}\\,\\hat{W}'^3 +\\, \\frac{503}{9}\\,\\hat{W}^3-\\, 54\\,\\hat{\\Phi}_7\\right)\\end{aligned}$$
\\
$$\\begin{array}{|r c| c| c| c| c|} \\hline
& \\mbox{Curvature invariant } & \\hat{W}'^{\\,3} & \\hat{W}^{3} & \\hat{\\Phi}_7 \\\\
\\hline \\widehat{A}_{10}\\quad\\vline & \\widehat{R}^{\\,3} & -&- &- \\\\
\\hline \\widehat{A}_{11}\\quad\\vline & \\widehat{R}\\widehat{R}ic^{\\,2} & -&- &- \\\\
\\hline \\widehat{A}_{12}\\quad\\vline & \\widehat{R}\\widehat{R}iem^{\\,2} & -42 & 21/2& -21/2\\\\
\\hline \\widehat{A}_{13}\\quad\\vline & \\widehat{R}ic^{\\,3} & -& -& -\\\\
\\hline \\widehat{A}_{14}\\quad\\vline & \\widehat{R}iem \\, \\widehat{R}ic^{\\,2} & -&- &- \\\\
\\hline \\widehat{A}_{15}\\quad\\vline & \\widehat{R}ic \\, \\widehat{R}iem^{\\,2} & -6 & 3/2& -3/2\\\\
\\hline \\widehat{A}_{16}\\quad\\vline & \\widehat{R}iem^{\\,3} & -6 & 5/2&-3/2 \\\\
\\hline \\widehat{A}_{17}\\quad\\vline & -\\widehat{R}iem'^{\\,3} & 1/2 & -3/8 & 3/8 \\\\
\\hline \\widehat{A}_{5}\\quad\\vline & |\\hat{\\nabla}\\widehat{R}iem|^{2} &8 &-2 &3 \\\\
\\hline
\\end{array}$$\\
We now assume WKB-exactness after factorization of the $e^{-9t}$ factor. The convolution with the exponential must absorb a $-1/20\\, \\hat{W}^2$ contribution to $\\hat{b}_6$ (indeed, $-9\\times\\frac{1}{180}=-\\frac{1}{20}$), which in the 7D case can be rewritten in the Weyl basis $\\left[\\hat{W}'^3,\\hat{W}^3,\\hat{\\Phi}_7\\right]$: in fact, modulo a trivial total derivative, $\\hat{W}^2=\\hat{W}'^3-\\frac{1}{4}\\hat{W}^3+\\frac{1}{4}\\hat{\\Phi}_7$ on the Poincaré-Einstein metric. We thus obtain, under the assumption of WKB-exactness, the following one-loop effective Lagrangian
$$\\begin{aligned}
\\int_{0}^{\\infty}\\frac{dt}{t^{9/2}}e^{-k^2t}\\left\\{1 + 2t + \\frac{16}{15}t^2 + \\frac{1}{180}\\hat{W}^2 t^2 \\right.\\\\
\\nonumber\\\\
\\nonumber
\\left.+\\frac{1}{7!}\\left(\\frac{352}{9}\\,\\hat{W}'^3 -\\, \\frac{64}{9}\\,\\hat{W}^3 +\\,9\\,\\hat{\\Phi}_7\\right)t^3+ ...\\right\\}\\end{aligned}$$
\\
where again the ellipsis stands for higher-curvature terms in the Weyl tensor that do not contribute to the 6D holographic Weyl anomaly. After proper time integration we obtain for the one-loop effective Lagrangian (modulo an overall normalization factor)
$$\\begin{aligned}
\\mathcal{L}^{^{(\\text{GJMS})}}_{\\text{1-loop}}=\\,& \\frac{8}{315} \\left(-3k^7+21k^5-28k^3\\right)\\cdot\\hat{1} \\\\
\\nonumber\\\\
\\nonumber
&-\\frac{14k^3}{3\\cdot 7!}\\cdot\\left(4\\,\\hat{W}'^3-\\,\\hat{W}^3+\\,\\hat{\\Phi}_7\\right)+\\frac{k}{9\\cdot7!}\\cdot\\left(352\\hat{W}'^3-64\\hat{W}^3+81\\hat{\\Phi}_7\\right)+...\\end{aligned}$$
\\
Now, according to the holographic recipe [@Bugini:2016nvn], the holographic Weyl anomaly one reads off from the bulk effective Lagrangian is simply
$$\\begin{aligned}
7! \\; {\\mathcal A}_6[P_{2k}]=& -48\\,\\frac{-3k^7+21k^5-28k^3}{144}\\,{\\mathcal Q}_6 \\\\
\\nonumber\\\\
\\nonumber
&-\\frac{14k^3}{3}\\left(4I_{1}-I_{2}+\\Phi_{6}\\right)+\\frac{k}{9}\\left(352I_{1}-64I_2+81\\Phi_{6}\\right)\\end{aligned}$$
\\
We finally go to the standard basis of 6D Weyl invariants $\\left[I_1, I_2, I_3\\right]$ by use of the dictionary $3\\Phi_{6}=I_3-16I_1+4I_2$
$$\\begin{aligned}
\\qquad
7! \\; {\\mathcal A}_6[P_{2k}]&=& - 48\\,\\frac{-3k^7+21k^5-28k^3}{144}\\,{\\mathcal Q}_6\\nonumber\\\\\\nonumber\\\\
&&+\\frac{14k^3}{9}\\left(4I_{1}-I_{2}-I_{3}\\right)- \\frac{k}{9}\\left(80I_1-44I_2-27I_{3}\\right)\\nonumber\\end{aligned}$$
\\
and get perfect agreement with the outcome of the boundary computation (eqn. \\[boundary-6D\\]).\\
\\
\\
Conclusion
==========
\\
We have shown how a single bulk Laplacian governs the whole family of boundary GJMS operators and, in particular, how the conformal anomaly is encoded in the bulk heat kernel. Clearly, the alleged WKB-exactness of the bulk scalar heat kernel on the Poincaré-Einstein metric deserves further analysis, and an independent confirmation thereof would be desirable. The boundary computation of the anomaly was facilitated by the factorization of the GJMS operator on a generic Einstein manifold and by the fact that the Einstein condition, besides bringing many simplifications, does not spoil the independence of the curvature invariants that enter the type-A and type-B Weyl anomaly.
It would be interesting to explore the connection between the one-loop information encoded in the present holographic formula and one-loop Witten diagrams (see e.g. [@Giombi:2017hpr]). For example, one- and two-point correlators of the boundary stress tensor computed from graphs with one and two graviton legs, respectively, with the bulk scalar running in the loop ought to render the $a$ and the $c_T$ central charges[^10].
One subtle feature of the present computation, which we leave as a future direction, is the following. There is an ambiguity in the construction of GJMS operators given by the addition of terms containing the Weyl tensor. For example, one can add to the Paneitz $P_4$ operator a constant times $W^2$ without changing its conformal properties. In the case of $P_6$ in 6D, besides any of the three Weyl invariants $I_1$, $I_2$ and $I_3$, there is also the freedom to add another term quadratic in the Weyl tensor and in covariant derivatives (see e.g. [@Osborn:2015rna; @Rajagopal:2015lpa]). These additional Weyl terms will certainly modify the conformal anomaly of the differential operators. The choice implied by the factorization on Einstein manifolds that we have made use of clearly singles out the pure-Ricci GJMS operators, with no additional terms containing the Weyl tensor. It remains to be elucidated how the possible additional Weyl terms find their way into the holographic picture.\\
\\
Acknowledgement {#acknowledgement .unnumbered}
===============
\\
We are grateful to S.Acevedo, R.Aros, H.Dorn, S.Dowker, R.Olea, A.Torrielli and A.Tseytlin for valuable discussions. The work of F.B. was partially funded by grant CONICYT-PCHA/Doctorado Nacional/2014-21140283. D.E.D. acknowledges support from project UNAB DI 14-18/REG and is also grateful to the Galileo Galilei Institute for Theoretical Physics (GGI) for the hospitality and INFN for partial support during the stay at the program “New Developments in AdS3/CFT2 Holography” and to the Quantum Field and String Theory Group at Humboldt University of Berlin for the kind invitation and the opportunity to present the results reported here.
WKB-exactness of the scalar Laplacian {#app.A}
=====================================
\\
In this appendix, we explicitly compute the first heat coefficients and illustrate the way they get rearranged after factorization of the exponential factor.\\
#### **5D PE/E**
$$\\begin{aligned}
\\mbox{tr}\\,e^{\\{\\hat{\\nabla}^{2}\\}t}\\bigg{|}_{_{PE}}\\,=&\\, \\frac{1}{(4\\pi t)^{5/2}}\\left\\{ \\,1\\,-\\,\\frac{10}{3} \\,t\\,
+\\,\\frac{16}{3} \\,t^2\\,+\\,\\frac{1}{180} \\,\\hat{W}^2\\,t^2 \\right.
\\\\
\\nonumber
\\\\
\\nonumber &\\left. -\\,\\frac{16}{3} \\,t^3\\,-\\,\\frac{1}{45} \\,\\hat{W}^2\\,t^3\\,- \\frac{1}{7!}\\left(\\, \\frac{80}{9}\\,\\hat{W}'^3 -\\, \\frac{44}{9}\\,\\hat{W}^3+\\,3\\,\\hat{\\Phi}_5\\right)\\,t^3\\,+\\mathcal{O}(t^4)\\,\\right\\}\\\\
\\nonumber
\\\\
\\nonumber
\\,=&\\, \\frac{e^{-4t}}{(4\\pi t)^{5/2}}\\left\\{ \\,1\\,+\\,\\frac{2}{3} \\,t\\, +\\,\\frac{1}{180} \\,\\hat{W}^2\\,t^2\\, \\right.
\\\\
\\nonumber
\\\\
\\nonumber &\\left. -
\\frac{1}{7!}\\left(\\, \\frac{80}{9}\\,\\hat{W}'^3 -\\, \\frac{44}{9}\\,\\hat{W}^3+\\,3\\,\\hat{\\Phi}_5\\right)\\,t^3\\,+\\mathcal{O}(t^4)\\,\\right\\}\\end{aligned}$$
\\
#### **7D PE/E**
$$\\begin{aligned}
\\mbox{tr}\\,e^{\\{\\hat{\\nabla}^{2}\\}t}\\bigg{|}_{_{PE}}\\,=&\\, \\frac{1}{(4\\pi t)^{7/2}}\\left\\{ \\,1\\,-\\,7\\,t\\,
+\\,\\frac{707}{30}\\,t^2\\,+\\,\\frac{1}{180}\\,\\hat{W}^2\\,t^2 \\right.
\\\\
\\nonumber
\\\\
\\nonumber &\\left. -\\,\\frac{501}{10}\\,t^3\\,-\\,\\frac{1}{7!}\\left(\\, \\frac{1916}{9}\\,\\hat{W}'^3 -\\, \\frac{503}{9}\\,\\hat{W}^3+\\,54\\,\\hat{\\Phi}_7\\right)\\,t^3\\,+\\mathcal{O}(t^4)\\,\\right\\}\\\\
\\nonumber
\\\\
\\nonumber
\\\\
\\nonumber
\\,=&\\, \\frac{1}{(4\\pi t)^{7/2}}\\left\\{ \\,1\\,-\\,7\\,t\\,
+\\,\\frac{707}{30}\\,t^2\\,+\\,\\frac{1}{180}\\,\\hat{W}^2\\,t^2 \\right.
\\\\
\\nonumber
\\\\
\\nonumber &\\left. -\\,\\frac{501}{10}\\,t^3\\,-\\,\\frac{1}{20}\\,\\hat{W}^2\\,t^3\\,+ \\frac{1}{7!}\\left(\\, \\frac{352}{9}\\,\\hat{W}'^3 -\\, \\frac{64}{9}\\,\\hat{W}^3+\\,9\\,\\hat{\\Phi}_7\\right)\\,t^3\\,+\\mathcal{O}(t^4)\\,\\right\\}\\\\
\\nonumber
\\\\
\\nonumber
\\\\
\\nonumber
\\,=&\\, \\frac{e^{-9t}}{(4\\pi t)^{7/2}}\\left\\{ \\,1\\,+\\,2\\,t\\,+\\,\\frac{16}{15}\\,t^2\\, +\\,\\frac{1}{180} \\,\\hat{W}^2\\,t^2\\, \\right.
\\\\
\\nonumber
\\\\
\\nonumber &\\left. +
\\frac{1}{7!}\\left(\\, \\frac{352}{9}\\,\\hat{W}'^3 -\\, \\frac{64}{9}\\,\\hat{W}^3+\\,9\\,\\hat{\\Phi}_7\\right)\\,t^3\\,+\\mathcal{O}(t^4)\\,\\right\\}\\end{aligned}$$
\\
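As an independent cross-check of the rearrangement above, the series multiplication can be verified symbolically. The following is a minimal sketch (not part of the original derivation) in Python/sympy for the 7D case, treating $\\hat{W}'^3$, $\\hat{W}^3$ and $\\hat{\\Phi}_7$ as commuting symbols and using the Weyl-basis rewriting $\\hat{W}^2=\\hat{W}'^3-\\frac{1}{4}\\hat{W}^3+\\frac{1}{4}\\hat{\\Phi}_7$ quoted in the main text; the analogous 5D check with the $e^{-4t}$ factor goes through in the same way.
import sympy as sp
t = sp.symbols('t')
W2, Wp3, W3, Phi7 = sp.symbols('W2 Wp3 W3 Phi7')  # \hat{W}^2 and the three cubic Weyl invariants
# unfactorized 7D bracket (first line of the 7D PE/E expansion above); 1/7! = 1/5040
b_unfact = (1 - 7*t + sp.Rational(707, 30)*t**2 + sp.Rational(1, 180)*W2*t**2
            - sp.Rational(501, 10)*t**3
            - sp.Rational(1, 5040)*(sp.Rational(1916, 9)*Wp3 - sp.Rational(503, 9)*W3 + 54*Phi7)*t**3)
# conjectured WKB-exact form: e^{-9t} times the shorter bracket (last line above)
b_fact = (1 + 2*t + sp.Rational(16, 15)*t**2 + sp.Rational(1, 180)*W2*t**2
          + sp.Rational(1, 5040)*(sp.Rational(352, 9)*Wp3 - sp.Rational(64, 9)*W3 + 9*Phi7)*t**3)
# Weyl-basis rewriting of \hat{W}^2 on the Poincare-Einstein metric (modulo a total derivative)
W2_rule = {W2: Wp3 - sp.Rational(1, 4)*W3 + sp.Rational(1, 4)*Phi7}
diff = sp.series(sp.exp(-9*t)*b_fact - b_unfact, t, 0, 4).removeO()
print(sp.expand(diff.subs(W2_rule)))  # prints 0 if the two expansions agree through t^3
\\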
\\[2\\][\\#2]{}
[10]{}
C. R. Graham, R. Jenne, L. J. Mason and G. A. J. Sparling, [*Conformally invariant powers of the Laplacian, I: Existence*]{}, J. Lond. Math. Soc. [**46**]{}(1992), 557.
C. Fefferman and C. R. Graham, [*Conformal invariants*]{}, in [*The Mathematical Heritage of Élie Cartan (Lyon, 1984)*]{}, Astérisque, 1985, Numero Hors Serie, 95-116.
A. Juhl, “Explicit formulas for GJMS-operators and Q-curvatures,” Geom. Funct. Analysis [**23**]{} (2013) No. 4, 1278-1370 \\[arXiv:1108.0273\\[math.DG\\]\\].
C. Fefferman and C. R. Graham, “Juhl’s formulae for GJMS operators and Q-curvatures,” J. Amer. Math. Soc. [**26**]{} (2013), 1191-1207 \\[arXiv:1203.0360\\[math.DG\\]\\].
T. Branson, “The Functional Determinant,” Global Analysis Research Center Lecture Note Series, Number 4, Seoul National University (1993).
T. Branson, “Sharp inequalities, the functional determinant, and the complementary series,” Trans. Amer. Math. Soc. [**347**]{} (1995) 3671.
J. M. Maldacena, “The large N limit of superconformal field theories and supergravity,” Adv. Theor. Math. Phys. [**2**]{}, 231 (1998) \\[Int. J. Theor. Phys. [**38**]{}, 1113 (1999)\\] \\[arXiv:hep-th/9711200\\];
S. S. Gubser, I. R. Klebanov and A. M. Polyakov, “Gauge theory correlators from non-critical string theory,” Phys. Lett. B [**428**]{}, 105 (1998) \\[arXiv:hep-th/9802109\\];
E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys. [**2**]{} (1998) 253 \\[arXiv:hep-th/9802150\\].
C. R. Graham and M. Zworski, “Scattering matrix in conformal geometry,” Invent. Math. [**152**]{} (2003) 89 \\[arXiv:math-DG/0109089\\].
M. T. Anderson, “$L^2$ curvature and volume renormalization of AHE metrics on 4-manifolds,” Math. Res. Lett. [**8**]{} (2001) no. 1-2, 171-188.
P. Albin, “Renormalizing Curvature Integrals on Poincare-Einstein Manifolds,” Adv. Math. [**221**]{} (2009) no.1, 140 \\[math/0504161 \\[math.DG\\]\\].
A. Chang, J. Qing and P. Yang, “On the renormalized volumes for conformally compact Einstein manifolds,” J. Math. Sci. [**149**]{} (2008) 1755 \\[math/0512376 \\[math.DG\\]\\].
M. Henningson and K. Skenderis, “The holographic Weyl anomaly,” JHEP [**9807**]{} (1998) 023 \\[arXiv:hep-th/9806087\\].
M. Henningson and K. Skenderis, “Holography and the Weyl anomaly,” Fortsch. Phys. [**48**]{} (2000) 125 \\[arXiv:hep-th/9812032\\].
S. Deser and A. Schwimmer, “Geometric classification of conformal anomalies in arbitrary dimensions,” Phys. Lett. B [**309**]{}, 279 (1993) \\[hep-th/9302047\\].
C. R. Graham, “Volume and area renormalizations for conformally compact Einstein metrics,” Rend. Circ. Mat. Palermo (2) Suppl. No. 63 (2000) 31 \\[arXiv:math.DG/9909042\\].
T. Branson and B. Oersted, “Explicit functional determinants in four dimensions,” Proc. Amer. Math. Soc. [**113**]{} (1991) 669.
A. M. Polyakov, “Quantum Geometry of Bosonic Strings,” Phys. Lett. B [**103**]{} (1981) 207 \\[Phys. Lett. [**103B**]{} (1981) 207\\].
S. S. Gubser and I. Mitra, “Double trace operators and one loop vacuum energy in AdS / CFT,” Phys. Rev. D [**67**]{} (2003) 064018 \\[hep-th/0210093\\].
S. S. Gubser and I. R. Klebanov, “A Universal result on central charges in the presence of double trace deformations,” Nucl. Phys. B [**656**]{} (2003) 23 \\[hep-th/0212138\\].
T. Hartman and L. Rastelli, “Double-trace deformations, mixed boundary conditions and functional determinants in AdS/CFT,” JHEP [**0801**]{} (2008) 019 \\[hep-th/0602106\\].
D. E. Diaz and H. Dorn, “Partition functions and double-trace deformations in AdS/CFT,” JHEP [**0705**]{} (2007) 046 \\[hep-th/0702163 \\[HEP-TH\\]\\].
D. E. Diaz, “Holographic formula for the determinant of the scattering operator in thermal AdS,” J. Phys. A [**42**]{} (2009) 365401 \\[arXiv:0812.2158 \\[hep-th\\]\\].
R. Aros and D. E. Diaz, “Functional determinants, generalized BTZ geometries and Selberg zeta function,” J. Phys. A [**43**]{} (2010) 205402 \\[arXiv:0910.0029 \\[gr-qc\\]\\].
R. Aros and D. E. Diaz, “Determinant and Weyl anomaly of Dirac operator: a holographic derivation,” J. Phys. A [**45**]{} (2012) 125401 \\[arXiv:1111.1463 \\[math-ph\\]\\].
J. S. Dowker, “Spherical Dirac GJMS operator determinants,” J. Phys. A [**48**]{} (2015) no.2, 025401 \\[arXiv:1310.5563 \\[hep-th\\]\\].
S. Giombi, I. R. Klebanov, S. S. Pufu, B. R. Safdi and G. Tarnopolsky, “AdS Description of Induced Higher-Spin Gauge Theory,” JHEP [**1310**]{} (2013) 016 \\[arXiv:1306.5242 \\[hep-th\\]\\].
S. Giombi and I. R. Klebanov, “One Loop Tests of Higher Spin AdS/CFT,” JHEP [**1312**]{} (2013) 068 \\[arXiv:1308.2337 \\[hep-th\\]\\].
A. A. Tseytlin, “On partition function and Weyl anomaly of conformal higher spin fields,” Nucl. Phys. B [**877**]{}, 598 (2013) \\[arXiv:1309.0785 \\[hep-th\\]\\].
A. A. Tseytlin, “Weyl anomaly of conformal higher spins on six-sphere,” Nucl. Phys. B [**877**]{}, 632 (2013) \\[arXiv:1310.1795 \\[hep-th\\]\\].
S. Giombi, I. R. Klebanov and B. R. Safdi, “Higher Spin AdS$_{d+1}$/CFT$_d$ at One Loop,” Phys. Rev. D [**89**]{} (2014) no.8, 084004 \\[arXiv:1401.0825 \\[hep-th\\]\\].
M. Beccaria, X. Bekaert and A. A. Tseytlin, “Partition function of free conformal higher spin theory,” JHEP [**1408**]{} (2014) 113 \\[arXiv:1406.3542 \\[hep-th\\]\\].
R. Aros, F. Bugini and D. E. Diaz, “On Renyi entropy for free conformal fields: holographic and q-analog recipes,” J. Phys. A [**48**]{} (2015) 105401 \\[arXiv:1408.1931 \\[hep-th\\]\\].
M. Beccaria and A. A. Tseytlin, “Higher spins in AdS$_{5}$ at one loop: vacuum energy, boundary conformal anomalies and AdS/CFT,” JHEP [**1411**]{} (2014) 114 \\[arXiv:1410.3273 \\[hep-th\\]\\].
M. Beccaria, G. Macorini and A. A. Tseytlin, “Supergravity one-loop corrections on AdS$_7$ and AdS$_3$, higher spins and AdS/CFT,” Nucl. Phys. B [**892**]{} (2015) 211 \\[arXiv:1412.0489 \\[hep-th\\]\\].
M. Beccaria and A. A. Tseytlin, “On higher spin partition functions,” J. Phys. A [**48**]{} (2015) no.27, 275401 \\[arXiv:1503.08143 \\[hep-th\\]\\].
M. Beccaria and A. A. Tseytlin, “Conformal a-anomaly of some non-unitary 6d superconformal theories,” JHEP [**1509**]{} (2015) 017 \\[arXiv:1506.08727 \\[hep-th\\]\\].
A. O. Barvinsky and D. V. Nesterov, “Quantum effective action in spacetimes with branes and boundaries,” Phys. Rev. D [**73**]{} (2006) 066012 \\[hep-th/0512291\\].
A. O. Barvinsky, “Holography beyond conformal invariance and AdS isometry?,” J. Exp. Theor. Phys. [**120**]{} (2015) no.3, 449 \\[arXiv:1410.6316 \\[hep-th\\]\\].
A. O. Barvinsky, “Extended Holography: Double-Trace Deformation and Brane-Induced Gravity Models,” Russ. Phys. J. [**59**]{} (2017) no.11, 1788.
C. Guillarmou, “Generalized Krein formula, determinants and Selberg zeta function in even dimension,” American Journal of Math. [**131**]{} (2009) no.5 \\[arXiv:math.SP/0512173\\].
D. E. Diaz, “Polyakov formulas for GJMS operators from AdS/CFT,” JHEP [**0807**]{} (2008) 103 \\[arXiv:0803.0571 \\[hep-th\\]\\].
J. S. Dowker, “Determinants and conformal anomalies of GJMS operators on spheres,” J. Phys. A [**44**]{} (2011) 115402 \\[arXiv:1010.0566 \\[hep-th\\]\\].
C. Imbimbo, A. Schwimmer, S. Theisen and S. Yankielowicz, “Diffeomorphisms and holographic anomalies,” Class. Quant. Grav. [**17**]{} (2000) 1129 \\[arXiv:hep-th/9910267\\].
R. Camporesi, “Harmonic analysis and propagators on homogeneous spaces,” Phys. Rept. [**196**]{} (1990) 1.
A. Grigor’yan and M. Noguchi, “The heat kernel on hyperbolic space,” Bulletin of LMS [**30**]{} (1998) 643-650.
R. Gopakumar, R. K. Gupta and S. Lal, “The Heat Kernel on $AdS$,” JHEP [**1111**]{} (2011) 010 \\[arXiv:1103.3627 \\[hep-th\\]\\].
A. R. Gover, “Laplacian Operators and Q-curvature on Conformally Einstein Manifolds,” Math. Ann. [**336**]{} (2006) 311, https://doi.org/10.1007/s00208-006-0004-z \\[arXiv:math/0506037 \\[math.DG\\]\\].
F. Bugini and D. E. Diaz, “Simple recipe for holographic Weyl anomaly,” JHEP [**1704**]{} (2017) 122 \\[arXiv:1612.00351 \\[hep-th\\]\\].
M. Beccaria and A. A. Tseytlin, “C$_{T}$ for higher derivative conformal fields and anomalies of (1, 0) superconformal 6d theories,” JHEP [**1706**]{} (2017) 002 \\[arXiv:1705.00305 \\[hep-th\\]\\].
S. Acevedo, R. Aros, F. Bugini and D. E. Díaz, “On the Weyl anomaly of 4D Conformal Higher Spins: a holographic approach,” JHEP [**1711**]{} (2017) 082 \\[arXiv:1710.03779 \\[hep-th\\]\\].
F. Bastianelli, S. Frolov and A. A. Tseytlin, “Conformal anomaly of (2,0) tensor multiplet in six dimensions and AdS/CFT correspondence,” JHEP [**0002**]{} (2000) 013 \\[arXiv:hep-th/0001041\\].
A. A. Tseytlin, “On partition function and Weyl anomaly of conformal higher spin fields,” Nucl. Phys. B [**877**]{}, 598 (2013) \\[arXiv:1309.0785 \\[hep-th\\]\\].
H. Osborn and A. Stergiou, “Structures on the Conformal Manifold in Six Dimensional Theories,” JHEP [**1504**]{} (2015) 157 \\[arXiv:1501.01308 \\[hep-th\\]\\].
P. Mansfield and D. Nolland, “One loop conformal anomalies from AdS / CFT in the Schrodinger representation,” JHEP [**9907**]{} (1999) 028 \\[hep-th/9906054\\].
P. Mansfield, D. Nolland and T. Ueno, “The Boundary Weyl anomaly in the N=4 SYM / type IIB supergravity correspondence,” JHEP [**0401**]{} (2004) 013 \\[hep-th/0311021\\].
P. Mansfield, D. Nolland and T. Ueno, “Order 1 / N\\*\\*3 corrections to the conformal anomaly of the (2,0) theory in six-dimensions,” Phys. Lett. B [**566**]{} (2003) 157 \\[hep-th/0305015\\].
J. T. Liu and B. McPeak, “One-Loop Holographic Weyl Anomaly in Six Dimensions,” JHEP [**1801**]{} (2018) 149 \\[arXiv:1709.02819 \\[hep-th\\]\\].
M. Kulaxizi and A. Parnachev, “Supersymmetry Constraints in Holographic Gravities,” Phys. Rev. D [**82**]{} (2010) 066001 \\[arXiv:0912.4244 \\[hep-th\\]\\].
R. X. Miao, “A Note on Holographic Weyl Anomaly and Entanglement Entropy,” Class. Quant. Grav. [**31**]{} (2014) 065009 \\[arXiv:1309.0211 \\[hep-th\\]\\].
M. Beccaria and A. A. Tseytlin, “Conformal anomaly c-coefficients of superconformal 6d theories,” JHEP [**1601**]{} (2016) 001 \\[arXiv:1510.02685 \\[hep-th\\]\\].
S. Giombi, C. Sleight and M. Taronna, “Spinning AdS Loop Diagrams: Two Point Functions,” JHEP [**1806**]{} (2018) 030 \\[arXiv:1708.08404 \\[hep-th\\]\\].
H. Osborn and A. Stergiou, “Structures on the Conformal Manifold in Six Dimensional Theories,” JHEP [**1504**]{} (2015) 157 \\[arXiv:1501.01308 \\[hep-th\\]\\].
S. Rajagopal, A. Stergiou and Y. Zhu, “Holographic Trace Anomaly and Local Renormalization Group,” JHEP [**1511**]{} (2015) 216 \\[arXiv:1508.01210 \\[hep-th\\]\\].
[^1]: For recent results on recursive relations and explicit construction of GJMS operators and the associated Q-curvatures, we refer to the works [@Juhl11; @FG13] and references therein.
[^2]: The AdS/CFT correspondence certainly predicted the matching of the volume anomaly with the combined conformal anomalies for the free scalars, spinors, and 1-form that enter the four-dimensional vector multiplet of $\\mathcal{N}=4$ $SU(N)$ supersymmetric Yang-Mills theory at leading large $N$, as confirmed in [@HS98; @HS98-1]. But this connection is somewhat indirect, as it relies on non-renormalization theorems of the supersymmetric boundary CFT. In fact, in six dimensions the matching for the free superconformal $\\mathcal{N}=(2,0)$ tensor multiplet is only achieved for the type-B content [@Deser:1993yx] of the Q-curvature; the type-A central charge $a$ is not protected by supersymmetry, so the combined anomalies do not add up to reproduce the Q-curvature.
[^3]: Further extensions of the holographic formula to fields other than the scalar and to quotients of AdS have been studied ever since [@Diaz:2008iv]-[@Barvinsky:2017qvf].
[^4]: Quotients of AdS, like thermal AdS for example, allow explicit results in terms of Patterson-Selberg zeta functions. In odd dimensions, these examples were also reported in the conformal geometry literature [@Guillarmou05].
[^5]: This holographically derived formula for the central charge $a$ was verified later on by using the more standard zeta function regularization combined with Branson’s factorization of GJMS operators on the round spheres [@Dowker:2010qy].
[^6]: This is a slightly more efficient way than the usual trick (see, e.g. [@Tseytlin:2013jya]) that restricts first to the round sphere for computing $a$ and then to a Ricci-flat manifold for computing $c-a$.
[^7]: For notation and conventions we refer to [@Bugini:2016nvn].
[^8]: From now on we denote bulk quantities with a hat to distinguish from the corresponding boundary ones.
[^9]: The merit of our special basis of curvature invariants is to unveil the direct relation between bulk and boundary Weyl invariants, but of course the contribution of each term of the A-basis has been worked out by other routes in the literature, see e.g. [@Kulaxizi:2009pz; @Miao:2013nfa; @Beccaria:2015ypa] and references therein.
[^10]: The coefficient of the two-point function of the stress tensor $c_T$ in 4D is proportional to the $c$ central charge and in 6D, to $c_3$. In 6D one would need additional (three-point) correlators to disentangle the remaining ($c_1$ and $c_2$) type-B Weyl anomaly coefficients.
|
Lawns
Sometimes laying a new lawn is less work than trying to rejuvenate a tired, heavily worn old lawn. It’s also a good opportunity to add more drainage and enrich the soil beneath. We use only the best quality turf, specialist pre-turfing mixes and slow-release fertilisers. We’re also experienced in installing artificial lawns (see bottom of page) which can sometimes be the best option in either a small, shady space, where drainage is poor, or where a very low level of maintenance is desired. |
Chanderi
Chanderi is a town of historical importance in Ashoknagar District of the state of Madhya Pradesh in India. It is situated at a distance of 127 km from Shivpuri, 37 km from Lalitpur, 55 km from Ashok Nagar and about 45 km from Isagarh. It is situated southwest of the Betwa River, surrounded by hills, lakes and forests, and is spotted with several monuments of the Bundela Rajputs and Malwa sultans. It is famous for ancient Jain temples.
Its population in 2011 was 33,081.
History
Chanderi is a block in Ashok Nagar District. Chanderi is located strategically on the borders of Malwa and Bundelkhand. History of Chanderi goes back to the 11th century, when it was dominated by the trade routes of Central India and was proximate to the arterial route to the ancient ports of Gujarat as well as to Malwa, Mewar, Central India and the Deccan. Consequently, Chanderi became an important military outpost. The town also finds mention in Mahabharata. Shishupal was the king of Chanderi during the Mahabharata period.
Chanderi is mentioned by the Persian scholar Alberuni in 1030.
Ghiyas ud din Balban captured the city in 1251 for Nasiruddin Mahmud, Sultan of Delhi. Sultan Mahmud I Khilji of Malwa captured the city in 1438 after a siege of several months. In 1520 Rana Sanga of Mewar captured the city, and gave it to Medini Rai, a rebellious minister of Sultan Mahmud II of Malwa. In the Battle of Chanderi, the Mughal Emperor Babur captured the city from Medini Rai and witnessed the macabre Rajput rite of jauhar, in which, faced with certain defeat and in an attempt to escape dishonour at the hands of the enemy, women with children in their arms jumped into a fire pit made for this specific purpose to commit suicide, against the background of Vedic hymns recited by the priests. Jauhar was performed during the night, and in the morning the men would rub the ashes of their dead womenfolk on their foreheads, don a saffron garment known as kesariya, and chew tulsi leaves (in India tulsi leaves are placed in the mouth of a dead body), symbolizing their awareness of impending death and their resolve to fight and die with honour. This method of fighting and dying to retain honour was called "SAKA".
In 1540 it was captured by Sher Shah Suri, and added to the governorship of Shujaat Khan.
The Mughal Emperor Akbar made the city a sarkar in the subah of Malwa. According to the Ain-e-Akbari, the 16th-century record of Akbar's administration, Chanderi had 14,000 stone houses and boasted 384 markets, 360 spacious caravan sarais (resting places) and 12,000 mosques.
The Bundela Rajputs captured the city in 1586, and it was held by Ram Sab, a son of Raja Madhukar of Orchha. In 1680 Devi Singh Bundela was made governor of the city, and Chanderi remained in the hands of his family until it was annexed in 1811 by Jean Baptiste Filose for the Maratha ruler Daulat Rao Sindhia of Gwalior.
The city was transferred to the British in 1844.
The British lost control of the city during the Revolt of 1857, and the city was recaptured by Hugh Rose on 14 March 1858. Richard Harte Keatinge led the assault, for which he was awarded the Victoria Cross. The city was transferred back to the Sindhias of Gwalior in 1861, and became part of Isagarh District of Gwalior state.
After India's independence in 1947, Gwalior became part of the new state of Madhya Bharat, which was merged into Madhya Pradesh on 1 November 1956.
Geography
Chanderi is located at . It has an average elevation of 456 metres (1496 feet).
Demographics
As of the India census, Chanderi had a population of 28,313. Males constitute 52% of the population and females 48%.
Places of interest
Shri Choubisi Bada Mandir
Shri Parasnath Digambar Jain Purana Mandir Jain Temple
Shri Khandargiri Jain temple
Shri Thobon Ji Jain temple
Shri Chandraprabha Digambar Jain temple
Bawari masjid
Jami Masjid, Chanderi
Kati Ghati
Battesi Wabri
Koshiq Mahal
Shahzadi ka Rauza
Jageswari Devi Temple
Chanderi Museum
Khandar Giri Atishay Khetra
Malan Kho
Baiju Bawra's Samadhi (Cenotaph)
Janki Nath Temple
Access
There is a good roadway network in Chanderi. The town lies on State Highway 20, with connections to Ashoknagar, Isagarh, Lalitpur, etc.
There is no railway line in Chanderi or nearby. A Pipraigaon-Chanderi-Lalitpur line of Northern Railways was proposed in 2014 and is expected to be taken up soon.
One can easily visit Chanderi via
Lalitpur, 40 km from Chanderi: well connected by road, Lalitpur is situated on the Bhopal-Jhansi railway route and is also a stop for a number of important trains.
Mungaoli, 38 km from Chanderi: Mungaoli is well connected by road and is situated on the Bina-Kota railway route; express trains from Bina Etawa and Kota halt at Mungaoli.
Ashok Nagar, 65 km from Chanderi: situated on the Bina-Kota railway route. Many passenger trains and a few express trains, such as the Ahmedabad-Varanasi Express (Sabarmati Exp), Jabalpur-Jaipur-Ajmer Express (Dayodaya Superfast Exp), Okha-Gorakhpur Express, Santragachi-Ajmer Express, Ujjain-Dehradun Express (Ujjaini Exp), Puri-Bikaner Express, Durg-Jaipur Express, Tambaram (Chennai) - Bhagat Ki Kothi (Jodhpur) Express, Durg-Ajmer Express, Surat-Muzaffarpur Express, Ajmer-Kolkata Express, Bhopal-Gwalior Express and Indore-Jabalpur Express, halt at the Ashok Nagar station.
Jainism at Chanderi
The Chanderi area has been a major center of Jain culture. It was a major center of the Parwar Jain community. There are a number of Jain places nearby- Gurilagiri (7 km), Aamanachar (29 km), Bithala (19 km), Bhamon (16 km), Khandargiri (2 km), Thuvanji (22 km) and Bhiyadant (14 km), and Deogarh, Uttar Pradesh (20 km, across the border).
At a distance of 19 km from the present Chanderi town is situated Buddhi (old) Chanderi, on the banks of the Urvashi river. It is believed that the Chaidnagar mentioned in the Puranas is the same as Buddhi Chanderi. There is a myth that when Raja Nala left Damayanti asleep in the forests of Narwar, she moved through dense forests and reached Chaidnagar, protecting herself from wild animals. The route through the forests from Narwar to Chanderi is very short. A number of 9th- and 10th-century Jain temples in Buddhi Chanderi attract thousands of Jain pilgrims from all over the country.
The Jain Bhattarakas of Mula Sangh, Balatkara Gana had a center at Chanderi that flourished for several centuries. The lineage, as constructed by Pt. Phulachandra Shastri, is as follows:
Devendrakirti (see Balatkara Gana), who awarded Singhai title in 1436 CE (see Parwar (Jain))
Tribhuvanakirti (anointed in Vikram Samvat 1522),
Sasasrakirti
Padmanandi
Yashahkirti
Lalitkirti
Dharmakirti
Padmakirti (died Vikram Samvat 1717)
Sakalakirti
Surendrakirti (pratishtha in Vikram Samvat 1746)
A branch of this lineage continued at Sironj.
Jagatkirti (pupil of Dharmakirti above)
Tribhuvanakirti
Narendrakirti
Unknown
Rajkirti
Devendrakirti (pratishtha in samvat 1871)
Jain Temple
List of Jain temples at Chanderi:
Shri Choubeesee Bara Mandir: This temple has two parts, with the front part known as the Bara Mandir and the back part called the Choubeesee Mandir. As suggested by an inscription, the temple was built around the year 1293 (V.S. 1350) and was renovated between the 13th and 18th centuries. It houses 24 idols, one for each of the 24 Tirthankars, each carved from stone of the colour traditionally associated with that Tirthankar. All the idols have the same dimensions, which is very difficult to achieve in practice.
Shri Parasnath Digamber Jain Purana Mandir Jain temple: It is one of the oldest Jain temples in Chanderi, containing idols of Shri Parasnath ji from the 7th century.
Shri Khandargiri Jain temple: It is one of the most famous religious sites in Chanderi, known for its 45-foot carved idol of Rishabhnatha. Inscriptions suggest that this statue is over 700 years old. Six caves have been cut out of the hillside; inside are a number of religious carvings of Jain saints and decorations carved into the existing hillside. The oldest is cave 6, which dates back to 1236.
Shri Thobonji Jain temple: This temple belongs to the 9th century. Its moolnayak (principal deity) is a colossal light-blue idol of Adinath, 36 feet 8 inches in height. The other colossal idols in this temple are of Bhagwan Parshvanatha, 13 feet 4 inches in height, and Bhagwan Parshwanath, 12 feet 6 inches in height.
Shri Chandraprabha Digambar Jain temple: This temple is dedicated to Chandraprabha, the 8th Tirthankar of Jainism. Its oldest inscription dates back to 967 AD.
In popular culture
Stree – The 2018 horror-comedy film Stree, about a witch who abducts men at night, is set and shot in the town of Chanderi.
Sui Dhaaga – Some parts of this film, starring Anushka Sharma and Varun Dhawan, were shot in Chanderi.
Gudiya Humari Sabhi Pe Bhari – This serial was shot in Chanderi, including scenes at the Chanderi bus stand (named Lalitpur Bus Adda in the show) with Kila Kothi in the background.
The episodes of 1, 2 and 3 January 2020 featured locations around Chanderi, filmed at Chanderi's palace, Jageshwari mandir, Laxman mandir, Panbeshwar talab and Kuku Taal (Kuku talaiya).
See also
Chanderi sari
References
Sources
Hunter, William Wilson, James Sutherland Cotton, Richard Burn, William Stevenson Meyer, eds. (1909).
Imperial Gazetteer of India, vol. 9. Oxford, Clarendon Press.
External links
Shri Digamber Jain Atishaya Kshetra Choubeesee Bara Mandir, Chanderi
Shri Digamber Jain Atishaya Kshetra Khandargiri
Shri Digamber Jain Atishaya Kshetra Thuvonji
Chanderi Geographical Index Website
Film on the master weavers of Chanderi
Chanderi: Travel Guide. Goodearth Publications, New Delhi, 2006.
Category:Cities and towns in Ashoknagar district
Category:Jain rock-cut architecture
Category:9th-century Jain temples
Category:13th-century Jain temples |
Q:
temperature stability in current mirrors
I have read here that temperature stability is one of the performance issues in current mirrors. Can anyone explain what temperature stability means in current mirrors? Thanks.
A:
In an NPN current mirror:
\\$I_{in}\\$ is converted into a voltage by \\$Q_1\\$ so we get \\$V_{BE,Q1}\\$.
That \\$V_{BE,Q1}\\$ is applied to \\$Q_2\\$ which converts that voltage back into a current \\$I_O\\$.
This all works by the fact that the voltage <=> current relations
(\\$I_C(V_{BE})\\$ and \\$V_{BE}(I_C)\\$)
of both transistors are absolutely identical.
Unfortunately this relation is quite temperature dependent.
That means that if \\$Q_1\\$ and \\$Q_2\\$ do not have the same temperature, their voltage <=> current relations aren't the same, and the currents \\$I_{in}\\$ and \\$I_O\\$ will not be identical in value.
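To get a feel for the size of the effect, here is a rough back-of-the-envelope sketch in Python (an illustration, not a device-accurate model). It assumes the common rule of thumb that \\$V_{BE}\\$ at a fixed current shifts by roughly 2 mV per °C, so a temperature difference between the two transistors acts like a \\$V_{BE}\\$ offset that the exponential \\$I_C \\propto e^{V_{BE}/V_T}\\$ then amplifies:
import math
def mirror_ratio(delta_T_C, T_K=300.0):
    """Rough estimate of I_O/I_in for a temperature difference delta_T_C (degC)
    between the two transistors.  Assumptions: V_BE shifts about -2 mV/degC at
    fixed current, and I_C follows the ideal exponential law exp(V_BE/V_T)."""
    V_T = 8.617e-5 * T_K          # thermal voltage kT/q, about 26 mV at 300 K
    dVbe = 2e-3 * delta_T_C       # effective V_BE offset seen by the hotter device
    return math.exp(dVbe / V_T)
for dT in (0.5, 1, 2, 5):
    print(f"dT = {dT} degC -> I_O/I_in = {mirror_ratio(dT):.2f}")
Even a 1 °C mismatch already gives roughly an 8 % error in the mirrored current, which is why the thermal-coupling measures below matter.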
Possible solutions are:
couple the transistors such that they will be at very similar temperatures.
Use a component where two identical transistors share the same package
use this on an IC where all transistors share the same silicon die
use emitter degeneration to make the voltage <=> current relation less dependent on the transistor:
|
Meet the ‘Meme Team’ Behind Fuckjerry’s Instagram
With over 11 million followers, there's a lot more that goes into the hit @Fuckjerry Instagram than some guy sitting on his smartphone. As part of Jerry Media, a company that harnesses the millennial mindset to create viral advertising and branding, @Fuckjerry is managed not only by its founder, Elliot Tebele, 26, but also by a team of dedicated designers, content creators, and, well, millennials. We caught up with them to unmask the men and women behind the memes.
The Chelsea galleries. I can go anytime and get a healthy dose of world class art for free, whether it’s for half an hour or the whole day. If I don’t like a gallery, I just hop next door. And the best part? I don’t tell anybody I’m going. It’s my zen alone time.
What’s the top thing people do that bothers you most?
When people I care about and love dearly send me memes I’ve already seen. ¯\\_(ツ)_/¯
If you could choose one place in the world to visit tomorrow, where would it be and why?
Bangkok. It’s a perfect blend of east meets west. It’s everything I love about living in a metropolitan city with the option to detach and be in a completely different culture. Also… the food.
What’s one thing you’ve always wanted to do but never had the chance to?
Get brunch on the moon or take an Uber to mars for happy hour. Meet some friends on the sun for hangsies and then hop over to saturn and go for a run on its rings. Perhaps a quick dip in a black hole to feel refreshed after a long night out. |
Platelets, and the components which they secrete upon activation, play a major role in hemostasis, thrombosis and the development of atherosclerosis (see Petersdorf, R. G., et al., eds., Principles of Internal Medicine, 10th edition, 1983, McGraw-Hill, New York, pp. 292-294 and 1468). Megakaryocytes in the bone marrow form platelets by pinching off pieces of their cytoplasm.
Upon injury to a tissue, blood platelets adhere to the exposed subendothelial tissue through adhesive platelet components. These components also promote platelet-platelet interactions and smooth muscle cell proliferation in response to platelet-derived growth factors. Platelets adhere, through their membrane protein GPIb, to von Willebrand factor (a component of clotting factor VIII) in the subendothelial matrix. This is followed by platelet clot formation by interactions between GPIIb/IIIa, as well as von Willebrand factor, platelet factor 4 and fibrinogen, which are secreted from the platelet alpha granules into the interstitial spaces of the clot. Thus, the efficacy of platelet participation in normal processes, as well as in atherosclerosis, is largely dependent upon adequate numbers of platelets and adequate concentrations of the participating components in the platelet membranes and granules.
Platelet production is controlled by sequential regulation of the component steps of megakaryocytopoiesis: 1) commitment of pluripotent stem cells to the megakaryocyte lineage, 2) proliferation of the committed stem cells, 3) polyploidization, 4) cytoplasmic maturation, and 5) platelet release. Greenberg-Sepersky, S. M., et al., Thrombo. Res. 24:299-306 (1981). However, the process of platelet production which occurs at the level of differentiation and maturation of the parent megakaryocytes in the bone marrow is poorly understood.
To date, several humoral factors have been postulated to regulate the steps of megakaryocytopoiesis in vivo and in vitro. In colony-forming assays, which measure the proliferation of committed stem cells, megakaryocyte colony-stimulating factor (Meg-CSF), megakaryocyte potentiator (MK-POT), interleukin-3 (IL-3), interleukin-1 (IL-1), erythropoietin (EPO), and granulocyte-macrophage colony-stimulating factor (GM-CSF), all increase the number and size of megakaryocyte colonies in vitro. Thrombocytopenic serum, a source of the uncharacterized factor "thrombopoietin," or the conditioned medium from bone marrow cultures and cultured human embryonic kidney cells increases the number, ploidy, and size of megakaryocytes in vitro, and the incorporation of radiolabelled precursors into the membrane and alpha granules of newly-released platelets in vivo.
A major limitation of most of these studies is the lack of a purified megakaryocytic cell system. All of the studies which examine megakaryocyte differentiation utilize bone marrow cell preparations. However, except for studies which utilize primary cultures of a single bone marrow cell type, interpretation of the results is complicated by the presence of non-megakaryocytic cells which may act as accessory cells.
The mechanisms controlling thrombopoiesis are not well understood due to the inability to isolate megakaryoblasts away from other bone marrow cells in the absence of accessory cells, and due to the unavailability of a cultured cell line which can serve as a model of the differentiating megakaryocyte. Megakaryocyte differentiation and maturation is characterized by increased polyploidization and enhanced expression of platelet membrane proteins such as GPIb, GPIIb/IIIa and platelet-specific alpha granule formation.
Cell lines which display the characteristics of megakaryocytic cells have been reported. However, these cell lines are limited in their ability to be used as models of megakaryocyte differentiation. For example, MEG-01 cells have been reported to be a megakaryoblastic cell line. However, MEG-01 cells contain the important platelet marker antigen, GPIb, only in the cytoplasm of a subpopulation of larger MEG-01 cells rather than uniformly expressing it on the surface of all the cells. Ogura, M., et al., Blood 66:1384-1392 (1985).
LAMA-84 cells are a megakaryocytic cell line which expresses the platelet marker proteins GPIIb/IIIa. However, LAMA-84 cells do not express the platelet marker protein GPIb. In addition, the LAMA cell line is not committed to the megakaryocytic lineage, but rather represents an earlier stage in differentiation as shown by the fact that they are a tripotent, megakaryocytic, erythroid, and granulocytic cell line. Seigneurin, D., et al., Exp. Hematol. 15:822-832 (1987).
The mutant human megakaryocytic cell line, the HEL cell, does not express the beta subunit for GPIb platelet marker protein and contains an abnormally glycosylated alpha subunit for GPIb. Kieffer, et al., J. Biol. Chem. 261:15854-15862 (1986); Martin et al., Science 233-1235 (1982); Tabilio, A., et al. EMBO J. 3:453-459 (1984).
A promyelocytic leukemic cell line, HL60, responds to inducers of platelet synthesis such as 12-O-tetradecanoylphorbol-13-acetate (TPA). However, HL60 cells respond to TPA by differentiating to either monocytes or granulocytes instead of inducing platelet production. Michalevicz, R., et al., Leuk. Res. 9:441-448 (1985).
Morgan, D. A., et al., J. Cell. Biol. 100:565-573 (1985) reported a series of human cell lines with properties of megakaryocytes which were isolated and cultured from peripheral blood. However, none of these cell lines are capable of differentiating to a cell with the characteristics of platelet late differentiation morphology, such as alpha granule formation.
The megakaryocytic cell line EST-IU expresses the platelet marker proteins GP IIb/IIIa on its membrane. Sledge, G. W.,et al., Cancer Res. 46:2155-2159 (1986). However, this cell line routinely dies after six months of continuous cell culture (30-35 cell divisions).
Thus, there remains a need for a purified megakaryocytic cell population, in which the culture conditions can be carefully manipulated and the results easily monitored, to study the process of megakaryocytopoiesis, to evaluate the effects of megakaryocytopoietic, hemopoietic and nonhemopoietic factors on the megakaryocyte system, to study platelet formation and release from the parent megakaryocyte (thrombopoiesis), to provide a source for the purification of megakaryocyte and platelet components, to identify new megakaryopoiesis factors from crude preparations and to serve as an assay system for the subsequent isolation and characterization of those new factors. |
Altitude and growth among the sherpas of the eastern Himalayas.
The results of the anthropometric survey of Sherpa children of both sexes (n = 478) from high- and low-altitude areas in the eastern Himalayas are presented. The study reveals that growth is both slower and more prolonged in the high-altitude Sherpas compared with growth at low altitude and that Sherpa children are the smallest of all the high-altitude populations considered here. Sexual dimorphism is not well defined during the earlier age periods. Our skinfold thickness data from the low-altitude Sherpas corroborate the centripetal distribution of fat found elsewhere. |
Q:
ASP.NET 5 (vNext) causing a 500 - Internal Server Error on Azure
We are working on a project with the new ASP.NET 5 (vNext), EF7 and AngularJS and plan to deploy the WebApp on Azure.
I've created a new Web Application on Azure and published our project via Visual Studio 2015.
After publishing I'll get a 500 - Internal Server Error when I try to test our application.
I've already set <customErrors mode="Off" /> in the web.config in wwwroot without success.
I've then logged in via FTP and the "DetailedErrors" also do not contain any useful information.
The eventlog.xml contains following exception:
<Events><Event><System><Provider Name="ASP.NET 4.0.30319.0"/> <EventID>1309</EventID><Level>2</Level><Task>0</Task> <Keywords>Keywords</Keywords><TimeCreated SystemTime="2015-07-28T10:54:43Z"/> <EventRecordID>280293125</EventRecordID><Channel>Application</Channel> <Computer>RD000D3A202052</Computer><Security/></System><EventData> <Data>3005</Data><Data>An unhandled exception has occurred.</Data> <Data>7/28/2015 10:54:43 AM</Data><Data>7/28/2015 10:54:43 AM</Data> <Data>9df086471c304ebfa4ddddf9ca2a2b92</Data><Data>1</Data><Data>1</Data><Data>0</Data><Data>/LM/W3SVC/2082809257/ROOT-1-130825544829782705</Data><Data></Data><Data>/</Data><Data>D:\\home\\site\\wwwroot\\</Data><Data>RD000D3A202052</Data><Data></Data><Data>2108</Data><Data>w3wp.exe</Data><Data>IIS APPPOOL\\appname</Data><Data>InvalidOperationException</Data><Data>Couldn't determine an appropriate version of runtime to run. See http://go.microsoft.com/fwlink/?LinkId=517742 for more information.
at AspNet.Loader.RuntimeLocator.LocateRuntime(MapPathHelper mapPathHelper, Boolean& isCoreClr, String& relativeAppBasePath)
at AspNet.Loader.Bootstrapper.LoadApplication(String appId, String appConfigPath, IProcessHostSupportFunctions supportFunctions, LoadApplicationData* pLoadAppData, Int32 loadAppDataSize)
at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode)
at System.Web.Hosting.ProcessHost.System.Web.Hosting.IProcessHostLite.ReportCustomLoaderError(String appId, Int32 hr, AppDomain newlyCreatedAppDomain)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters) at System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags, PolicyLevel policyLevel, Exception appDomainCreationException)
It says that it couldn't determine an appropriate version of the runtime.
In the project properties the solution DNX SDK version is set to:
1.0.0-beta4
.NET Framework
x86
The frameworks set in project.json:
"frameworks": {
"dnx451": { },
"dnxcore50": { }
},
How can I get more information on the error?
A:
I was able to re-create your problem. Turns out that the publish wizard selects a runtime by default that doesn't match the selected DNX runtime of the app. You can fix this by going into the publish settings and selecting the correct Target DNX Version in the dropdown. In your case: the beta4 core-clr version.
After file > new project - my project.json looked like:
"frameworks": {
"dnx451": { },
"dnxcore50": { }
},
NOTE - it's using core - the core clr - not the full CLR. While setting up the publishing for this app, it jumped over the default settings here:
NOTE - by default it selected the dnx-clr, not core-clr. Beta5 version is correct though.
Publishing resulted in an Internal Server error:
I found the error description here:
NOTE: this requires the new Azure 2.7 SDK to be installed.
The interesting part of this message is:
<Data>DirectoryNotFoundException</Data>
<Data>Unable to find the runtime directory 'D:\\home\\site\\wwwroot\\..\\approot\\runtimes\\dnx-clr-win-x86.1.0.0-beta5-12103'.
Possible causes:
1. The runtime was not packaged with the application.
2. The packaged runtime architecture is different from the application pool architecture.
... </Data>
So I switched to the correct Target DNX version and it worked:
NOTE - the coreclr version.
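As a side note (not part of the original answer): with the beta-era DNX tooling, the runtime version the app expects can also be pinned in a global.json at the solution root, which gives one more place to confirm that what you publish matches what the app was built against. A minimal sketch for the beta4 SDK used in the question; the "projects"/"sdk" keys are the commonly used ones, and anything beyond "version" varied between beta releases, so verify against your own tooling:
{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-beta4"
  }
}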
|
While Toyota controls 45% of the domestic auto market in its home country, it hasn't managed to make sashimi out of BMW and Mercedes-Benz when it comes to the nation's luxury market. In an effort to change public perception, the automaker has sent its Lexus sales force to charm school, teaching them how to smile,… |
Samurai Headhunters
Since the eighth century, the Samurai have held a special place in Japanese history and culture. But over the years, legend has obscured the truth about how these elite knights actually lived, loved, fought, and died. Using rare 16th century war documents, we tell the story of a Samurai coming of age during that time. Witness his rise from peasant boy to foot soldier to ruthless warrior, all brought to life through stunning re-enactments and expert testimony from Japanese and martial arts historians. |
Q:
Cypher Truncate Chain
I would like to cut the last node from a chain p. I get p by a query like this MATCH p=(A)-[*0..]->(B)-[*1..]->(C). I need (C) to identify the correct chain but I do not want (C) in the chain. Can I somehow remove it from the selection p? (I do not want to remove it from the graph, just from the selection p)
A:
If your original query looks like this:
MATCH p=(a:A)-[*0..]->(b:B)-[*]->(c:C)
RETURN p;
You can do this, instead, to get what you want:
MATCH p=(A)-[*0..]->(B)-[*]->(x)
WHERE (x)-->(c:C)
RETURN p;
|
Contact Us
Management Centre
Office opening hours
Our offices are now open again, although a maximum of two visitors is allowed at any time, and it is best to contact us by phone or email. Viewings are being conducted at vacant properties following social distancing guidelines with both agent and a maximum of two viewers wearing protective face masks. |
**To the Editor,**
We were very interested in the study by Moreira et al.^([@r1])^ as it reflects common and routine respiratory physiotherapy practices in intensive care units in Brazil and other countries. We appreciate the author\\'s effort in examining the evidence for this type of therapy. In this study, an improvement was observed in the ventilatory mechanics parameters after the application of a respiratory physiotherapy protocol in patients dependent on mechanical ventilation. The authors report a significant increase in dynamic pulmonary compliance, tidal volume, and oxygen saturation and a reduction in respiratory system resistance after application of the protocol. This protocol consisted of chest compression and vibration maneuvers, 0.9% saline instillation, and hyperinflation with a manual resuscitator, followed by endotracheal aspiration. However, we note the absence of a control group to help determine whether these gains were due to the use of the protocol and whether these gains could not be achieved with the endotracheal suction procedure alone.
According to the AARC Clinical Practice Guidelines - Endotracheal Suctioning of Mechanically Ventilated Patients with Artificial Airways,^([@r2])^ the decrease in peak pressure and airway resistance and the increase in dynamic compliance and tidal volume are expected and desired outcomes for endotracheal suctioning procedures and, therefore, a confounding factor for the effectiveness of the proposed therapy.
The authors also did not report any peak airway pressure controls during hyperinflation with the manual resuscitator, which may compromise the safety of the procedure. Peak pressures above 40cmH~2~O may be associated with alveolar overdistention and the risk of barotrauma, suggesting, for safety reasons, the use of manometers coupled to a manual resuscitator during these maneuvers.^([@r3]-[@r5])^
Ângelo Roncalli Miranda Rocha
General Intensive Care Unit, Hospital Geral do Estado Professor Osvaldo Brandão Vilela - Maceió (AL), Brazil; Hospital Escola Hélvio Auto - Maceió (AL), Brazil; Centro de Estudos Superiores de Maceió - Maceió (AL), Brazil.
Caio Henrique Veloso da Costa
General Intensive Care Unit, Hospital Geral do Estado Professor Osvaldo Brandão Vilela - Maceió (AL), Brazil.
**Conflicts of interest:** None.
AUTHORS\\' RESPONSE
RESPOSTA DOS AUTORES
Teixeira
Cassiano
Savi
Augusto
Department of Clinical Medicine, Universidade Federal de Ciências da Saúde de Porto Alegre - Porto Alegre (RS), Brazil; Intensive Care Unit, Hospital Moinhos de Vento - Porto Alegre (RS), Brazil.
Intensive Care Unit, Hospital Moinhos de Vento - Porto Alegre (RS), Brazil.
We would like to thank you for your comments. Several respiratory physiotherapy techniques are used in critically ill patients.^([@r6])^ However, the evidence for the use of multimodal respiratory physiotherapy is conflicting.^([@r7])^ Even the Clinical Practice Guideline of an important respiratory care society is based on the experience of specialists and not on the results of randomized clinical trials because of lack of evidence.^([@r8])^
Regarding the first question raised, our study protocol followed a sequence of techniques. After mobilizing the secretions from the distal airways using compression and vibration maneuvers and manual hyperinflation, the method to remove the secretions focuses on tracheal aspiration. This, in turn, only succeeds in removing the secretions located up to the third bronchial generation. Thus, it should be preceded by the techniques used in this protocol. It would be inconsistent to not aspirate secretions removed from the distal airways to the proximity of the endotracheal tube because this could lead to patient-ventilator asynchrony, respiratory distress, endotracheal tube obstruction, and increased respiratory effort.
As noted in the letter, the simple act of performing tracheal aspiration can improve respiratory mechanics. However, this procedure should not be performed systematically because it can lead to complications including a decrease in dynamic pulmonary compliance and functional residual capacity, atelectasis, hypoxemia, a bronchoconstrictor response, and arrhythmias.^([@r9])^ The practical guide itself reports that this procedure, which is frequently used in patients on mechanical ventilation, can cause such complications.^([@r8])^ Therefore, we believe that the improvements in mechanical ventilation occurred because of the sequence of the procedures, which culminated in tracheal aspiration.
In response to the second question, we did not use a manometer for monitoring during the hyperinflation maneuver with a manual resuscitator because our equipment has a safety valve that prevents the airway pressure from exceeding 40cmH~2~O; therefore, there is no need to monitor the pressure using a manometer.
Cassiano Teixeira
Department of Clinical Medicine, Universidade Federal de Ciências da Saúde de Porto Alegre - Porto Alegre (RS), Brazil; Intensive Care Unit, Hospital Moinhos de Vento - Porto Alegre (RS), Brazil.
Augusto Savi
Intensive Care Unit, Hospital Moinhos de Vento - Porto Alegre (RS), Brazil.
|
Q:
Find range of filled contents in Excel worksheet
I have an Excel 2016 Book.xlsm. In the worksheet testsheet, the cells in the range A1:Y150 are filled with text or number contents. The upper-left cell is always A1.
I am using python v3 xlwings to open the Excel file.
import xlwings as xw
Book_name = 'C:/Users/name/Book.xlsm'
sheet_name = 'testsheet'
wb = xw.Book(Book_name)
sht = wb.sheets[sheet_name]
How do I find out the range of cells that are filled with contents using python, which in this case is A1:Y150?
A:
If wb is defined as an Excel Workbook, then this is a good way:
print(wb.sheets[sheet_name].api.UsedRange.Address)
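A pure-xlwings alternative (a sketch, not from the original answer) is to expand from the top-left cell; this works here because the filled block is contiguous and anchored at A1. Range.expand() and Range.address are part of the xlwings API; note that .address returns an absolute-style address such as '$A$1:$Y$150':
import xlwings as xw
wb = xw.Book('C:/Users/name/Book.xlsm')
sht = wb.sheets['testsheet']
# expand right and down from A1 over the contiguous block of filled cells
filled = sht.range('A1').expand()
print(filled.address)   # e.g. '$A$1:$Y$150'
print(filled.shape)     # (rows, columns), e.g. (150, 25)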
|
Mandatory single embryo transfer policy dramatically decreases multiple pregnancy rates.
The aim of this study is to determine the effect of a mandatory single embryo transfer (SET) policy on pregnancy rates and multiple pregnancy rates in intracytoplasmic sperm injection (ICSI) cycles. One hundred and twenty-eight patients (154 cycles) underwent ICSI before the Turkish Ministry of Health's legislation, 55 patients (69 cycles) underwent ICSI after the legislation (day-3 embryo transfers only) and 35 patients (39 cycles) underwent ICSI after the legislation (day-5 embryo transfers only). Groups were comparable regarding the women's mean age and body mass index. The number of embryos transferred (2.7 ± 0.5 vs 1.0 ± 0.0 vs 1.0 ± 0.0) was significantly higher in group I, compared to group II and group III. Clinical pregnancy per embryo transfer (40.8% vs 15.1% vs 36.1%) and live birth rate (37.7% vs 13.2% vs 30.6%) were significantly lower in group II when compared to group I and group III. Multiple pregnancy rates (39.6% vs 0.0%, vs 0.0%) were significantly higher in group I when compared to group II and III. Implantation rates were significantly higher in group III when compared to group I and group II. Miscarriage rates were comparable among the groups. The mandatory SET policy caused a dramatic decrease of multiple pregnancy rates. Mandatory SET with only day-3 embryo transfers decreased the pregnancy rates but this detrimental effect was not seen in cycles with day-5 embryo transfers. |
Cryptocurrency trading volume by country
Regarding exchange trading volume, more analysis on cryptocurrency trading volumes. How To Make Money Trading Cryptocurrency. With just a few dollars worth of Bitcoin you can start trading.
International cryptocurrency exchange BitBay to open India
Bleutrade Cryptocurrency Exchange | HTML5/BTC Trade Market
Kraken is the first Bitcoin exchange to have the trading volume and. email, country, phone number.A large number of both casual and institutional investors have invested in Bitcoin as a safe haven asset to avoid potential economic uncertainty and financial instability.Altcoin Charting Tools For Technical Traders. wealthier traders are flocking to the cryptocurrency.
Cryptotrader
Crypto Investing #2 – How To Use Volume & Liquidity When
Margin trading involves interest. volume and system availability may impact.BLKG Stock Message Board: Recent cryptocurrency exchange trading volume.Get free bitcoin trading signals and learn how to trade cruptocurrency on social trading platform.Previous Article A Story About Why You Should Never Lose Your Frame Next Article ROK Now Has A Little Brother: Reaxxion.com.
Global Bitcoin Trade Volume Surges on 15% Price Decline
LocalBitcoins Weekly Trading Volume Reaches new Heights in a few Countries. Not necessarily the regions one would associate with cryptocurrency trading, though.More importantly, ICOs enable anyone within the community to participate in the investment, providing opportunities for small-scale investors.
Bitcoin World on Twitter: "$11 Billion: 24-Hour #
Although bitcoin has been in existence for five years, most countries still do not have consistent laws regulating the cryptocurrency. The reality is that if trading were an easy, risk-free way to make money, everyone would be a trader. Cryptocurrency markets are very strange, especially compared to traditional markets.
Does the United States government consider. viewed and regulated in their respective countries.
Coins - CryptoCompare.com - Live cryptocurrency prices
Some have invested in Ethereum because of its successful partnership strategy demonstrated by the Enterprise Ethereum Alliance.What are the most popular Chinese cryptocurrency exchanges by trading.Trade volume rankings for all cryptocurrency exchanges in the last 24 hours.
Startups that raise or complete successful ICOs often have their tokens listed on cryptocurrency market data providers such as CoinMarketcap and on exchanges.
Bitcoin Brokers: The Top 7 Trading Options Compared
Keep reading to learn everything you need to know about how Bitcoins work, how to pick an exchange, and how the blockchain technology behind Bitcoin really works.
Being a decentralized ledger, the Blockchain can never be controlled or manipulated by a single institution.Bithumb, hacked for. 13,000 bitcoins worth of trading volume.There are no broker fees, there are no middlemen to deal with, nor really any barriers to entry or red tape.Cryptocurrency news, information, and discussions about cryptocurrencies.P2P Blockchain Ethereum tokens trading at DECENTREX with no registration.
Many people feared to venture in cryptocurrency trading for.A Look At The Most Popular Bitcoin Exchanges. to be one of the largest exchanges going by the trading volume. country (headquarters), trading.Trading fees are 0.25...
Bitcoin Trading Volume Setting New Records in Several
Although analysts have raised concerns over the legality of ICOs and potential response from the US Securities and Exchange Commission, ICO, in theory, is a phenomenal method of raising funds for startups without intermediaries.Bitcoin over its almost eight years of existence has made steady upward mobility in most key metrics, whether it is transactions per day, value per Bitcoin.Check in on twitter and crypto forums daily, follow hash tags, see what people are talking about.The most basic but important thing to remember: Buy low, Sell high.What Are the Business Benefits of. and country-to-country transaction fees can bog down the process—and.
Once you have Bitcoin in your exchange account, you can start trading.Trade volume rankings for all cryptocurrencies in the last 24 hours.The Gnosis token was listed on major US-based Bitcoin exchange Kraken and within a month, it became the seventeenth largest crypto asset in the market.Follow the latest stock market trends and learn stock market statistics on.
The ethereum (ETH) price and bitcoin (BTC) prices are
China has witnessed an unprecedented rise in unregulated cryptocurrency trading. Major cryptocurrency. Shows the annual report of UK research and analysis company Investment Trends on the country. The U.S. Commodity Futures Trading.
The Ultimate Guide To Philippines Study Abroad (fella Language School) Fundamentals Explained
Associated with direct income pursuits (generating and conducting Kindy demo classes and functions). Places along the route, which the Mines and Geosciences Bureau attributed to the fragility of the rock foundation, the abandoned mining operations near the road, and the natural ground fractures that were undetectable in the 190…
Ximen Yeshi isn't technically a real night market, but it is a very lively shopping area come sundown. It is one of the most fashionable markets in Taiwan, and there are frequently plenty of acrobats and other street performers. It is located at Ximen Station on the MRT green line. Finally, if the character for "…
Has fully operational command of the language: appropriate, accurate and fluent with complete understanding.
Australia's immigration authorities have used IELTS to assess the English proficiency of prospective migrants since May 1998, when this test replaced the access: test that had previously been used.[89]
Many Commonwealth nations use IELTS scores as evidence of prospective immigrants' competence in English.[87] Conveys and understands only basic meaning in very familiar situations. Frequent breakdowns in communication occur. Prepare: success starts with IELTS. Register online when you book your IELTS test to…
One employee said to stay at carousel #1 while another staff member said to continue to the other side. Had to run from one end to the next just to check on our baggage. Took PAL from Davao to Cebu, no problem. 25 Jan 2017. The promo page by clicking on any of the promo announcements displayed on the homepage or t…
Fascinating 119-year-old colour photos of Bedouin
Newly colourised photographs have emerged revealing the everyday lives of the native Bedouin people of the Arabian Peninsula around the turn of the twentieth century.
The vivid images, taken in 1898, give a fascinating insight into the culture of the ancient tribal group, many of whom fought alongside Lawrence of Arabia during the First World War.
Among the images are a Jericho ‘quack’ doctor, known as such because she lacked legitimacy to practice medicine, a warrior on horseback carrying a traditional Az-Zayah hunting spear, and a woman in colourful traditional dress in southern Palestine.
The Bedouin formed the core of the army that Lawrence of Arabia assembled during the Arab Revolt of 1916-18 against the Ottoman Empire.
Lawrence of Arabia pictured at the turn of the 20th century
The tribesmen gave Arab forces under the command of local princes vital help during sieges of the imperial garrisons in Mecca and Ta’if.
Lawrence, who famously dressed as a Bedouin, admired the tribesmen for their bravery and fighting prowess.
Bedouin means ‘inhabitant of the desert’ and refers to the nomadic tribes who have historically lived in the arid regions of North Africa, the Arabian Peninsula, Iraq, and the Levant.
Many Bedouins started to abandon their nomadic lifestyle in the 1950s and 1960s to live and work in the cities of the Middle East, particularly after the discovery of huge oil reserves beneath their feet.
However, modern-day Bedouins keep alive their ancient culture through maintaining the old clan structure and partaking in traditional music, poetry and dances.
Activities like camel riding and camping in the deserts are still popular leisure activities for urbanised Bedouins who live within close proximity to wilderness areas.
The colourised photographs are the work of bank technician Frederic Duriez, from Angres near Calais.
‘They show the Bedouins during the change,’ he said. ‘When modern governments took control of the original desert, many Bedouins were forced to abandon their traditional lifestyle and adopt an urban way of life.
‘The images date from 1898, and yet you would even believe in black and white that they are current. The Bedouin dress and customs have not changed much today.’
Mr Duriez explained why he chose to colourise these images and his favourite of the set.
‘I wanted to show another aspect of the Arab world than the one that is shown today,’ he said.
‘My favourite image is the Bedouin couple in front of tent, Adwan tribe, because this was the longest to colourise and the most complex too.
‘It can range from two hours to colourise a simple portrait to seven or eight hours for an image with more details and character. In addition there is also historical research to undertake.’
Pictured are two Bedouins in photographs all taken in 1898.
Pictured are three Bedouin men in traditional dress in the American Colony of Jerusalem.
Thomas Edward Lawrence, 1888–1935. British archaeologist, military officer, and diplomat. From Heroes of Modern Adventure, published 1927. He is best known for helping to unite the various Bedouin tribes during the Arab Revolt against the Ottomans.
Recurrent protein-losing enteropathy and tricuspid valve insufficiency in a transplanted heart: a causal relationship?
This case report describes a toddler who developed a protein-losing enteropathy (PLE) 4 years after orthotopic heart transplantation (OHT). He was born with a hypoplastic left heart syndrome for which he underwent a successful Norwood procedure, a Hemi-Fontan palliation, and a Fontan palliation at 18 months of age. Fifteen months following the Fontan operation, he developed a PLE and Fontan failure requiring OHT. Four years after OHT, he developed a severe tricuspid regurgitation and a PLE. His PLE improved after tricuspid valve replacement. It is now 2 years since his tricuspid valve replacement and he remains clinically free of ascites and peripheral edema with a normal serum albumin level. His prosthetic tricuspid valve is functioning normally. |
o be x(9). Calculate the highest common factor of o and 5.
5
Let f(i) = i**3 + 2*i**2. Suppose 4 = -a - 2*a + 5*t, 4*t = 4*a. Let w be f(a). What is the greatest common divisor of w and 40?
8
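Each of these exercises boils down to a short substitution step followed by a greatest-common-divisor computation. As a rough illustration (a minimal Python sketch added for clarity, not part of the original problem set), the problem above can be checked mechanically:

```python
from math import gcd

# f(i) = i**3 + 2*i**2, as defined in the problem statement.
f = lambda i: i**3 + 2*i**2

# Solve 4 = -a - 2*a + 5*t and 4*t = 4*a: the second equation gives t = a,
# so 4 = -3*a + 5*a = 2*a and therefore a = 2.
a = 2
t = a
assert 4 == -a - 2*a + 5*t and 4*t == 4*a

w = f(a)            # w = 16
print(gcd(w, 40))   # prints 8, matching the stated answer
```

The same pattern, solving the small linear system, evaluating the expression, then taking the gcd, applies to the remaining entries.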
Let t be (-3)/(2 - 219/111). Let p = t - -161. Suppose -5*w + p = 2*q, 0 = -2*w + 5*w - 4*q - 56. Calculate the greatest common divisor of 30 and w.
6
Let q(f) = f**2 - 4*f + 6. Let s be q(4). Let n = -14 + 10. Let i(p) = p**2 - 3*p - 4. Let m be i(n). Calculate the greatest common factor of m and s.
6
Let v(w) = -w**2 + 6*w + 14. Let a be v(7). What is the greatest common divisor of 63 and a?
7
Suppose -s + 308 = s. Calculate the greatest common divisor of s and 14.
14
Suppose 0 = 4*x - 23 - 185. Suppose -z + 5*z - 142 = -5*d, 4*z = -2*d + x. Suppose g = -3*g + 24. What is the highest common factor of d and g?
6
Suppose 4*d - 4 = -5*z, 2 = 4*d - 2. Suppose z = 4*h - 5*f - 47, 2*h + 4*f - 7 = -3. Calculate the highest common factor of 16 and h.
8
Let t be (-3 + -2*(-19)/4)*8. What is the greatest common divisor of t and 208?
52
Let d = 1382 - 660. Let g = d + -480. Calculate the greatest common factor of g and 22.
22
Let y be ((-3*8)/(-4))/1. What is the greatest common divisor of y and 66?
6
Let i(k) = k - 5*k - 9 + 1 + k. Let n be i(-4). What is the highest common divisor of 32 and n?
4
Let y be (132/(-2))/2*38/(-57). What is the greatest common divisor of y and 88?
22
Let f be -3 + 2/(-1) + 1. Let y = 2 + f. Let r be 24/y*4/(-6). Calculate the highest common factor of r and 80.
8
Suppose 4*b - 19 = -3*z + 8*b, -2*z + 5*b + 1 = 0. Let u = z + 43. Let h(p) = 2*p. Let c be h(4). What is the greatest common divisor of u and c?
8
Suppose -u + 2 = m, 0 = -6*u + 3*u - 5*m + 6. What is the highest common factor of u and 14?
2
Let c = 68 - 47. Let g be 105/33 + (-2)/11. Calculate the greatest common factor of g and c.
3
Let w = 168 - 83. Calculate the greatest common divisor of 17 and w.
17
Let c(j) = 11*j + 1. Let h be c(1). Calculate the highest common factor of h and 96.
12
Let b be (22/6)/((-1)/(-15)). Let a(i) = i**3 + 12*i**2 - 2*i + 3. Let m be a(-9). Let l be (1/2)/(6/m). Calculate the greatest common factor of b and l.
11
Suppose -2*x - 3*x = -25. What is the greatest common divisor of 2 and x?
1
Let d(o) = -o**3 - 4*o**2 + 5*o + 3. Let n = 8 + -4. Let y be d(n). Let l = y + 155. Calculate the highest common divisor of 25 and l.
25
Suppose 0 = -4*f + 2 + 6. Let z = -5 - -5. Suppose -4*p + 5 - 1 = z. Calculate the highest common factor of p and f.
1
Suppose -5*f - 4*r + 1920 = 0, 3*f - 1536 = -f + 5*r. Suppose -7*j + 4*j = -o + 68, -5*o + f = -4*j. What is the greatest common factor of o and 10?
10
Let b(p) = p**2 + 13*p - 34. Let y be b(-16). Let n(g) = -22*g + 7. Let a be n(6). Let x = a - -181. Calculate the highest common factor of y and x.
14
Suppose -23 - 48 = -5*u - 3*q, 2*u = 4*q + 44. Let k be 3/2*u + 2. Calculate the highest common divisor of k and 208.
26
Suppose 29*a + 225 = 1791. Suppose -5*h - 1 = -4*b - 0, 0 = -2*b - 4*h + 20. Suppose -b*q + 3*q + 6 = 0. What is the greatest common factor of a and q?
6
Let n be 4/(-6) + 200/3. Calculate the highest common factor of n and 132.
66
Let w be (-2)/(-1*2 - -4). Let o(d) = d + 1. Let m(i) = 11*i + 8. Let j(y) = w*m(y) + 6*o(y). Let q be j(-4). What is the highest common factor of q and 2?
2
Let o be 3147/15 + 1/5. Let t = 44 - 23. What is the greatest common divisor of t and o?
21
Let o be 2/(-5) - (-3240)/100. What is the greatest common factor of 8 and o?
8
Suppose 3*z = 3*l - 9, 3*l = -3*z + 4 - 1. Let v be 1*1*88/l. Calculate the highest common divisor of 11 and v.
11
Let d(s) = s**3 + 20*s**2 - 23*s - 6. Let y be d(-21). What is the greatest common factor of 12 and y?
12
Suppose 2*d - 6*d = 8. Let l be -4*d/(24/3). Calculate the highest common factor of l and 1.
1
Let s be -3 + 1 - 10/(-1). Let k = s - -22. Calculate the greatest common divisor of 20 and k.
10
Let x = 36 - 26. Suppose -k + 3 = 0, 3 = 2*r + 3*k - x. Calculate the greatest common factor of 22 and r.
2
Let n(l) = -l**3 + 8*l**2 + 2*l - 11. Let c be n(8). Suppose r + 3*r = -c*h + 69, 2*r + h = 27. Calculate the greatest common divisor of 88 and r.
11
Suppose -2*v = 2*c - 1 - 3, 0 = 5*c - 5*v - 30. Suppose -2*u = -3*u + 5. Let y(j) = j - 3. Let f be y(u). Calculate the highest common divisor of f and c.
2
Let s(x) = -x + 9. Let l be s(-7). What is the highest common divisor of 128 and l?
16
Let n(t) = t + 5. Let u be n(-5). Suppose -6 = -r - 3*r - 2*m, -5*r - 3*m + 5 = u. Let c be (0 + -18)*(-1 - 1). What is the greatest common factor of r and c?
4
Let q be (-9)/(-4) - (-11)/(-44). Suppose 4*o - 38 = -u, -4*u - q*o = -3*o - 118. Let n be ((-13)/7 + 1)*-14. What is the greatest common factor of n and u?
6
Let u = -123 + 244. Let b = -61 + u. What is the greatest common factor of 15 and b?
15
Let y = 7 - 7. Suppose -5*x - 49 - 56 = y. Let v = 30 + x. What is the highest common divisor of 27 and v?
9
Let l be 16 - ((2 - 2) + 0). Let j(k) = 17*k - 4. Let g be j(4). Calculate the greatest common factor of l and g.
16
Let u(y) = -y + 9. Let z be u(7). Suppose -z*q - q + 84 = 0. Calculate the greatest common divisor of q and 7.
7
Let b(f) = -f**2 + 8*f + 8. Let p be b(7). Calculate the highest common factor of 165 and p.
15
Suppose -1457 = -5*w + 3*j, -4*j = 2*w - 8 - 580. Suppose -3*c + c + 8 = 0. Let l be c/6 - w/(-3). What is the highest common divisor of l and 14?
14
Suppose h - 10 = 3*i - 2, -5*i = -5*h + 20. Let n = -7 + 21. What is the highest common divisor of n and h?
2
Suppose 5*t - 45 = 2*t. Suppose 396 = 5*n - 2*n - 3*z, -5*n = z - 678. What is the greatest common factor of n and t?
15
Suppose 0 = 4*u + 5*l - 16 - 13, 0 = 3*u + 2*l - 20. Suppose -2*c = -c - u. Calculate the highest common factor of 4 and c.
2
Let n = -22 - -50. What is the greatest common factor of n and 14?
14
Suppose 26 + 74 = 4*m. Suppose 0*q - 512 = -5*x + 3*q, x + q = 96. What is the highest common divisor of x and m?
25
Let c = 16 - 12. Suppose 3*f - 15 = -3*l + 45, 16 = c*l. What is the highest common divisor of f and 40?
8
Suppose 2*m + i - 10 = 0, 4*m + i = 29 - 7. Let w be (-508)/(-14) - 4/14. Let v = 90 - w. What is the greatest common divisor of v and m?
6
Let a = 59 + -42. What is the highest common factor of a and 187?
17
Let w be (-3)/2*(-2)/3. Let q be (0 - -2) + w + 3. What is the highest common factor of q and 24?
6
Let w = 3 + 9. Suppose h + 0*h + t = 19, -5*h + 87 = 3*t. Suppose 4*k - 63 = -h. What is the highest common divisor of k and w?
12
Let p(h) = -3*h - 15. Let b be p(-11). Calculate the greatest common factor of 12 and b.
6
Let j = 34 + -23. What is the greatest common divisor of j and 1?
1
Suppose -4*b + 1844 = -0*b. Suppose 3*x + 1315 = -2*x. Let r = x + b. What is the highest common divisor of r and 18?
18
Let b be (-18)/15*(-80)/6. Suppose -3*a + 0*y + 8 = -4*y, -3*a + b = -2*y. Calculate the greatest common divisor of a and 12.
4
Let n = -23 + 53. Calculate the highest common factor of 270 and n.
30
Let u be (-5)/(-2)*(1 - -5). Let d = u - 9. Calculate the highest common factor of d and 48.
6
Let u be ((-4)/(-5))/((-8)/(-320)). What is the highest common divisor of 8 and u?
8
Suppose 0 = 5*c + 28 - 133. What is the greatest common factor of c and 21?
21
Let o be 32*(-1 + 2) - 0. Let r be (3 - 10/4)*16. What is the greatest common factor of o and r?
8
Let o be 224/(((-6)/4)/(-3)). Calculate the highest common factor of o and 56.
56
Suppose 8*w + 2*w = 1080. What is the greatest common divisor of 72 and w?
36
Suppose 5*g - 10 = -0*g. Suppose -5*j + 2*j + 174 = g*k, 0 = j + k - 57. What is the highest common divisor of j and 15?
15
Suppose -9*f = -4*f + 90. Let l be (-144)/f - (1 - -1). Let r be (0 + (-1 - -2))*15. What is the greatest common factor of l and r?
3
Suppose -3*z - 4*q + 35 = -0*z, -q = -5*z + 20. Suppose 2*g + y - 2 = 4, z*y = 5*g. What is the highest common divisor of 16 and g?
2
Suppose -2*a - 2*a + 204 = 0. What is the greatest common factor of 17 and a?
17
Let d(w) = -17*w - 38. Let o be d(-10). Calculate the highest common divisor of 12 and o.
12
Let t = -13 + 26. Calculate the greatest common divisor of 117 and t.
13
Suppose 2*t - 1 = -3*p, 0*p - 4*t = 3*p + 7. Calculate the greatest common factor of 3 |
Horoscope 2018 for Gemini
Sun in Gemini, May 21 - June 20
General: Overall, the beginning of the year promises some struggles. You have some cleaning up to do in your personal life before you can work on other things. You'd rather not focus on this stuff; you'd rather see yourself as above all the drama that being human brings. The truth is that you're very much in it, and that you will have to deal with the stickiness of relationships and interpersonal problems in the early part of the year before you can consider yourself an enlightened being.
Mid-spring to the end of summer brings you to the end of a journey where you no longer feel like a victim, and therefore no longer need to hide from everyone else. Sure, you still want to exist on another plane, but you don't feel as vulnerable when out in the world. You can be part of it again. You can trust yourself to make decent financial decisions. You may have a hard time controlling your appetite, but most other things should be in control.
Autumn and winter see you focusing more on what you personally need to feel secure. You have to take steps to learn how to take care of yourself better. You find a voice and courage to let people know where and how you're hurting and what you need in order to get better. This may be because over the fall, you end up in a bad way somehow, probably because you refused to look at your issues head on. However, once that passes, you have a renewed motivation to heal yourself.
Love: If you're in a committed relationship, it may be tested in the early part of this year. Hardships and power struggles may be a key feature and while it isn't certain to break up the relationship, it may come out with a few scars. This may be the time where you realize how much work needs to be done on this relationship, and decide whether or not you want to keep going and work at it or call it a night and part ways.
The spring eases up, but you may take a break from love to recover. Of course, you don't realize you're flirting or attracting attention, but others are noticing you. This may be a little overwhelming. Of course, the opposite may be true, and you could very well end up chasing ill-conceived romances and trying to date people who are all wrong for you. Whatever you do, don't give any of these people money. Don't loan it to them, don't let them rely on you financially.
Over fall and winter, any close relationships you have may be a labor of love. You're prone to letting others take advantage of you. Yes, you do want to show people that you love them, but you need not be their doormat to do so. As fall turns into winter, you'll start getting back what you've given. It's possible that winter brings you a new love, romance, marriage, or some other type of partnership that will continue to blossom in the new year.
Money: The first part of the year has you focused on your joint resources. You may be stuck in the past, thinking and rethinking about the money you had and lost, especially if you feel cheated by an old lover or business partner. Check your credit score and make sure you understand everything before you sign any contracts. The money you will make is going to come from your own hard work, and there is no getting around it in the early part of the year.
Through spring and summer, you're vulnerable to being tricked out of your money, so hold tight. Don't let anyone guilt you out of it. It's tempting to want to save someone else by buying them out of their troubles, but this will only make things worse. It's best to hold onto what you have and wait. This isn't the time to get financially involved with anyone else, even if it seems like a good idea, even if they lay the guilt on extra thickly.
Money may be an issue in the fall and winter, in that you might not have enough because someone else needs it. Illness, accidents, or the like may eat up a chunk of your savings or funds. Your family members may be affected, too. However, necessity is the mother of invention, and you figure out a way to make more money that just might turn into a lucrative side gig during the winter and during 2019. This gig may be completely unrelated to what you do for a living now, but something you've always wanted to do or put out in the world.
Career: The first part of the year still has you wondering about where you stand in the world and what you can become. There may be a disappointment early on, like getting passed over for promotion, or getting the promotion but being disappointed with it. You can work very hard and achieve a lot of things, but you may be wasting it on the wrong people, hoping to get something that just isn't there. By April, you'll see how much your work is worth to them.
And from April to August, you start to feel less confused and less disappointed with your lot in life. If you haven't already, you may figure out a way to turn your liabilities into assets. You can now teach others how to overcome their own insecurities, for example. Simply put, your experience puts you in a position to help others, and if you're sincere, and if your experiences are not your own doing, there could be a way to monetize your knowledge.
From August through December, this urge to teach gets stronger. Coaching, writing, or simply mentoring others is going to be very attractive to you, and possibly lucrative if you allow yourself to make a business of it. It's possible that you become more drawn to a career that fulfills your soul, and if you start it as a side gig, you may be able to establish it as your full time job in 2019. |
Brainstorm on the inputs, not on the outputs, which means that you should determine the elements that directly affect the area you want to improve. Those paid and organic incentives are SEO and SEM. Link building is still an important part of your company’s SEO strategy. Instead of trying …
One day it’s perfectly OK to work in a certain way, the next day a search engine algorithm change destroys your rankings. Luckily it’s usually possible to recover your visibility, or at least some of it, whether it’s a matter of asking for a reconsideration or changing …
Great content is also important from an SEO perspective, which is why we provide content writing services for all of our full service B2B SEO clients. To Google, site speed is a critical ranking factor. Google believes that faster loading pages yield better user experience (UX) which is a prerequisite …
Another term closely linked with SEM is PPC or pay-per-click (not to be confused with paper clips!). The shift to mobile devices has caused Google to change the methodology behind how it indexes and ranks websites. Write locally oriented content: It’s one of the best things you can do …
Optimize the content of your web pages to make sure that it aligns with searcher intent. Your website should contain many pages tackle the topic of your website from different angles. Links are supposed to be used for “citation” purposes, like footnotes in your high school essays. But it will …
When people hear about how great you are, they head to Google to discover more. Blog commenting, forum posting, article submissions, Web 2.0 creations, directory submissions, social bookmarking and guest posting are some of the major ways to create quality backlinks. A popular practice, and thought to be best …
You can also include “no follow” tags on the internal links to these faceted urls to reduce the chances of getting them crawled. Basically, you are creating a piece of content that goes above and beyond to answer all the questions your reader will have on a single topic. Great …
Some webmasters attempt to improve their pages’ ranking and attract visitors by creating pages with many words but little or no authentic content. There is no one-size-fits-all solution for a search engine optimization strategy. Every company, whether large or small, needs to first set some specific business goals and objectives …
The Web search engines make use of a Crawler or Spider that crawls the website content. Simply making sure that your graphic design team and photographers save all their images as ‘save for the web’ in Photoshop and Illustrator, and making sure that the physical size of the image is …
Although most keyword tools list numbers, you shouldn’t think of them as exact counts. Rather, you should think of them as a way to compare terms on a relative scale. Is your web content long enough? Does it have enough substance—does it add enough value? If it is …
What you don’t want to see is content that is too short, doesn’t deliver the information you’re looking for, and contains too many keywords. SEO ranking is one of the key to online marketing success. There are two different types of backlinks namely, external and internal backlinks …
Infographics are candies for our brains. They’re engaging, catch our attention and make information consuming faster. In fact, infographics are liked and shared 3x more on social media than any other type of content. Google is getting better every year at detecting spammer and low-quality links. Being proactive about …
Keep your site clean by continually monitoring Google Search Console (formerly known as Google Webmaster Tools). You shouldn't feel comfortable using robots.txt to block sensitive or confidential material. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet …
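As a rough illustration of that caveat, the sketch below uses Python's standard-library robots.txt parser to show which paths a Disallow rule covers; the rules and URLs are hypothetical, and blocking a URL this way only restricts crawling, it does not stop a search engine from referencing the URL itself.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for an example site: allow one page inside an
# otherwise-disallowed directory. The more specific Allow rule is listed first
# because this parser applies the first matching rule.
rules = """
User-agent: *
Allow: /private/public-summary.html
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Crawling of /private/report.html is disallowed, but the URL can still be
# referenced (and even indexed) if other sites link to it.
print(parser.can_fetch("*", "https://example.com/private/report.html"))          # False
print(parser.can_fetch("*", "https://example.com/private/public-summary.html"))  # True
```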
A page that doesn’t meet the needs and expectations of users will never achieve strong user signals. Because it’s not answering the right questions or providing the right information, users will quickly abandon your page in search of a more useful one. DA/PA and CT/TF are …
It is not possible for a crawler to see daily if any new page appeared or any existing page is updated, some crawlers may not visit a webpage for a month or two. A 2016 WordStream survey found that 72 percent of people who searched online for something local visited …
The Google Trends tool will show you how much interest there is in a particular keyword as well as what caused the interest. Google does everything in its power to reduce the effectiveness of black hat SEO and discourage disreputable agencies from using this type of strategy. However, while the …
Google’s SEO Starter Guide states that, “If your URL contains relevant words, this provides users and search engines with more information about the page than an ID or oddly named parameter would.” In other words, including keywords – or at least clear and direct information – in your URL is a …
Google is looking to reward sites that offer a great experience on a phone. They even now have separate search results for phone users. How do we know if we are mobile friendly? There’s a handy tool to check. Test your site, and work with your designer to make …
Whenever your URL changes, think of what you may have going on that’s attached to those URLs. It’s important to have breadcrumbs on your website. They show users how a page fits into the structure of a site, and allow search engines to determine the site’s structure …
In establishing domain names, you may come across articles, which may suggest not to make use of hyphens on them. You should become aware though that using hyphens on your domain name is not a bad practice. However, you should know that most people these days, are not used to …
Don’t try to act smart. There are some plugins which allows you to add no follow in external links. If you have applied no follow on the link of your blog, it will be given the priority. You cannot use do follow and no follow simultaneously. This won’t …
According to Industry thought leaders, that there are numerous ways in which poor internal linking can degrade the ability of search engines to index your web pages. These include linking to pages that can’t be accessed without filling in a form and even linking to content using JavaScript or …
You’ve seen what links you and your competitors have. Now it is time to start building up your profile. As with any strategy, you need to set goals, and make sure it aligns with your overall marketing focus. Link building is definitely one of the most important aspects of …
Businesses that stick to the old SEO trick of “stuff a post with keywords and use metadata” will get left behind. SEO isn’t about tricking Google or working with spiders – it’s about creating content that wants to get read by your customers. Keep an eye on organic search …
By sharing content that you post on social media as well as sharing your website content people are showing that your website is a valuable resource that they are happy to tell their friends about. Another effect of this is that people are more likely to link to your content …
SEO and content marketing both bring important elements to the online marketing table, and you need them both to really grow your business. It’s often a really big and complex picture that needs to have everything working together to deliver the best results. Google has always encouraged webmasters to …
There are plenty of rabbit holes to fall into when it comes to Google algorithm updates. Don’t be scared by the acronym SEO, it just allows sites like Google to read your website and rate you in terms of how relevant you are to the users search terms. With …
Any link to another part of the same site is called an internal link. As well as links you'd expect to find (within a site menu bar, for example) you can also create internal links by linking to past posts within newer ones. Keywords must be used at least three …
Links within long-form, evergreen content are also more valuable than links in short, news-based posts. If you website is new, however, or you have a low domain authority, just posting a blog to your site is going to have hardly any benefit from an SEO perspective. If Google and the …
Every search engine was built with a different system with unique preferences and features, making each of them beneficial in their own way. But they all have two major functions: they index content and turn over relevant results when a search is processed. As much as it's important to try …
Social media platforms have all but taken over, and when it comes to SEO, they are a huge part of the strategy for any campaign. Links aren’t the only factor of importance; you also need to make sure your on-page SEO is up to scratch and that your keyword …
When SEO isn’t done right it can be the cause of wasted time, money and a lot of frustration for many businesses. They might be ranking well for some keywords, but those keywords aren’t actually bringing in qualified traffic, leads or buyers. Research other business websites and encourage …
There is no single place to focus when trying to optimize a website for search engines, so it is important to make updates and upgrades to various parts of a website at regular intervals, not spending too much time or effort on any particular aspect of the website while other …
There’s no point in creating new content if it’s not authentic enough to stand out. Even if you come up with an idea from a different source, it’s still up to you to offer your unique perspective that will add value to the particular topic. Vertical search …
One of the best tools available to help you measure the success of your SEO campaign is Google Analytics. This fantastic and free tool can be used to monitor how much traffic you’re getting to your website and where this traffic is coming from. You can easily identify pages …
Search engines constantly tweak their algorithms to try to balance relevancy algorithms based on topical authority and overall authority across the entire web. Sites may be considered topical authorities or general authorities. To fully make the most of local SEO, you need to ensure that all of your content is …
The search engines will act rapidly to penalize sites that contain viruses or trojans. Keep your descriptions under 150 characters to avoid a trail off (…). This shows visitors that you have intentionally put up an informative and helpful description, opposed to the randomly generated one that Google pulls for non-optimised …
You can stop improving certain pages, but the site as a whole can always be improved. The SERP landscape is constantly changing, with Local listing and OneBoxes popping up everywhere. Ranking number one has obviously never been a walk in the park. But now we’re dealing with a much …
Follow all the good on-page content and off-page optimization considerations, like fresh, unique content. Don’t overdo the keywords, prevent spammy backlinking etc. Competitor analysis is nothing new, and companies have been researching their competitor’s links for years. However, by looking at the competitor’s backlinks and manually reviewing … |
Germany referred to ECJ for airport security oversight breaches
The European Commission has referred Germany to the European Court of Justice (ECJ) for failing to regularly monitor all aviation security measures at some German airports.
The Commission said that, on inspection, some security measures were not adequately monitored by the national authorities and that Germany did not comply with the minimum frequency and scope of controls required under EU legislation.
It said these controls were “necessary to quickly detect and correct potential ...
Introduction {#s1}
============
*Neisseria meningitidis* is the causative agent of meningococcal meningitis and septicaemia. Its only known host is the human, and it may be carried asymptomatically by approximately 10% of the population [@pone.0072003-Dietrich1]. There is currently no vaccine that is effective in all age groups and against all serogroups, despite extensive research efforts [@pone.0072003-Jodar1] [@pone.0072003-Vermont1]. Candidate antigens for inclusion in a vaccine must be expressed in the majority of strains, be antigenically conserved, and elicit a protective immune response. The sequencing of the genomes of a number of meningococcal strains has facilitated the identification of novel antigens by a bioinformatic approach [@pone.0072003-Pizza1]. Analysis of genes up-regulated after contact with epithelial cells [@pone.0072003-Grifantini1] or endothelial cells or serum [@pone.0072003-Dietrich1] using micro-arrays has identified more potential vaccine candidates. This is in addition to all the protein and carbohydrate antigens investigated before genome sequence data became available [@pone.0072003-Vermont1]. In previous work we reported the sequence of NhhA in *N. meningitidis* [@pone.0072003-Peak1], which is a homologue of the adhesins Hia and Hsf of *Haemophilus influenzae* [@pone.0072003-Barenkamp1], [@pone.0072003-StGeme1], [@pone.0072003-StGeme2]. NhhA has also been referred to as Hsf [@pone.0072003-Weynants1] and Msf [@pone.0072003-Griffiths1]. This protein prevents complement attack [@pone.0072003-Sjolinder1] by binding vitronectin [@pone.0072003-Griffiths1], and purified NhhA stimulates a proinflammatory response [@pone.0072003-Sjolinder2]. NhhA is a strong vaccine candidate because of the ubiquitous presence of the *nhhA* gene in meningococcal strains, its surface exposure and the high level of sequence conservation between strains (amino acid identity 85.3%--99.8%; [@pone.0072003-Peak1]). The majority of the sequence variation that does exist is limited to four distinct variable regions (V1--V4) located in the first 200 amino acids of the mature protein [@pone.0072003-Peak1]. Unlike many other outer membrane proteins of *N. meningitidis*, there are no obvious sequence features such as short tandem DNA repeats in or upstream of the gene that may mediate phase-variable expression of NhhA. Furthermore, this protein is recognized by antisera from patients [@pone.0072003-vanUlsen1] [@pone.0072003-Litt1], implying that it is expressed and immunogenic *in vivo*. We now report further investigations of NhhA as a vaccine candidate by characterizing the murine antibody response against a wild-type NhhA and against truncated forms of NhhA, and show that removal of the region of most variation does not prevent production of bactericidal antibodies.
Materials and Methods {#s2}
=====================
Bacterial culture {#s2a}
-----------------
*Escherichia coli* were cultured in LB media, and *N. meningitidis* were cultured on BHI agar overnight at 37°C. Kanamycin and ampicillin were added at 100 µg/mL. Tetracycline was added at a final concentration of 300 µg/mL and 15 µg/mL to select *E.coli* and *N. meningitidis* respectively. Strains used in this study are listed in [table 1](#pone-0072003-t001){ref-type="table"}.
10.1371/journal.pone.0072003.t001
###### Strains and plasmids used in this study.
{#pone-0072003-t001-1}
| Bacterial strains | Genotype/relevant characteristic | NhhA phenotype | Reference |
|---|---|---|---|
| PMC21 | *N. meningitidis*, serogroup C | Expresses wild-type NhhA~PMC21~ | This study |
| ¢3 | *N. meningitidis*, derived from MC58, acapsulate (Δ*siaD::erm*), Opa^−^ variant | Expresses wild-type levels NhhA~MC58~ | [@pone.0072003-Virji1] |
| 2A | *N. meningitidis*, ¢3 derivative, *nhhA::kan* | NhhA expression abolished | [@pone.0072003-Peak1] |
| ¢3*lgtA* | ¢3 derivative, *lgtA::kan* | Expresses wild-type levels of NhhA~MC58~ | [@pone.0072003-Jennings1] |
| 7G2 | *N. meningitidis*, ¢3*lgtA* derivative, LOS phenotype fixed L8 | Expresses wild-type levels of NhhA~MC58~ | This study |
| P6 | 7G2 derivative, *porA* replaced by *nhhA~PMC21~* | Over-expression of NhhA~PMC21~, wild-type levels of NhhA~MC58~ | This study |
| P6ΔOpcA | P6 derivative, *Δopc* | Over-expression of NhhA~PMC21~, wild-type levels of NhhA~MC58~ | This study |
| PΔ5 | 7G2 derivative, *porA* replaced by *nhhA~Bgl~* | Over-expression of NhhA~Bgl~ (*Bgl*II deletion), wild-type levels of NhhA~MC58~ | This study |
| PΔ5ΔOpcA | PΔ5 derivative, *Δopc* | Over-expression of NhhA~Bgl~ (*Bgl*II deletion), wild-type levels of NhhA~MC58~ | This study |
| PSO1 | 7G2 derivative, *porA* replaced by *nhhA~SO1~* | Over-expression of NhhA~SO1~ (splice-overlap deletion), wild-type levels of NhhA~MC58~ | This study |
| PSO1.17A | PSO1 derivative, *Δopc::tet* | Over-expression of NhhA~SO1~ (splice-overlap deletion), wild-type levels of NhhA~MC58~ | This study |
| **Plasmids** | | | |
| pC014K | *porA* gene with kanamycin resistance gene cloned downstream | | [@pone.0072003-vanderVoort1] |
| pIP52(PMC21) | *nhhA~PMC21~* cloned into pC014K | | This study |
| pIP52(PMC21Bgl) | pIP52(PMC21) derivative: *Bgl*II deletion of *nhhA~PMC21~* | | This study |
| pIPSO1 | Splice overlap deletion of *nhhA~PMC21~* cloned into pC014K | | This study |
| pBE501 | Plasmid contains *opc* gene | | [@pone.0072003-Olyhoek1] |
| pIP14 | *opc* deletion plasmid: *Hind*III/*Sty*I digest of pBE501 | | This study |
| pOpcTet | pBE501 derivative, *opc::tetM* | | This study |
| pT7lgtAG7 | | | This study |
| pMJ1b11 | Contains *lgtABE* locus | | [@pone.0072003-Jennings1] |
Over expression constructs {#s2b}
--------------------------
Plasmids used in this study are listed in [table 1](#pone-0072003-t001){ref-type="table"}. To overexpress a wild-type protein, the *nhhA*~PMC21~ gene (accession number AF157611) was amplified using primers HOMP5' and HOMP3'AN ([table 2](#pone-0072003-t002){ref-type="table"}). The amplimer was digested with *Eag*I and *Nco*I restriction endonucleases, and ligated into pCO14K, generating pIP52(PMC21) ([Fig. 1](#pone-0072003-g001){ref-type="fig"}). To overexpress a protein with most of the interstrain variable region deleted, products were amplified from chromosomal DNA of PMC21 using the primer pair HOMP5' and NH3'BG. The amplimer was digested with *Bgl*II and *Nco*I, and cloned into pIP52(PMC21) digested with *Bgl*II and *Nco*I. The resultant plasmid was named pIP52(PMC21Bgl) ([Fig. 1](#pone-0072003-g001){ref-type="fig"}).
![Figure 1.](pone.0072003.g001){#pone-0072003-g001}
10.1371/journal.pone.0072003.t002
###### Oligonucleotides used for generation of variant NhhA, and lgtA amplifications.
{#pone-0072003-t002-2}
| Oligonucleotide | Sequence | Description |
|---|---|---|
| HOMP5' | 5′-CAA TTA A[CG GCC G]{.ul}AA TAA AAG GAA GCC GAT **ATG AAC AAA ATA TAC CGC ATC**-3′ | Contains an *Eag*I restriction site (underlined) and the sequence encoding the first 7 (seven) amino acids of NhhA (bold type) |
| HOMP3'AN | 5′-TGG AAT [CCA TGG]{.ul} **AAT CGC CAC CCT TCC CTT C**-3′ | Contains a *Nco*I restriction site (underlined) and the reverse complement of sequence 48--61 nucleotides past the end of the *nhhA* open reading frame of PMC21 (bold type) |
| NH3'BG | 5′-GGT C[AG ATC TGT]{.ul} **TTC ATT GTT AGC ACT TGC**-3′ | Contains a *Bgl*II restriction site (underlined) and the reverse complement of sequence encoding amino acids 134 (double underlined) and 49--54 of wild-type PMC21 NhhA (bold type) |
| SO-C | 5′-[GAC GAA ATC AAC GTT]{.ul} **CTT AGC ACT TGC CTG AAC CGT TGC**-3′ | Reverse complement of sequence encoding amino acids 237--241 at the start of the C5 region (underlined) and amino acids 45--52 at the end of the C1 region (bold type) of wild-type NhhA of strain PMC21 |
| SO-D | 5′-[AAC GTT GAT TTC GTC]{.ul} CGC ACT TAC-3′ | Encodes amino acids 237--244 at the start of C5 (underlined indicates reverse complement of primer SO-C) |
| Lic31ext | 5′-CCT TTA GTC AGC GTA TTG ATT TGC G-3′ | Used to mutate poly-G tract of *lgtA* and to amplify mutant *lgtA* for transformation |
| lgtAG3 | 5′-ATC GGT GCG CGC AAT ATA TTC CCC CCC GA CTT TGC CAA TTC ATC-3′ | Used to mutate poly-G tract of *lgtA* |
| Lic16ext | 5′-CGA TGA TGC TGC GGT CTT TTT CCA T-3′ | To amplify mutant *lgtA* for transformation |
Splice-Overlap PCR (8,9) was used to generate pIPSO1: oligonucleotide primers HOMP5' and SO-C were used to amplify constant region 1 (C1), and primers SO-D/HOMP3'AN to amplify constant region 5 (C5). Primers SO-C and SO-D contain complementary sequences. The two products were annealed and subsequently re-amplified using primers HOMP5' and HOMP3'AN. The resulting amplimer, encoding amino acids 1--52 and 337--591 of wild-type NhhA of PMC21, was digested with *Eag*I and *Nco*I, and ligated into pCO14K, generating plasmid pIPSO1.
Unmarked and marked deletion of opc {#s2c}
-----------------------------------
The plasmid pBE501 contains the *opc* gene and flanking regions [@pone.0072003-Olyhoek1]. *Hind*III cuts pBE501 at −41 relative to the ATG start, and *Sty*I cuts 570 bp downstream of that (the gene is 861 bp). pBE501 was digested with *Hind*III/*Sty*I, blunted, and self-ligated, deleting the majority of the gene including the start codon and part of the promoter region. The resulting plasmid was named pIP14. To create a marked deletion, pBE501 was digested with *Sty*I, deleting 327 bp, and the tetracycline resistance determinant was excised from pGEMTetA [@pone.0072003-Warren1] and cloned into the blunted *Sty*I-digested pBE501. The resulting plasmid was named pOpcTet.
Fixing LOS expression {#s2d}
---------------------
In order to fix the expression of the phase-variable *lgtA* gene to "off", so that the L8 immunotype was expressed, the homopolymeric tract of the *lgtA* gene was altered so that only 7 G residues remained in the homopolymeric tract region. This results in a frameshift mutation and no expression of LgtA activity (the wild-type strain, MC58, has 14 G; [@pone.0072003-Jennings1]). Using primers Lic31ext and lgtAG7 in PCR with *Neisseria meningitidis* strain MC58 chromosomal DNA as template, the region encompassing the poly-G tract to be altered was amplified. The lgtAG7 primer incorporated the change in the *lgtA* sequence from 14G to 7G. The resulting amplimer was cloned into pT7Blue (Novagen) to create plasmid pT7lgtAG7. To reconstitute the complete *lgtA* gene so that the plasmid could be used to transform the new allele into *Neisseria meningitidis*, a *Bss*HII fragment from plasmid pMJ1b11 [@pone.0072003-Jennings1] was cloned into the *Bss*HII site of pT7lgtAG7 in the correct orientation. Nucleotide sequence analysis confirmed the correct orientation of the gene and that the sequence segment was identical to the corresponding section of the wild-type *lgtA* gene apart from the alteration of the homopolymeric tract from 14 to 7 G residues. In order to transfer the *lgtAG7* mutation to the chromosome of *Neisseria meningitidis* to make a mutant strain, the plasmid pT7lgtAG7 was linearized and used to transform *Neisseria meningitidis* strain ¢3*lgtA* (containing an *lgtA::kan* mutation, [@pone.0072003-Jennings1]). Transfer of the *lgtAG7* allele to the chromosome in kanamycin-sensitive colonies obtained from the transformation was confirmed by PCR of the relevant section of the *lgtA* gene using primers Lic31ext and Lic16ext, followed by nucleotide sequencing with the same set of primers.
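To illustrate why shortening the tract from 14 to 7 G residues abolishes expression, the toy sketch below (invented placeholder sequences, not the real *lgtA* gene) shows that removing seven bases, which is not a multiple of three, shifts every downstream codon out of frame:

```python
# Toy illustration of a homopolymeric-tract frameshift. The flanking sequences
# are invented placeholders, not the real lgtA sequence.
upstream = "ATGAAA"        # hypothetical start of the reading frame
downstream = "TTTGCCAAT"   # hypothetical coding sequence 3' of the poly-G tract

def codons(seq):
    """Split a sequence into consecutive codons, reading from the first base."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

for n_g in (14, 7):
    orf = upstream + "G" * n_g + downstream
    print(n_g, codons(orf))
# A 7-base difference is not a multiple of 3, so the downstream bases are read
# in a different frame in the two versions, which truncates the encoded protein.
```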
Nm transformation and screening {#s2e}
-------------------------------
The plasmids were linearised by restriction digestion and used to transform *N. meningitidis* using the method described by Janik *et al*. [@pone.0072003-Janik1]. Transformants were selected by overnight incubation at 37°C in 5% CO~2~ on solid media containing antibiotic as appropriate.
Screening for NhhA overexpression and other protein electrophoresis {#s2f}
-------------------------------------------------------------------
Following transformation with linearised pIP52(PMC21), pIP52(PMC21Bgl), or pIPSO1, kanamycin-resistant colonies were selected, subcultured overnight and screened for over-expression of NhhA by separating total cell proteins electrophoretically on duplicate 10% acrylamide gels, followed by Coomassie Blue staining, or were Western blotted to nitrocellulose (BioRad) before blocking in skim milk/PBST, followed by sequential incubation in rabbit polyclonal anti-NhhA sera [@pone.0072003-Peak1], goat anti-rabbit alkaline-phosphatase, and detection with NBT/BCIP substrate (BioRad). The *nhhA* allele was sequenced to confirm replacement of *porA* with the PMC21 or truncated allele. For observing overexpression, sarkosyl-insoluble proteins were separated electrophoretically using a Bis-Tris buffer system and 4--12% precast 8 cm gels with MOPS buffer (Invitrogen), or, for Western immunoblot, proteins were separated using Tris-Acetate 3--8% pre-cast gels (Invitrogen).
Screening for removal of Opc expression {#s2g}
---------------------------------------
Antisera containing Opc-specific polyclonal rabbit antibodies were raised by immunizing rabbits with sarkosyl-insoluble OMCs of strain P6 (Opc-expressing strain). Serum was adsorbed against strain P6ΔOpcA as previously described [@pone.0072003-Power1], removing most antibodies except those recognizing Opc. Following transformation with linearised pIP14 (unmarked deletion construct of *opc*), bacteria were plated at low density. After overnight incubation, colonies were transferred to nitrocellulose (BioRad) and immunoblotted using rabbit Opc-specific sera, and antibody binding was visualized using goat anti-rabbit-Alkaline phosphatase and colorimetric detection with NBT/BCIP. Colonies that had lost Opc reactivity were identified and subcultured, and analysed by immunoblot. Southern blot analysis (using DIG-labelled *Hind*III/*Sty*I fragment or *opc* gene from pBE501 as a probe) was used to confirm that the lack of expression was due to the introduced mutation and not to the inherent phase-variation of this gene, and PCR confirmed the presence of the deleted *opc* allele. Anti-Opc sera were also used to confirm loss of Opc expression following transformation with pOpcTet and selection with tetracycline.
Protein sequencing {#s2h}
------------------
Sarkosyl-insoluble proteins (enriched for outer-membrane proteins) were separated by SDS-PAGE (7.5% acrylamide) and transferred to PVDF membrane. The PVDF membrane was stained with Coomassie Blue and the high molecular weight protein excised and subjected to N-terminal sequencing in a PE Biosystems 492cLC protein sequencer.
Culture and preparation of OMVs {#s2i}
-------------------------------
A vial of frozen *N. meningitidis* (recombinant or not) was thawed and streaked onto modified Frantz medium agar plate which was then incubated at 37°C for 18 h. Colonies were resuspended and added to a flask containing modified Frantz medium supplemented with the appropriate antibiotic, and incubated at 37°C for 16 h under shaking. The cells were separated from the culture broth by centrifugation at 5,000 *g* at 4°C for 15 min. OMVs were isolated using deoxycholate as described previously [@pone.0072003-Fredriksen1].
Mice and immunizations {#s2j}
----------------------
Outbred OF1 mice (Charles River, Lyon, female, 6--8 wks of age, also known as CF1) received three injections with OMVs via intramuscular route on days 0, 21 and 28. With each injection of 50 µl, 10 µg of antigen formulated onto 100 µg of Al^3+^ (aluminum hydroxide) was administered. Control mice received adjuvant only. Blood samples were collected 14 days after the third injection.
### Ethics {#s2j1}
Mice were provided with food and water ad libitum and were monitored for adverse events following injections. Mice were anaesthetized prior to collection of blood from the carotid artery, followed by cervical dislocation under continuing anaesthesia. The experiments complied with the relevant national guidelines of Belgium and institutional policies of GlaxoSmithKline Biologicals, and all protocols were approved by the GSK Biologicals ethics committee.
Antibody assays {#s2k}
---------------
The NhhA derived from strain H44/76 was expressed and purified from *E. coli* as a C-terminally His-tagged protein. ELISA plates were coated with NhhA. The assays were performed as described previously [@pone.0072003-Kortekaas1].
Complement-dependent bactericidal antibody assays {#s2l}
-------------------------------------------------
Bactericidal assays were performed as described previously [@pone.0072003-Weynants1]. Briefly, bacteria from an overnight culture on Mueller Hinton agar plates were inoculated in tryptic soy broth with an iron chelator and grown in shaking flasks for 3 h at 37°C. The culture was diluted in order to reach an OD~600 nm~ of 0.4 (bacterial suspension). The sera were heat-inactivated (40 min at 56°C) and subsequently diluted in PBS-glucose. In microtiter plates, diluted test serum was mixed with baby-rabbit complement and bacterial suspension. Serial dilutions of test sera were treated similarly. Controls included bacteria plus complement, bacteria plus heat-inactivated complement, and test serum plus bacteria plus heat-inactivated complement. The microtiter plates were sealed and incubated while shaking (520 rpm) for 75 min at 37°C without CO~2~. Agar was added to each well and, after an overnight incubation at 33°C in 5% CO~2~, the colonies were counted. Bactericidal titers are defined as the reciprocal of the serum dilution yielding 50% killing. Additive or synergistic effects of antibodies directed against different over-expressed minor OMPs were studied by using sera pooled from mice immunized with each vaccine preparation. For mixing experiments, equal volumes of pooled sera of each relevant treatment group were combined and subsequently tested in the bactericidal assay.
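As an aside, the 50% end-point is commonly interpolated between the two serial dilutions that bracket 50% survival. The sketch below is a minimal illustration of that calculation with made-up CFU counts; it is not the assay software used in this study.

```python
import math

def sba_titer(dilutions, survivors, control_cfu):
    """Estimate a serum bactericidal titer: the reciprocal serum dilution giving
    50% killing relative to the complement-only control.

    dilutions   -- reciprocal serum dilutions in increasing order, e.g. [8, 16, ...]
    survivors   -- CFU counted at each dilution
    control_cfu -- CFU in the control with complement but no test serum
    """
    kill = [1 - s / control_cfu for s in survivors]   # fraction killed at each dilution
    pairs = zip(zip(dilutions, kill), zip(dilutions[1:], kill[1:]))
    for (d1, k1), (d2, k2) in pairs:
        if k1 >= 0.5 > k2:  # killing drops below 50% between these two dilutions
            # interpolate on a log2 scale, matching the two-fold dilution series
            frac = (k1 - 0.5) / (k1 - k2)
            return 2 ** (math.log2(d1) + frac * (math.log2(d2) - math.log2(d1)))
    # killing never crossed 50%: report the lowest or highest dilution tested
    return dilutions[0] if kill[0] < 0.5 else dilutions[-1]

# Hypothetical two-fold dilution series with 200 CFU in the control well.
print(round(sba_titer([8, 16, 32, 64, 128, 256], [5, 12, 40, 70, 150, 190], 200)))  # ~83
```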
Results {#s3}
=======
Expression Strains {#s3a}
------------------
In order to remove confounding factors in analysis of immune responses to NhhA, a number of phase-variable and/or immunodominant components of the meningococcus were fixed or abrogated: the starting strain was ¢3, an acapsulate, Opa-negative derivative of serogroup B strain MC58 [@pone.0072003-Virji1]. Phase variation of lacto-*N*-neotetraose in LOS was abolished by making an unmarked frameshift mutation in the *lgtA* gene, resulting in a strain expressing an L8 immunotype LOS structure. Transformation with the NhhA overexpression constructs resulted in deletion of *porA*, and subsequent manipulation (as described in materials and methods, and below) abolished Opc expression.
NhhA Overexpression constructs {#s3b}
------------------------------
Our previous observations suggested that NhhA is not strongly expressed *in vitro*, so plasmids were constructed containing alleles to express NhhA at high levels when introduced into the *N. meningitidis* chromosome. As *nhhA* exhibits some sequence variation (mostly confined to the region encoding the predicted amino-terminal 200 amino acids of the predicted mature protein), alleles were also constructed with some or all of this region deleted (see [materials and methods](#s2){ref-type="sec"}).
The strategy of gene replacement was used: the recombinant *nhhA* allele was placed under the control of the strong *porA* promoter, using plasmid pCO14K. The plasmid pCO14K contains *porA* (promoter and coding region) with the selectable kanamycin resistance determinant downstream [@pone.0072003-vanderVoort1]. Alleles of *nhhA* were cloned into convenient restriction sites in pCO14K downstream of the *porA* promoter, replacing the majority of the *porA* gene ([Fig. 1](#pone-0072003-g001){ref-type="fig"}). Transformation of *N. meningitidis* with these plasmids results in reciprocal exchange by homologous recombination resulting in expression of the *nhhA* gene under the control of the *porA* promoter, and abrogation of *porA* expression.
For overexpression of the wild-type allele, *nhhA* was amplified from serogroup C strain PMC21 and cloned to create plasmid pIP52(PMC21). Plasmids with deletions of the most variable regions were also constructed. Plasmid pIP52(PMC21Bgl) contains a recombinant allele of *nhhA*, encoding amino acids 1--54 and 134--591 of NhhA~PMC21~, a truncation of 81 amino acids relative to wild-type. This amino acid sequence, NhhA~Bgl~, has the majority of the V1 region (most variable), all of the V2 and C2 regions, and part of the C3 region removed relative to the parental NhhA~PMC21~ protein ([Fig. 1](#pone-0072003-g001){ref-type="fig"}). Plasmid pIPSO1, constructed by a PCR strategy, encodes a further truncated allele, *nhhA~SO1~*, encoding amino acids 1--52 and 337--591 (*i.e.* lacking 285 amino acids relative to wild-type NhhA~PMC21~) with V1--V3 and C2--C4 deleted. ([Fig. 1](#pone-0072003-g001){ref-type="fig"}).
Transformation and Over-Expression analysis {#s3c}
-------------------------------------------
The plasmids pIP52(PMC21), pIP52(PMC21Bgl), and pIPSO1 were linearised and individually transformed into *N. meningitidis* strain 7G2 (acapsulate, Opa^−^, L8 immunotype fixed expression, see [table 1](#pone-0072003-t001){ref-type="table"}). In each case, approximately 20--30 kanamycin-resistant clones were screened to find one clone in which the double cross-over integration event had occurred to include both the kanamycin resistance determinant and the recombinant *nhhA* allele, replacing *porA* and expressing NhhA at a higher level compared with the parental strain. To confirm overexpression of NhhA and replacement of *porA* in resulting kanamycin-resistant clones, outer-membrane proteins were prepared and separated electrophoretically, prior to visualization of proteins by Coomassie stain.
Deletion of opc {#s3d}
---------------
In order to further reduce confounding factors in our analysis of the vaccine potential of NhhA, we abolished Opc expression from our strains. Plasmids pIP14 and pOpcTet contain a deleted allele of *opc*, the latter with insertion of *tetM* resistance cassette (see methods). Opc is an outer membrane protein that is immunodominant, but is not present in all strains [@pone.0072003-Wiertz1]. Plasmid pIP14 or pOpcTet was linearised and transformed into strains over-expressing NhhA variants. Colonies were screened by immunoblot using Opc-specific polyclonal rabbit sera (results not shown). Colonies that did not express Opc were analysed by immunoblot, PCR, and Southern blot to confirm that the lack of expression was due to the introduced mutation and not to the inherent phase-variation of this gene (results not shown). Thus strains used for immunogenicity studies overexpressed NhhA variants, expressed no PorA or Opc, had LOS fixed in the L8 immunotype, and these characteristics were introduced into a previously defined acapsulate, Opa-negative strain [@pone.0072003-Virji1].
Analysis of high molecular weight NhhA species {#s3e}
----------------------------------------------
As NhhA is a surface-expressed protein [@pone.0072003-Peak1], each of the overexpressed NhhA proteins was engineered to include the signal peptide (amino acids 1--51) predicted by SIGNALP (not shown). In each of the resulting strains, NhhA expression was upregulated relative to the non-transformed parental strain, and PorA expression was abolished. In order to determine the level of recombinant NhhA overexpression relative to the endogenous NhhA, serial two-fold dilutions and Western immunoblot of sarkosyl-insoluble proteins were conducted with detection by rabbit polyclonal anti-NhhA sera [@pone.0072003-Peak1]. Expression of NhhA is increased at least 25-fold ([Fig. 2](#pone-0072003-g002){ref-type="fig"}) for the full-length NhhA-overexpressing strain P6ΔOpcA. Reactivity with NhhA-specific sera is equivalent between 25 µg sarkosyl-insoluble protein of strain 7G2 and <1 µg protein of strain P6ΔOpcA.
![Figure 2.](pone.0072003.g002){#pone-0072003-g002}
It is possible that the high molecular weight species expressed by NhhA overexpressing strains represent either stable multimers of NhhA ([Fig. 3](#pone-0072003-g003){ref-type="fig"}), or complexes of NhhA with other protein(s), or other proteins alone. To confirm the identity of the high molecular weight protein, and to confirm the cleavage of the predicted signal peptide, the overexpressed proteins were N-terminally sequenced. Amino acid sequencing of the two truncated NhhA proteins indicated in each case that the high molecular weight complex is composed essentially of NhhA. The N-terminal sequence XXETDLTSVGT was obtained for protein isolated from PΔ5ΔOpcA, which corresponds to predicted amino acids 52 to 62 of the truncated non-mature allele. The N-terminal sequence XNVXFVRTY was obtained for the PSO1.17A-derived protein, corresponding to amino acids 52--60. These data confirm the presence and cleavage site of the long signal peptide (aa 1--51), and that it is cleaved when expressed in *N. meningitidis*. The predicted molecular weight of the mature forms (*i.e.* after cleavage of the predicted signal peptide) of NhhA~PMC21~, NhhA~Bgl~, and NhhA~SO1~ is 56.6, 47.6, and 36.7 kDa, respectively. We had previously noted that NhhA migrates on SDS-PAGE at a size equivalent to >250 kDa [@pone.0072003-Peak1], and NhhA has recently been described as a trimer when expressed in *E. coli* [@pone.0072003-Scarselli1]. Overexpression in *N. meningitidis* of NhhA~PMC21~, and of the truncated proteins NhhA~Bgl~ and NhhA~SO1~, results in migration at a size implying that NhhA exists as a stable multimeric form ([Fig. 2](#pone-0072003-g002){ref-type="fig"}) and that the truncations did not remove any structural features necessary for oligomerisation or stability.
{ref-type="table"} for strain and protein descriptions.](pone.0072003.g003){#pone-0072003-g003}
Immunogenicity {#s3f}
--------------
In order to assess the antibody response to the NhhA variants, outer membrane vesicles (OMVs) were prepared. The OMV preparations were adsorbed onto Al(OH)~3~ and injected into mice on days 0, 21 and 28. On day 42, the mice were bled and sera were pooled from the animals in each group. Each vaccine preparation elicited a similar antibody response, as assessed by ELISA using purified recombinant NhhA~H44/76~ ([Fig. 4](#pone-0072003-g004){ref-type="fig"}). OMVs produced from an NhhA-null strain did not elicit anti-NhhA antibodies in mice (data not shown).
{#pone-0072003-g004}
As it has previously been observed that over-expression of more than one minor protein is required for high-titer bactericidal activity [@pone.0072003-Weynants1], sera from mice inoculated with OMVs obtained from strains over-expressing the different NhhA variants (NhhA-OMVs) were mixed with sera from mice immunized with OMVs produced from a TbpA over-expressing strain (TbpA-OMVs) [@pone.0072003-Weynants1] and serum bactericidal assays were performed against H44/76 and CU385 ([Table 3](#pone-0072003-t003){ref-type="table"}).
10.1371/journal.pone.0072003.t003
###### Impact of deletion of variable regions of NhhA on the induction of complement-mediated killing by bactericidal antibodies in mice, in synergy with anti-TbpA-OMV sera (pooled sera).
{#pone-0072003-t003-3}
Mix of sera[b](#nt102){ref-type="table-fn"} H44/76 SBA[a](#nt101){ref-type="table-fn"} CU385 SBA[a](#nt101){ref-type="table-fn"}
--------------------------------------------- -------------------------------------------- -------------------------------------------
a-TbpA OMVs+control negative pooled sera 778 532
a-TbpA OMVs+a- NhhA~PMC21~ OMVs 2595[a](#nt101){ref-type="table-fn"} 1438
a-TbpA OMVs+a- NhhA~Bgl~ OMVs 4383 2891
a-TbpA OMVs+a- NhhA~SO1~ OMVs 1568 742
SBA = Serum Bactericidal Assay, Geometric mean titers for 50% killing.
NhhA~PMC21~, wild type NhhA; NhhA~Bgl~, NhhA without variable regions 1&2; NhhA~SO1~, NhhA without variable regions 1 to 4.
The mix of pooled sera from mice immunized with TbpA-OMVs and NhhA~PMC21~-OMVs had higher bactericidal titers than mixed sera from control mice (immunized with adjuvant alone) and mice immunized with TbpA-OMVs. These results are in line with previous findings showing an additive or synergistic impact of antibodies against more than one minor component on bactericidal activity [@pone.0072003-Weynants1]. The highest bactericidal titers were measured for the mix of anti-TbpA-OMV and anti-NhhA~Bgl~-OMV sera (NhhA~Bgl~ lacking the variable regions V1 and V2). The enhanced bactericidal activity of NhhA~Bgl~ was seen against both strains used in the SBA. By comparison, deletion of all four variable regions of NhhA (NhhA~SO1~) clearly had a negative impact on the induction of bactericidal antibodies, as the bactericidal titers measured for the mixed anti-TbpA sera were lower than those obtained with either the wild-type NhhA~PMC21~ or NhhA~Bgl~.
Discussion {#s4}
==========
Many outer membrane proteins of *N. meningitidis* exhibit strain-to-strain sequence variation, presumably as a result of selection by the host immune system. This is one of the reasons no effective cross-protective vaccine is available. The most variable regions of PorA, for example, are in the longest, surface-exposed loops, which are highly immunogenic and in part responsible for the strain specificity of previous vaccine formulations. NhhA has a number of variable regions, but the longest and most variable region (V1) is confined to the amino-terminus of the mature protein ([@pone.0072003-Peak1], [Fig. 1](#pone-0072003-g001){ref-type="fig"}). It has previously been noted that sera from meningococcal disease patients recognize NhhA. Van Ulsen *et al.* [@pone.0072003-vanUlsen1] suggest that the reactivity of patient sera implies that cross-reactive antibodies are elicited *in vivo* despite the infecting strain being different in each case [@pone.0072003-vanUlsen1]. Another study indicated that convalescent sera of meningococcal disease patients (age 0.2--4 yrs) also recognise NhhA (referred to as NMB0992, [@pone.0072003-Litt1]). However, whether those antibodies were bactericidal, and which *nhhA* allele was carried by the infecting strain, were not investigated. Healthy carriers also had antibodies that recognized NhhA [@pone.0072003-vanUlsen1], which may be the result of asymptomatic colonisation by *N. meningitidis*. NhhA expression levels vary between strains [@pone.0072003-Peak1], [@pone.0072003-EcheniqueRivera1], but the mechanisms underlying this variable expression are unclear.
We over-expressed the wild-type protein to assess immunogenicity and protective potential against the autologous strain. We also overexpressed two truncated alleles to assess whether the conserved regions could elicit a protective immune response. As a consequence of the method of over-expression, these strains were deficient in PorA production. In addition, we deleted the strongly immunogenic protein Opc, in a strain background that was acapsulate and Opa^−^ and in which phase variation of terminal LOS was abolished. The resulting phenotype, lacking several immunodominant OMPs, enabled assessment of the potential of NhhA for inclusion in a vaccine.
As NhhA is expressed at low levels *in vitro* in many strains, we placed NhhA under the control of the *porA* promoter, resulting in strong expression of NhhA, as previously reported [@pone.0072003-Weynants1]. NhhA shares sequence similarity with other autotransporter proteins [@pone.0072003-Peak1], particularly the adhesins Hia and Hsf of *H. influenzae*. One of the features of autotransporters is a long signal peptide: we confirmed that NhhA has a long signal peptide that is cleaved after amino acid 51. We also observed that each of the overexpressed NhhA proteins and truncations migrated on SDS-PAGE at a size much greater than the predicted molecular weight. This demonstrated that removal of N-terminal domains did not affect trimerisation or stability. The stability and correct trimerisation of the recombinant NhhA described in this study is important, as expression of monomeric NhhA (due to mutation of the translocator domain) results in reduced susceptibility to NhhA-specific bactericidal killing [@pone.0072003-EcheniqueRivera1].
The autotransporters YadA (*Yersinia* sp.) and UspA1 (*Moraxella catarrhalis*) form a membrane-anchored oligomeric "lollipop" [@pone.0072003-Hoiczyk1], and the C-terminal 76 amino acids of Hia form a trimer in the outer membrane and are necessary and sufficient for export of the passenger domain [@pone.0072003-Surana1]. Surana *et al.* also showed that the C-terminal 119 amino acids of NhhA could act to surface-localize the Hap passenger domain [@pone.0072003-Surana1]. More recently, Scarselli *et al.* demonstrated that the transporter domain of NhhA could be further localized to the C-terminal 72 amino acids [@pone.0072003-Scarselli1].
By analogy with the structure of related autotransporters such as YadA [@pone.0072003-Koretke1], the N-terminal domain of the mature NhhA protein is predicted to form the tip region of a filamentous structure, and the most variable region of *nhhA* encodes this N-terminal domain. Deletion of the variable regions (V1 to V2 or V1 to V4) did not reduce the immunogenicity of the truncated variants when presented in OMVs obtained from NhhA over-expressing strains, as assessed by ELISA against wild-type recombinant NhhA. This indicates that proteins of the OMVs are processed in such a way as to elicit antibodies to epitopes other than the variable regions, and that these epitopes are accessible on recombinant *E. coli*-derived NhhA. Previous studies using another vaccine candidate, FrpB, also demonstrated that removal of immunogenic domains does not reduce production of antibodies recognizing wild-type FrpB when assessed by ELISA [@pone.0072003-Kortekaas2]. Crucially, in the FrpB study, the conserved cryptic epitopes were not accessible when the wild-type protein was expressed.
To assess whether antibodies raised against truncated NhhA could access the same epitopes on the wild-type protein, bactericidal assays were performed. This was done by mixing anti-NhhA-OMV sera with anti-TbpA-OMV sera, because it had previously been observed that NhhA-OMVs alone elicit reduced or undetectable serum bactericidal antibody titers in mice [@pone.0072003-Weynants1]. When high levels of NhhA are expressed in a target strain, however, polyclonal NhhA-specific sera are bactericidal [@pone.0072003-EcheniqueRivera1]. The results presented here confirmed the previous observation that antibodies directed against different minor OMPs work additively or synergistically to mediate complement-dependent killing of bacteria [@pone.0072003-Weynants1], as the bactericidal titers measured on the pooled sera from mice immunized with TbpA-OMVs alone were systematically lower than the titers observed for the different NhhA-TbpA mixed sera.
In contrast to the results obtained for FrpB truncations, removal of variable regions 1 and 2 of NhhA did not reduce the bactericidal activity induced by OMVs containing the truncated NhhA~Bgl~. Indeed, the bactericidal titer induced by NhhA~Bgl~-OMVs against both tested strains was higher than that elicited by NhhA~PMC21~-OMVs expressing wild-type, full-length NhhA. Strains H44/76 and CU385, despite being geographically distinct (from epidemics in Norway and Cuba, respectively), express NhhA proteins that differ by only one amino acid, located in the signal peptide. The differences in bactericidal titers against these two strains may be explained by differences in TbpA (the amino acid sequences are 92% identical), by differences in other minor antigens within the OMVs, or by differences in the levels of capsular polysaccharide produced. Nevertheless, in both cases the highest titers were obtained with the NhhA~Bgl~-OMVs, confirming that antibodies against this truncated protein can act synergistically with anti-TbpA antibodies to mediate complement activation and killing. As removal of the variable regions did not reduce functional antibody responses, it is tempting to speculate that this truncated variant would elicit a response against strains expressing NhhA whose sequence differs within the variable domains, but this remains to be formally examined.
Further truncation of NhhA (NhhA~SO1~) resulted in a reduced bactericidal response against wild-type target strains, but not in reduced immunogenicity as assessed by ELISA. A similar effect has been reported when variable loops were selectively deleted from the Envelope protein of HIV: removal of variable loops resulted in an immune response capable of neutralizing strains containing variant loops, whereas further truncations abolished neutralizing activity [@pone.0072003-Yang1]. The fact that deletion of the V1 to V2 or V1 to V4 regions of NhhA in our study does not affect the immunogenicity of the OMV vaccine (as observed by ELISA) but does affect the induction of bactericidal antibodies suggests either that some protective epitopes are localized in the C3 domains or that the larger truncation of the N-terminal part of the protein impacts the conformation of the NhhA passenger domain. Another possibility is that removal of these variable domains of NhhA reveals another protein that is responsible for the synergistic protection, although this is considered less likely. It is also notable that the larger truncation of NhhA involves the first of two consensus sequences shared between N-CAMs and NhhA, domains which may be involved in the potential adhesin function of NhhA [@pone.0072003-Scarselli2]. NhhA also inhibits complement-mediated killing via vitronectin binding [@pone.0072003-Griffiths1], but which domains of the protein mediate this has not yet been elucidated.
Taken together, our studies suggest that removing the most variable region of this protein still allows an effective antibody response to be elicited. In this study, we demonstrated that an OMV vaccine incorporating NhhA with the most variable region deleted elicits a higher bactericidal antibody titer than one containing the wild-type protein. Removing this region results in an antigen with the potential to elicit responses effective against strains with variant NhhA sequences; thus truncated NhhA represents a promising candidate for inclusion in an OMV-based vaccine formulation for the prevention of meningococcal disease.
Lyle Carrington, Jayde Gawthorne and Shan Ku are thanked for technical assistance.
[^1]: **Competing Interests:**VEW, CF, and JTP are employees of GlaxoSmithKline. JTP was an employee of GSK at the time of the study. IRP and MPJ are named inventors on patents relating to this work and therefore have a potential financial interest. The current or former employment of authors, or potential interest through patent inventorships of authors, does not alter our adherence to all the PLOS ONE policies on sharing data and materials.
[^2]: Conceived and designed the experiments: IRP MPJ JTP. Performed the experiments: IRP YNS VEW CF. Wrote the paper: IRP MPJ VEW.
[^3]: Current address: Department of Microbiology and Immunology, The University of Melbourne, Melbourne, Victoria, Australia
[^4]: Current address: Crucell, Leiden, Holland
F-Mart: Is It Too Soon To Judge?
In July 2005 the New York Mets signed 16-year-old Fernando Martinez. F-Mart was given a signing bonus of $1.4 million and from that point on was labeled the future of the New York Mets. Omar Minaya said of F-Mart when he signed:
“What we saw in [Martinez] was a 16-year-old kid with power, great ability and great character, above everything else.”
F-Mart was considered a “five tool player”, meaning that he could hit for average, hit for power, run the bases with excellent speed, field at a high level, and throw with an amazing arm. For the last four years other clubs in the majors have wanted the young outfielder in trades, but the Mets put F-Mart on untouchable status. There was no player nor any situation that would warrant a trade of the young outfielder.
At the age of 19 he was assigned to the Mets’ Double-A team. There were some concerns, however: Martinez has proven to be injury prone. In 2007 he missed part of the minor league season with an injured wrist, which resulted in surgery to remove his hamate bone. In 2008 Fernando suffered another injury, this time injuring his hamstring twice in the same year. Early this year, playing winter ball, F-Mart hurt his elbow, though no surgery was required, just some rest. Now in 2009 he is on the disabled list with a bad knee.
F-Mart started the year in AAA, playing for the Bisons. Before being called up on May 26 he was batting .291 with 8 home runs, and at the time he was leading the International League with 25 extra-base hits in just 42 games. He knew the eyes of the baseball world were on him when the Mets, having no choice due to the injuries they had suffered over the first half of the season, called him up.
So far, 2009 has not been a great year for F-Mart. He has played 29 games, with 91 at bats, and is currently hitting .176 with 14 strikeouts and an on-base percentage of .242.
I think it would be fair to say that F-Mart has been unimpressive in his early career with the Mets. He has looked over-matched at the plate for most of his appearances. Recently, he struck out twice in the same game by swinging at strike three on pitches thrown at his eyes. His work in the field also needs improvement, as he does not always get a good read on the ball, though he does recover quickly to make the out. Plus, I don’t know if the maturity is there yet either, to be a big leaguer. I hate to keep bringing it up, but let’s not forget the pop up that he did not run out, which the opposing catcher dropped. Had F-Mart been running to first, he would have been on base.
Personally, I do think it’s too early to judge him. I’ve been saying for a couple of years now that the Mets have, for some reason, rushed him through the system. It seems they wanted to justify as soon as possible turning down the trades for big names that other teams have offered. This season, with all the injuries the team has suffered and the lack of depth in AAA, more eyes were on him than there otherwise would have been, and more pressure on him to perform well. He has also lost development time to the injuries he has suffered over his short career so far, which is something to keep in mind. If you remember, the Mets originally did not plan to bring him up this early this year; they were talking after the All-Star game at the earliest, not the end of May.
The Mets have a habit of rushing their top prospects through the system, and I think that is what’s happening to F-Mart. He should have been down in AAA playing for the Bisons this season. I know seeing him strike out is frustrating considering how long he has been touted as the future of this team, but we have to remember that he’s just 20 years old, has not had a lot of minor league experience, and is still developing. I know some are calling him a bust, but it’s too early to make that judgment right now. I do think F-Mart can be a good player at the major league level, but it will take time and patience.
{
"matches": "absolute-replaced-height-001-ref.xht",
"assert": "If the height, top, bottom and vertical margins of an absolute positioned inline replaced element are all 'auto', then its use value is determined for inline replaced element, its 'top' is given by its static position and both 'margin-top' and 'margin-bottom' used values are '0'. In this test, the 'height' and 'width' of the inline replaced element are 'auto' and the element also has an intrinsic height, so the intrinsic height and the intrinsic width become the used values.",
"help": [
"http://www.w3.org/TR/CSS21/visudet.html#abs-replaced-height"
],
"test_case": {
"tag": "body",
"children": [
{
"tag": "div",
"children": [
{
"tag": "img",
"style": {
"margin_bottom": "auto",
"position": "absolute",
"margin_top": "auto"
}
}
],
"style": {
"display": "block",
"width": "1in",
"border_top_style": "solid",
"border_bottom_color": "orange",
"height": "15px",
"border_top_width": "initial",
"border_top_color": "orange",
"border_bottom_style": "solid",
"unicode_bidi": "embed",
"border_bottom_width": "initial"
}
}
],
"style": {
"margin_bottom": "8px",
"margin_top": "8px",
"margin_right": "8px",
"margin_left": "8px",
"unicode_bidi": "embed",
"display": "block"
}
}
} |
## make-library-glue.pkg
#
# Build much of the code required
# to make a C library like Gtk or OpenGL
# available at the Mythryl level, driven
# by an xxx-construction.plan file.
#
# The format of xxx-construction.plan files
# is documented in Note[1] at bottom of file.
#
# make-library-glue.pkg really shouldn't be in
# standard.lib because it is not of general interest,
# but at the moment that is the path of least
# resistance. -- 2013-01-12 CrT
# Compiled by:
# src/lib/std/standard.lib
stipulate
package paf = patchfile; # patchfile is from src/lib/make-library-glue/patchfile.pkg
package pfj = planfile_junk; # planfile_junk is from src/lib/make-library-glue/planfile-junk.pkg
package pfs = patchfiles; # patchfiles is from src/lib/make-library-glue/patchfiles.pkg
package plf = planfile; # planfile is from src/lib/make-library-glue/planfile.pkg
package sm = string_map; # string_map is from src/lib/src/string-map.pkg
#
Pfs = pfs::Patchfiles;
herein
api Make_Library_Glue
{
Field = { fieldname: String,
filename: String,
lines: List(String), # Not exported.
line_1: Int,
line_n: Int,
used: Ref(Bool)
};
Fields = sm::Map( Field );
State;
#
Paths = { construction_plan : String, # E.g. "src/opt/gtk/etc/gtk-construction.plan"
lib_name : String, # E.g. "opengl" -- Must match the #define CLIB_NAME "opengl" line in .../src/opt/xxx/c/in-main/libmythryl-xxx.c
# # Files which will be patched:
xxx_client_api : String, # E.g. "src/opt/gtk/src/gtk-client.api"
xxx_client_g_pkg : String, # E.g. "src/opt/gtk/src/gtk-client-g.pkg"
xxx_client_driver_api : String, # E.g. "src/opt/gtk/src/gtk-client-driver.api"
xxx_client_driver_for_library_in_c_subprocess_pkg : String, # E.g. "src/opt/gtk/src/gtk-client-driver-for-library-in-c-subprocess.pkg"
xxx_client_driver_for_library_in_main_process_pkg : String, # E.g, "src/opt/gtk/src/gtk-client-driver-for-library-in-main-process.pkg"
mythryl_xxx_library_in_c_subprocess_c : String, # E.g. "src/opt/gtk/c/in-sub/mythryl-gtk-library-in-c-subprocess.c"
libmythryl_xxx_c : String, # E.g. "src/opt/gtk/c/in-main/libmythryl-gtk.c"
section_libref_xxx_tex : String # E.g., "doc/tex/section-libref-gtk.tex";
};
Builder_Stuff
=
{
path: Paths,
#
maybe_get_field: (Fields, String) -> Null_Or(String),
get_field: (Fields, String) -> String,
get_field_location: (Fields, String) -> String,
#
build_table_entry_for_'libmythryl_xxx_c': Pfs -> (String, String) -> Pfs,
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c': Pfs -> String -> Pfs,
#
build_fun_declaration_for_'xxx_client_driver_api': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
#
build_fun_declaration_for_'xxx_client_api': Pfs -> { fn_name: String, fn_type: String, api_doc: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg': Pfs -> { fn_name: String, c_fn_name: String, fn_type: String, libcall: String, result_type: String } -> Pfs,
to_xxx_client_driver_api: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_c_subprocess_pkg: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_main_process_pkg: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_funs: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_types: Pfs -> String -> Pfs,
to_xxx_client_api_funs: Pfs -> String -> Pfs,
to_xxx_client_api_types: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_trie: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_table: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_apitable: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_libtable: Pfs -> String -> Pfs,
custom_fns_codebuilt_for_'libmythryl_xxx_c': Ref(Int),
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c': Ref(Int),
callback_fns_handbuilt_for_'xxx_client_g_pkg': Ref(Int),
note__section_libref_xxx_tex__entry
:
Pfs
->
{ fields: Fields,
fn_name: String, # E.g. "make_window"
fn_type: String, # E.g. "Session -> String"
url: String, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
libcall: String # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
}
->
Pfs
};
Custom_Body_Stuff = { fn_name: String, libcall: String, libcall_more: String, to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs, path: Paths };
Custom_Body_Stuff2 = { fn_name: String, libcall: String, libcall_more: String, to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs, path: Paths };
Plugin = LIBCALL_TO_ARGS_FN (String -> List(String))
#
| BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (String, (String, Int, String) -> String)
| BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (String, (String, Int, String) -> String)
#
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (String, Pfs -> Custom_Body_Stuff -> Pfs)
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (String, Pfs -> Custom_Body_Stuff2 -> Pfs)
#
| FIGURE_FUNCTION_RESULT_TYPE (String, String -> String)
#
| DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (String, String)
| DO_COMMAND_TO_STRING_FN (String, String)
#
| CLIENT_DRIVER_ARG_TYPE (String, String)
| CLIENT_DRIVER_RESULT_TYPE (String, String)
;
make_library_glue: Paths -> List(plf::Paragraph_Definition(Builder_Stuff)) -> List(Plugin) -> Void;
};
end;
stipulate
package fil = file__premicrothread; # file__premicrothread is from src/lib/std/src/posix/file--premicrothread.pkg
package lms = list_mergesort; # list_mergesort is from src/lib/src/list-mergesort.pkg
package iow = io_wait_hostthread; # io_wait_hostthread is from src/lib/std/src/hostthread/io-wait-hostthread.pkg
package paf = patchfile; # patchfile is from src/lib/make-library-glue/patchfile.pkg
package pfj = planfile_junk; # planfile_junk is from src/lib/make-library-glue/planfile-junk.pkg
package pfs = patchfiles; # patchfiles is from src/lib/make-library-glue/patchfiles.pkg
package plf = planfile; # planfile is from src/lib/make-library-glue/planfile.pkg
package psx = posixlib; # posixlib is from src/lib/std/src/psx/posixlib.pkg
package sm = string_map; # string_map is from src/lib/src/string-map.pkg
#
Pfs = pfs::Patchfiles;
#
exit_x = winix__premicrothread::process::exit_x;
=~ = regex::(=~);
sort = lms::sort_list;
chomp = string::chomp;
tolower = string::to_lower;
uniquesort = lms::sort_list_and_drop_duplicates;
fun isfile filename
=
psx::stat::is_file (psx::stat filename) except _ = FALSE;
#
fun die_x message
=
{ print message;
exit_x 1;
};
# The following are all duplicates of definitions in
# src/app/makelib/main/makelib-g.pkg
# -- possibly a better place should be found
# for them:
# Convert src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
# to mythryl-xxx-library-in-c-subprocess.c
# and such:
#
fun basename filename
=
case (regex::find_first_match_to_ith_group 1 .|/([^/]+)$| filename)
THE x => x;
NULL => filename;
esac;
# Convert src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
# to src/opt/xxx/c/in-sub
# and such:
#
fun dirname filename
=
case (regex::find_first_match_to_ith_group 1 .|^(.*)/[^/]+$| filename)
THE x => x;
NULL => "."; # This follows linux dirname(1), and also produces sensible results.
esac;
# Drop leading and trailing
# whitespace from a string.
#
fun trim string
=
{ if (string =~ ./^\\s*$/)
#
"";
else
# Drop trailing whitespace:
#
string = case (regex::find_first_match_to_ith_group 1 ./^(.*\\S)\\s*$/ string)
THE x => x;
NULL => string;
esac;
# Drop leading whitespace:
#
string = case (regex::find_first_match_to_ith_group 1 ./^\\s*(\\S.*)$/ string)
THE x => x;
NULL => string;
esac;
string;
fi;
};
fun print_strings [] => printf "[]\\n";
print_strings [ s ] => printf "[ \\"%s\\" ]\\n" s;
print_strings (s ! rest)
=>
{ printf "[ \\"%s\\"" s;
apply (\\\\ s = printf ", \\"%s\\"" s) rest;
printf "]\\n";
};
end;
herein
# This package is invoked in:
#
# src/opt/gtk/sh/make-gtk-glue
# src/opt/opengl/sh/make-opengl-glue
package make_library_glue:
Make_Library_Glue
{
# Field is a contiguous sequence of lines
# all with the same linetype field:
#
# foo: this
# foo: that
#
# Most fields will be single-line, but this format
# supports conveniently including blocks of code,
# such as complete function definitions.
#
# We treat a field as a single string containing
# embedded newlines, stripped of the linetype field
# and the colon.
#
Field = { fieldname: String,
filename: String,
lines: List(String), # Not exported.
line_1: Int,
line_n: Int,
used: Ref(Bool)
};
Fields = sm::Map( Field );
State = { line_number: Ref(Int), # Exported as an opaque type.
fd: fil::Input_Stream,
fields: Ref( sm::Map( Field ))
};
Paths = { construction_plan : String,
lib_name : String, # E.g. "xxx". Must match the #define CLIB_NAME "xxx" line in src/opt/xxx/c/in-main/libmythryl-xxx.c
#
xxx_client_api : String,
xxx_client_g_pkg : String,
xxx_client_driver_api : String,
xxx_client_driver_for_library_in_c_subprocess_pkg : String,
xxx_client_driver_for_library_in_main_process_pkg : String,
mythryl_xxx_library_in_c_subprocess_c : String,
libmythryl_xxx_c : String,
section_libref_xxx_tex : String
};
Builder_Stuff
=
{
path: Paths,
#
maybe_get_field: (Fields, String) -> Null_Or(String),
get_field: (Fields, String) -> String,
get_field_location: (Fields, String) -> String,
#
build_table_entry_for_'libmythryl_xxx_c': Pfs -> (String, String) -> Pfs,
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c': Pfs -> String -> Pfs,
#
build_fun_declaration_for_'xxx_client_driver_api': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
#
build_fun_declaration_for_'xxx_client_api': Pfs -> { fn_name: String, fn_type: String, api_doc: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg': Pfs -> { fn_name: String, c_fn_name: String, fn_type: String, libcall: String, result_type: String } -> Pfs,
to_xxx_client_driver_api: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_c_subprocess_pkg: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_main_process_pkg: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_funs: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_types: Pfs -> String -> Pfs,
to_xxx_client_api_funs: Pfs -> String -> Pfs,
to_xxx_client_api_types: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_trie: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_table: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_apitable: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_libtable: Pfs -> String -> Pfs,
custom_fns_codebuilt_for_'libmythryl_xxx_c': Ref(Int),
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c': Ref(Int),
callback_fns_handbuilt_for_'xxx_client_g_pkg': Ref(Int),
note__section_libref_xxx_tex__entry
:
Pfs
->
{ fields: Fields,
fn_name: String, # E.g. "make_window"
fn_type: String, # E.g. "Session -> String"
url: String, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
libcall: String # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
}
->
Pfs
};
Custom_Body_Stuff = { fn_name: String, libcall: String, libcall_more: String, to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs, path: Paths };
Custom_Body_Stuff2 = { fn_name: String, libcall: String, libcall_more: String, to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs, path: Paths };
Plugin = LIBCALL_TO_ARGS_FN (String -> List(String))
#
| BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (String, (String, Int, String) -> String)
| BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (String, (String, Int, String) -> String)
#
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (String, Pfs -> Custom_Body_Stuff -> Pfs)
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (String, Pfs -> Custom_Body_Stuff2 -> Pfs)
#
| FIGURE_FUNCTION_RESULT_TYPE (String, String -> String)
#
| DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (String, String)
| DO_COMMAND_TO_STRING_FN (String, String)
#
| CLIENT_DRIVER_ARG_TYPE (String, String)
| CLIENT_DRIVER_RESULT_TYPE (String, String)
;
#
fun make_library_glue (path: Paths) (paragraph_definitions: List(plf::Paragraph_Definition(Builder_Stuff))) (plugins: List(Plugin))
=
{
note_plugins plugins;
plan = plf::read_planfile paragraph_defs path.construction_plan;
pfs = plf::map_patchfiles_per_plan builder_stuff pfs plan;
pfs = write_section_libref_xxx_tex_table pfs (.fn_name, .libcall, to_section_libref_xxx_tex_apitable);
pfs = write_section_libref_xxx_tex_table pfs (.libcall, .fn_name, to_section_libref_xxx_tex_libtable);
printf "\\n";
printf "%4d plain functions codebuilt for %s\\n" *plain_fns_codebuilt_for_'libmythryl_xxx_c' (basename path.libmythryl_xxx_c);
printf "%4d custom functions codebuilt for %s\\n" *custom_fns_codebuilt_for_'libmythryl_xxx_c' (basename path.libmythryl_xxx_c);
printf "%4d plain functions codebuilt for %s\\n" *plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' (basename path.mythryl_xxx_library_in_c_subprocess_c);
printf "%4d custom functions codebuilt for %s\\n" *custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' (basename path.mythryl_xxx_library_in_c_subprocess_c);
printf "%4d plain functions codebuilt for %s\\n" *plain_fns_codebuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d plain functions handbuilt for %s\\n" *plain_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d callback functions codebuilt for %s\\n" *callback_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d callback functions handbuilt for %s\\n" *callback_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
narration = pfs::write_patchfiles pfs; # Narration lines generated via sprintf "Successfully patched %4d lines in %s\\n" *patch_lines_written filename; in src/lib/make-library-glue/patchfile.pkg
printf "\\n";
apply {. printf "%s\\n" #msg; } narration;
printf "\\n";
}
where
# First, establish patch ids for our patchpoints:
#
patch_id_'functions'_in_'xxx_client_driver_api' = { patchname => "functions", filename => path.xxx_client_driver_api };
#
patch_id_'body'_in_'xxx_client_driver_for_library_in_main_process_pkg' = { patchname => "body", filename => path.xxx_client_driver_for_library_in_main_process_pkg };
#
patch_id_'body'_in_'xxx_client_driver_for_library_in_c_subprocess_pkg' = { patchname => "body", filename => path.xxx_client_driver_for_library_in_c_subprocess_pkg };
#
patch_id_'functions'_in_'xxx_client_api' = { patchname => "functions", filename => path.xxx_client_api };
patch_id_'types'_in_'xxx_client_api' = { patchname => "types", filename => path.xxx_client_api };
#
patch_id_'functions'_in_'xxx_client_g_pkg' = { patchname => "functions", filename => path.xxx_client_g_pkg };
patch_id_'types'_in_'xxx_client_g_pkg' = { patchname => "types", filename => path.xxx_client_g_pkg };
#
patch_id_'functions'_in_'mythryl_xxx_library_in_c_subprocess_c' = { patchname => "functions", filename => path.mythryl_xxx_library_in_c_subprocess_c };
patch_id_'table'_in_'mythryl_xxx_library_in_c_subprocess_c' = { patchname => "table", filename => path.mythryl_xxx_library_in_c_subprocess_c };
#
patch_id_'functions'_in_'libmythryl_xxx_c' = { patchname => "functions", filename => path.libmythryl_xxx_c };
patch_id_'table'_in_'libmythryl_xxx_c' = { patchname => "table", filename => path.libmythryl_xxx_c };
#
patch_id_'api_calls'_in_'section_libref_xxx_tex' = { patchname => "api_calls", filename => path.section_libref_xxx_tex };
patch_id_'binding_calls'_in_'section_libref_xxx_tex' = { patchname => "binding_calls", filename => path.section_libref_xxx_tex };
# Next, load into memory all the files which we will be patching:
#
pfs = (pfs::load_patchfiles
[
path.xxx_client_driver_api,
path.xxx_client_driver_for_library_in_c_subprocess_pkg,
path.xxx_client_driver_for_library_in_main_process_pkg,
path.xxx_client_g_pkg,
path.xxx_client_api,
path.mythryl_xxx_library_in_c_subprocess_c,
path.libmythryl_xxx_c,
path.section_libref_xxx_tex
]
);
# Clear out the current contents of all patches,
# to make way for the new versions we are about
# to create:
#
pfs = pfs::empty_all_patches pfs;
# Initialize all of our state:
plain_fns_codebuilt_for_'libmythryl_xxx_c' = REF 0;
custom_fns_codebuilt_for_'libmythryl_xxx_c' = REF 0;
plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' = REF 0;
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' = REF 0;
plain_fns_handbuilt_for_'xxx_client_g_pkg' = REF 0;
plain_fns_codebuilt_for_'xxx_client_g_pkg' = REF 0;
# XXX SUCKO FIXME was one of these supposed to be 'codebuilt'?
callback_fns_handbuilt_for_'xxx_client_g_pkg' = REF 0;
callback_fns_handbuilt_for_'xxx_client_g_pkg' = REF 0;
nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c' = REF (sm::empty: sm::Map( Pfs -> Custom_Body_Stuff -> Pfs ));
nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c' = REF (sm::empty: sm::Map( Pfs -> Custom_Body_Stuff2 -> Pfs ));
#
arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c' = REF (sm::empty: sm::Map( (String,Int,String) -> String ));
arg_load_fns_for_'libmythryl_xxx_c' = REF (sm::empty: sm::Map( (String,Int,String) -> String ));
#
figure_function_result_type_fns = REF (sm::empty: sm::Map( String -> String ));
#
do_command_for = REF (sm::empty: sm::Map( String ));
do_command_to_string_fn = REF (sm::empty: sm::Map( String ));
#
client_driver_arg_type = REF (sm::empty: sm::Map( String ));
client_driver_result_type = REF (sm::empty: sm::Map( String ));
#
fun libcall_to_args_fn libcall
=
# 'libcall' is from a line in (say) src/opt/gtk/etc/gtk-construction.plan
# looking something like libcall: gtk_table_set_row_spacing( GTK_TABLE(/*table*/w0), /*row*/i1, /*spacing*/i2)
#
# 'libcall' contains embedded arguments like 'w0', 'i1', 'f2', 'b3', 's4'.
# They are what we are interested in here;
# our job is to return a sorted, duplicate-free list of them.
#
# The implementation here is generic; glue for a particular library
# may override it to support additional argument types (like 'w').
# See for example libcall_to_args_fn() in src/opt/gtk/sh/make-gtk-glue
#
# The argument letter gives us the argument type:
#
# i == int
# f == double (Mythryl "Float")
# b == bool
# s == string
#
# The argument digit gives us the argument order:
#
# 0 == first arg
# 1 == second arg
# ...
#
# Get list of above args, sorting by trailing digit
# and dropping duplicates:
#
{ raw_list = regex::find_all_matches_to_regex ./\\b[bfis][0-9]\\b/ libcall;
#
cooked_list = uniquesort compare_fn raw_list;
cooked_list;
}
where
fun compare_fn (xn, yn) # Compare "s0" and "b1" as "0" and "1":
=
{ xn' = string::extract (xn, 1, NULL);
yn' = string::extract (yn, 1, NULL);
string::compare (xn', yn');
};
end;
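# A hypothetical worked example of the generic extractor above: for
#
#     libcall = "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
#
# the regex ./\\b[bfis][0-9]\\b/ matches only "i1" and "i2" (the "w0"
# widget argument is picked up only by library-specific overrides such as
# libcall_to_args_fn() in src/opt/gtk/sh/make-gtk-glue), so the default
# implementation returns the digit-sorted, duplicate-free list [ "i1", "i2" ].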
ref_libcall_to_args_fn = REF libcall_to_args_fn;
#
fun libcall_to_args libcall
=
*ref_libcall_to_args_fn libcall;
# Convenience functions to append
# lines (strings) to our patchpoints:
#
fun to_xxx_client_driver_api pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_driver_api' };
fun to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'body'_in_'xxx_client_driver_for_library_in_c_subprocess_pkg' };
fun to_xxx_client_driver_for_library_in_main_process_pkg pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'body'_in_'xxx_client_driver_for_library_in_main_process_pkg' };
fun to_xxx_client_g_pkg_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_g_pkg' };
fun to_xxx_client_g_pkg_types pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'types'_in_'xxx_client_g_pkg' };
fun to_xxx_client_api_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_api' };
fun to_xxx_client_api_types pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'types'_in_'xxx_client_api' };
fun to_mythryl_xxx_library_in_c_subprocess_c_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'mythryl_xxx_library_in_c_subprocess_c' };
fun to_mythryl_xxx_library_in_c_subprocess_c_trie pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'table'_in_'mythryl_xxx_library_in_c_subprocess_c' };
fun to_libmythryl_xxx_c_table pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'table'_in_'libmythryl_xxx_c' };
fun to_libmythryl_xxx_c_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'libmythryl_xxx_c' };
fun to_section_libref_xxx_tex_apitable pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'api_calls'_in_'section_libref_xxx_tex' };
fun to_section_libref_xxx_tex_libtable pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'binding_calls'_in_'section_libref_xxx_tex' };
# Save and index resources supplied by client:
#
fun note_plugins plugins
=
apply note_plugin plugins
where
fun note_plugin (LIBCALL_TO_ARGS_FN libcall_to_args_fn)
=>
ref_libcall_to_args_fn := libcall_to_args_fn;
note_plugin (BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (arg_type, arg_load_builder))
=>
arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c' := sm::set (*arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c', arg_type, arg_load_builder);
note_plugin (BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (arg_type, arg_load_builder))
=>
arg_load_fns_for_'libmythryl_xxx_c' := sm::set (*arg_load_fns_for_'libmythryl_xxx_c', arg_type, arg_load_builder);
note_plugin (HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (result_type, function))
=>
nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c' := sm::set (*nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c', result_type, function);
note_plugin (HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (result_type, function))
=>
nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c' := sm::set (*nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c', result_type, function);
note_plugin (FIGURE_FUNCTION_RESULT_TYPE (type, function))
=>
figure_function_result_type_fns := sm::set (*figure_function_result_type_fns, type, function);
note_plugin (DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (type, function))
=>
do_command_for := sm::set (*do_command_for, type, function);
note_plugin (DO_COMMAND_TO_STRING_FN (type, function))
=>
do_command_to_string_fn := sm::set (*do_command_to_string_fn, type, function);
note_plugin (CLIENT_DRIVER_ARG_TYPE (type, type2))
=>
client_driver_arg_type := sm::set (*client_driver_arg_type, type, type2);
note_plugin (CLIENT_DRIVER_RESULT_TYPE (type, type2))
=>
client_driver_result_type := sm::set (*client_driver_result_type, type, type2);
end;
end;
#
fun field_location (field: Field)
=
field.line_1 == field.line_n ?? sprintf "line %d" field.line_1
:: sprintf "lines %d-%d" field.line_1 field.line_n;
#
fun maybe_get_field (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE; THE (string::cat field.lines); };
NULL => NULL;
esac;
#
fun get_field (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE;
string::cat field.lines;
};
NULL => die_x (sprintf "Required field %s missing\\n" field_name);
esac;
#
fun get_field_location (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE; field_location field; };
#
NULL => die_x (sprintf "Required field %s missing\\n" field_name);
esac;
#
fun clear_state (state: State)
=
{ foreach (sm::keyvals_list *state.fields) {.
#
#pair -> (field_name, field);
if (not *field.used)
#
die_x(sprintf "Field %s at %s unsupported.\\n"
field_name
(field_location field)
);
fi;
};
state.fields := (sm::empty: sm::Map( Field ));
};
# Count number of arguments.
# We need this for check_argc():
#
fun count_args libcall
=
list::length (libcall_to_args libcall);
#
fun get_nth_arg_type (n, libcall)
=
{ arg_list = libcall_to_args libcall;
if (n < 0
or n >= list::length arg_list
)
raise exception DIE (sprintf "get_nth_arg_type: No %d-th arg in '%s'!" n libcall);
fi;
arg = list::nth (arg_list, n); # Fetch "w0" or "i0" or such.
string::extract (arg, 0, THE 1); # Convert "w0" to "w" or "i0" to "i" etc.
};
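# Hypothetical illustration of the two helpers above: for an invented libcall
#
#     "foo_set_label( /*id*/i0, /*label*/s1 )"
#
# count_args returns 2, get_nth_arg_type (0, libcall) returns "i",
# and get_nth_arg_type (1, libcall) returns "s".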
#
fun arg_types_are_all_unique libcall
=
{ # Get the list of parameters,
# something like [ "w0", "i1", "i2" ]:
#
args = libcall_to_args libcall;
# Turn parameter list into type list,
# something like [ 'w', 'i', 'i' ]:
#
types = map {. string::get_byte_as_char (#string,0); } args;
# Eliminate duplicate types from above:
#
types = uniquesort char::compare types;
# If 'args' is same length as 'types' then
# all types are unique:
#
list::length args == list::length types;
};
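# Hypothetical illustration: for a libcall whose generic arguments are
# "i1" and "i2" the type list collapses to [ 'i' ], so the function above
# returns FALSE; for one whose arguments are "i0" and "s1" the types
# [ 'i', 's' ] are all distinct and it returns TRUE.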
#
fun xxx_client_driver_api_type (libcall, result_type)
=
{ input_type = REF "(Session";
#
arg_count = count_args libcall;
for (a = 0; a < arg_count; ++a) {
#
t = get_nth_arg_type( a, libcall );
case t
"b" => input_type := *input_type + ", Bool";
"i" => input_type := *input_type + ", Int";
"f" => input_type := *input_type + ", Float";
"s" => input_type := *input_type + ", String";
#
x => case (sm::get (*client_driver_arg_type, x))
#
THE type2 => input_type := *input_type + ", " + type2; # Handle "w" etc
NULL => raise exception DIE (sprintf "Unsupported arg type '%s'" t);
esac;
esac;
};
input_type := *input_type + ")";
output_type
=
case result_type
#
"Bool" => "Bool";
"Float" => "Float";
"Int" => "Int";
"Void" => "Void";
#
x => case (sm::get (*client_driver_result_type, x))
#
THE type2 => type2; # "Widget", "new Widget"
#
NULL => { printf "Supported result types:\\n";
print_strings (sm::keys_list *client_driver_result_type);
raise exception DIE ("xxx_client_driver_api_type: Unsupported result type: " + result_type);
};
esac;
esac;
(*input_type, output_type);
};
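# Hypothetical illustration: for a libcall whose generic arguments are
# "i1" and "i2" and a result_type of "Void", the function above returns
# ("(Session, Int, Int)", "Void"). Other argument letters and result
# types (e.g. widgets) are mapped via the CLIENT_DRIVER_ARG_TYPE and
# CLIENT_DRIVER_RESULT_TYPE plugins supplied by the library-specific glue.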
#
stipulate
#
line_count = REF 2;
herein
#
fun build_fun_declaration_for_'xxx_client_driver_api' (pfs: Pfs) { c_fn_name, libcall, result_type }
=
{
# Add a blank line every three declarations:
#
line_count := *line_count + 1;
#
pfs = if ((*line_count % 3) == 0)
#
to_xxx_client_driver_api pfs "\\n";
else
pfs;
fi;
pfs = to_xxx_client_driver_api pfs (sprintf " %-40s" (c_fn_name + ":"));
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
pfs = to_xxx_client_driver_api pfs (sprintf "%-40s -> %s;\\n" input_type output_type);
pfs;
};
end;
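# Hypothetical illustration: for c_fn_name "set_col_spacing", a libcall
# with generic arguments "i1"/"i2" and result_type "Void", the function
# above appends (roughly, ignoring the column padding) the declaration
#
#     set_col_spacing: (Session, Int, Int) -> Void;
#
# to the "functions" patchpoint of xxx-client-driver.api, inserting a
# blank line after every third declaration.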
#
fun write_do_command (pfs: Pfs) (do_command, fn_name, libcall, result_prefix, result_expression)
=
{
pfs = if (result_expression != "")
to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" { result = " + do_command + " (session");
else to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" " + do_command + " (session");
fi;
pfs = if (result_prefix != "") to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (.', "' + result_prefix + .'"');
else pfs;
fi;
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (.', "' + fn_name + .'"');
prefix = .' + " " +';
arg_count = count_args libcall;
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
#
t = get_nth_arg_type( a, libcall );
pfs = case t
"b" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s bool_to_string %s%d" prefix t a);
"f" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s eight_byte_float::to_string %s%d" prefix t a);
"i" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s int::to_string %s%d" prefix t a);
"s" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s string_to_string %s%d" prefix t a);
#
x => case (sm::get (*do_command_to_string_fn, x))
#
THE to_string => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s %s %s%d" prefix to_string t a);
#
NULL => raise exception DIE ("Unsupported arg type '" + x + "'");
esac;
esac;
};
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs ");\\n";
pfs = if (result_expression != "")
#
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs "\\n";
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" " + result_expression + "\\n");
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs " };\\n\\n\\n";
pfs;
else
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs "\\n\\n";
pfs;
fi;
pfs;
};
# Build a function for .../src/opt/xxx/src/xxx-client-driver-for-library-in-c-subprocess.pkg
# looking like
#
# fun make_status_bar_context_id (session, w0, s1) # Int
# =
# do_int_command (session, "make_status_bar_context_id", "make_status_bar_context_id" + " " + widget_to_string w0 + " " + string_to_string s1);
#
fun build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg' (pfs: Pfs) { c_fn_name, libcall, result_type }
=
{ pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" fun " + c_fn_name + " (session");
#
arg_count = count_args( libcall );
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
#
arg_type = get_nth_arg_type( a, libcall );
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf ", %s%d" arg_type a);
pfs;
};
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (")\\t# " + result_type + "\\n");
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" =\\n");
pfs = if (result_type == "Int") write_do_command pfs ("do_int_command", c_fn_name, libcall, c_fn_name, "");
elif (result_type == "Bool") write_do_command pfs ("do_string_command", c_fn_name, libcall, c_fn_name, "the (int::from_string result) != 0;");
elif (result_type == "Float") write_do_command pfs ("do_string_command", c_fn_name, libcall, c_fn_name, "the (eight_byte_float::from_string result);");
elif (result_type == "Void") write_do_command pfs ("do_void_command", c_fn_name, libcall, "", "");
else
case (sm::get (*do_command_for, result_type))
#
THE do_command => write_do_command pfs (do_command, c_fn_name, libcall, c_fn_name, "");
#
NULL => raise exception DIE ("Unsupported result type: " + result_type);
esac;
fi;
pfs;
};
#
fun n_blanks n
=
n_blanks' (n, "")
where
fun n_blanks' (0, string) => string;
n_blanks' (i, string) => n_blanks' (i - 1, " " + string);
end;
end;
# Build a function for .../src/opt/xxx/src/xxx-client-driver-for-library-in-main-process.pkg
# looking like
#
# NEED TO WORK OUT APPROPRIATE VARIATION FOR THIS
#
# fun make_status_bar_context_id (session, w0, s1) # Int
# =
# do_int_command (session, "make_status_bar_context_id", "make_status_bar_context_id" + " " + widget_to_string w0 + " " + string_to_string s1);
#
fun build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg' (pfs: Pfs) { fn_name, c_fn_name, fn_type, libcall, result_type }
=
{
# Construct xxx-client-driver-for-library-in-main-process.pkg level type for this function.
# The xxx-client-g.pkg level type may involve records or tuples,
# but at this level we always have tuples:
#
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs "\\n";
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
(sprintf " # %-80s # %s type\\n"
( (n_blanks (string::length_in_bytes fn_name))
+ (fn_type =~ ./^\\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
+ fn_type
)
(basename path.xxx_client_api)
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
(sprintf " my %s: %s%s -> %s\\n"
c_fn_name
(input_type =~ ./^\\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
input_type
output_type
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs " =\\n";
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
#
(sprintf " ci::find_c_function { lib_name => \\"%s\\", fun_name => \\"%s\\" };\\n"
path.lib_name
c_fn_name
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs "\\n";
pfs;
};
# Convert .|xxx_foo| to .|xxx\\_foo|
# to protect it from TeX's ire:
#
fun slash_underlines string
=
regex::replace_all ./_/ .|\\_| string;
# Write a trie line into file src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
#
fun build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) name
=
{
to_mythryl_xxx_library_in_c_subprocess_c_trie pfs
#
(sprintf
" set_trie( trie, %-46s%-46s);\\n"
(.'"' + name + .'",')
("do__" + name));
};
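# For example, for name "init" the function above appends (roughly,
# ignoring the %-46s column padding) the line
#
#     set_trie( trie, "init", do__init );
#
# to the "table" patchpoint of mythryl-xxx-library-in-c-subprocess.c.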
# Write a line like
#
# CFUNC("init","init", do__gtk_init, "Void -> Void")
#
# into file src/opt/xxx/c/in-main/libmythryl-xxx.c
#
fun build_table_entry_for_'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type)
=
{ to_libmythryl_xxx_c_table pfs
#
(sprintf "CFUNC(%-44s%-44s%-54s%s%s)\\n"
("\\"" + fn_name + "\\",")
("\\"" + fn_name + "\\",")
("do__" + fn_name + ",")
(fn_type =~ ./^\\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
("\\"" + fn_type + "\\"")
);
};
Doc_Entry
=
{ fn_name: String,
libcall: String,
url: String,
fn_type: String
};
doc_entries = REF ([]: List( Doc_Entry ));
# Note a tex documentation table
# line for file section-libref-xxx.tex.
#
fun note__section_libref_xxx_tex__entry
#
(pfs: Pfs) # We don't actually use this at present, but this regularizes the code, and a future version might use it.
#
{ fields: Fields,
fn_name, # E.g. "make_window"
libcall, # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
url, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
fn_type # E.g. "Session -> Widget"
}
=
{
# Get name of the C Gtk function/var
# wrapped by this Mythryl function:
#
libcall
=
case (maybe_get_field(fields,"doc-fn"))
#
THE field => field; # doc-fn is a manual override used when libcall is unusable for documentation.
NULL =>
{ # libcall is something like gtk_widget_set_size_request( GTK_WIDGET(/*widget*/w0), /*wide*/i1, /*high*/i2)
# but all we want here is the
# initial function name:
#
libcall = case (regex::find_first_match_to_regex ./[A-Za-z0-9_']+/ libcall)
THE x => x;
NULL => "";
esac;
# If libcall does not begin with [Gg], it
# is probably not useful in this context:
#
libcall = (libcall =~ ./^[Gg]/) ?? libcall
:: "";
libcall;
};
esac;
fn_name = slash_underlines fn_name;
libcall = slash_underlines libcall;
url = slash_underlines url; # Probably not needed.
fn_type = slash_underlines fn_type;
doc_entries := { fn_name, libcall, url, fn_type } ! *doc_entries;
pfs;
};
# Write tex documentation table into file section-libref-xxx.tex:
#
fun write_section_libref_xxx_tex_table
#
(pfs: Pfs)
#
( field1: Doc_Entry -> String,
field2: Doc_Entry -> String,
to_section: Pfs -> String -> Pfs
)
=
{
# Define the sort order for the table:
#
fun compare_fn
( a: Doc_Entry,
b: Doc_Entry
)
=
{ a1 = field1 a; a2 = field2 a;
b1 = field1 b; b2 = field2 b;
# If primary keys are equal,
# sort on the secondary keys:
#
if (a1 != b1) a1 > b1;
else a2 > b2;
fi;
};
entries = sort compare_fn *doc_entries;
pfs = fold_forward
(\\\\ (entry, pfs)
=
{ entry -> { fn_name, libcall, url, fn_type };
#
entry1 = field1 entry;
entry2 = field2 entry;
pfs = if (entry1 != "")
to_section pfs
(sprintf "%s & %s & %s & %s \\\\\\\\ \\\\hline\\n"
entry1
entry2
(url == "" ?? ""
:: (.|\\ahref{\\url{| + url + "}}{doc}"))
fn_type
);
else
pfs;
fi;
pfs;
}
)
pfs # Initial value of result.
entries # Iterate over this list.
;
pfs;
};
#
fun build_fun_header_for__'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) (fn_name, args)
=
{ pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "\\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "static void\\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs ("do__" + fn_name + "( int argc, unsigned char** argv )\\n");
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "{\\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " check_argc( \\"do__%s\\", %d, argc );\\n" fn_name args);
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "\\n";
pfs;
};
# Build C code
# to fetch all the arguments
# out of argc/argv:
#
fun build_fun_arg_loads_for_'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) (fn_name, args, libcall)
=
{
pfs = for (a = 0, pfs = pfs; a < args; ++a; pfs) {
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
pfs = if (arg_type == "b") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " int b%d = bool_arg( argc, argv, %d );\\n" a a);
elif (arg_type == "f") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " double f%d = double_arg( argc, argv, %d );\\n" a a);
elif (arg_type == "i") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " int i%d = int_arg( argc, argv, %d );\\n" a a);
elif (arg_type == "s") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " char* s%d = string_arg( argc, argv, %d );\\n" a a);
else
case (sm::get (*arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c', arg_type)) # Custom library-specific arg type handling for "w" etc.
#
THE build_arg_load_fn => to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (build_arg_load_fn (arg_type, a, libcall));
#
NULL => raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #" + int::to_string a + " from libcall '" + libcall + "\\n");
esac;
fi;
pfs;
};
pfs;
};
# Synthesize a function for mythryl-xxx-library-in-c-subprocess.c like
#
# static void
# do__set_adjustment_value( int argc, unsigned char** argv )
# {
#     check_argc( "do__set_adjustment_value", 2, argc );
#
# { GtkAdjustment* w0 = (GtkAdjustment*) widget_arg( argc, argv, 0 );
# double f1 = double_arg( argc, argv, 1 );
#
# gtk_adjustment_set_value( GTK_ADJUSTMENT(w0), /*value*/f1);
# }
# }
#
fun build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result # E.g., "Float"
)
=
{ to = to_mythryl_xxx_library_in_c_subprocess_c_funs;
#
arg_count = count_args libcall;
pfs = build_fun_header_for__'mythryl_xxx_library_in_c_subprocess_c' pfs (fn_name, arg_count);
pfs = build_fun_arg_loads_for_'mythryl_xxx_library_in_c_subprocess_c' pfs (fn_name, arg_count, libcall);
libcall_more
=
case (maybe_get_field (fields, "libcal+")) THE field => field;
NULL => "";
esac;
pfs = case result
#
"Void"
=>
{ # Now we just print
# the supplied gtk call
# and wrap up:
#
pfs = to pfs "\\n";
pfs = to pfs (" " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "}\\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\\n");
pfs;
};
"Bool"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" int result = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs (" printf( \\"" + fn_name + "%d\\\\n\\", result); fflush( stdout );\\n");
pfs = to pfs (" fprintf(log_fd, \\"SENT: " + fn_name + "%d\\\\n\\", result); fflush( log_fd );\\n");
pfs = to pfs "}\\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\\n");
pfs;
};
"Float"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" double result = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs (" printf( \\"" + fn_name + "%f\\\\n\\", result); fflush( stdout );\\n");
pfs = to pfs (" fprintf(log_fd, \\"SENT: " + fn_name + "%f\\\\n\\", result); fflush( log_fd );\\n");
pfs = to pfs "}\\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\\n");
pfs;
};
"Int"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" int result = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs (" printf( \\"" + fn_name + "%d\\\\n\\", result); fflush( stdout );\\n");
pfs = to pfs (" fprintf(log_fd, \\"SENT: " + fn_name + "%d\\\\n\\", result); fflush( log_fd );\\n");
pfs = to pfs "}\\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\\n");
pfs;
};
          _ => case (sm::get (*nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c', result)) # Custom library-specific result type handling for "Widget", "new Widget" etc.
#
THE build_fn => build_fn pfs { fn_name, libcall, libcall_more, to_mythryl_xxx_library_in_c_subprocess_c_funs, path };
NULL => raise exception DIE (sprintf "Unsupported result type '%s'" result);
esac;
esac;
plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c'
:=
*plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c'
+ 1;
pfs;
};
#
fun build_fun_header_for__'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type, args, libcall, result_type)
=
{
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
# C comments don't nest, so we must change
# any C comments in input_type or output_type:
#
input_type = regex::replace_all .|/\\*| "(*" input_type;
input_type = regex::replace_all .|\\*/| "*)" input_type;
#
output_type = regex::replace_all .|/\\*| "(*" output_type;
output_type = regex::replace_all .|\\*/| "*)" output_type;
pfs = to_libmythryl_xxx_c_funs pfs ("/* do__" + fn_name + "\\n");
pfs = to_libmythryl_xxx_c_funs pfs " *\\n";
pfs = to_libmythryl_xxx_c_funs pfs (" * " + (basename path.xxx_client_api) + " type: " + ( fn_type =~ ./^\\(/ ?? "" :: " ") + fn_type + "\\n");
pfs = to_libmythryl_xxx_c_funs pfs (" * " + (basename path.xxx_client_driver_api) + " type: " + (input_type =~ ./^\\(/ ?? "" :: " ") + input_type + " -> " + output_type + "\\n");
pfs = to_libmythryl_xxx_c_funs pfs " */\\n";
pfs = to_libmythryl_xxx_c_funs pfs ("static Val do__" + fn_name + " (Task* task, Val arg)\\n");
pfs = to_libmythryl_xxx_c_funs pfs "{\\n";
pfs = to_libmythryl_xxx_c_funs pfs "\\n";
pfs;
};
#
fun build_fun_trailer_for__'libmythryl_xxx_c' pfs
=
{
pfs = to_libmythryl_xxx_c_funs pfs "}\\n";
pfs = to_libmythryl_xxx_c_funs pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: write_libmythryl_xxx_c_plain_fun per " + path.construction_plan + ". */\\n");
pfs = to_libmythryl_xxx_c_funs pfs "\\n";
pfs = to_libmythryl_xxx_c_funs pfs "\\n";
pfs;
};
# Build C code
# to fetch all the arguments
# out of argc/argv:
#
fun build_fun_arg_loads_for__'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type, args, libcall)
=
{
case args
0 => pfs;
# Having just one argument used to be a special case
# because then we passed the argument directly rather
# than packed within a tuple. But the first argument
# to a gtk-client-driver-for-library-in-main-process.pkg function is always a Session,
# and it is more efficient to pass on the tuple from
# that layer to the mythryl-gtk-library-in-main-process.c layer rather than
# unpacking and repacking just to get rid of the Session
# argument, consequently if we have any arguments of
# interest (i.e., non-Session arguments) at this point
# we will always have a tuple, eliminating the special
# case. I've left this code here, commented out, just
# in case this situation changes and it is needed again:
#
#
# 1 => { arg_type = get_nth_arg_type( 0, libcall );
#
# if (arg_type == "b") to_libmythryl_xxx_c_funs " int b0 = TAGGED_INT_TO_C_INT(arg) == HEAP_TRUE;\\n";
# elif (arg_type == "f") to_libmythryl_xxx_c_funs " double f0 = *(PTR_CAST(double*, arg));\\n";
# elif (arg_type == "i") to_libmythryl_xxx_c_funs " int i0 = TAGGED_INT_TO_C_INT(arg);\\n";
# elif (arg_type == "s") to_libmythryl_xxx_c_funs " char* s0 = HEAP_STRING_AS_C_STRING(arg);\\n";
# elif (arg_type == "w")
#
# # Usually we fetch a widget as just
# #
# # GtkWidget* widget = widget[ TAGGED_INT_TO_C_INT(arg) ];
# #
# # or such, but in a few cases we must cast to
# # another type:
# # o If we see GTK_ADJUSTMENT(w0) we must do GtkAdjustment* w0 = (GtkAdjustment*) widget[ TAGGED_INT_TO_C_INT(arg) ];
# # o If we see GTK_SCALE(w0) we must do GtkScale* w0 = (GtkScale*) widget[ TAGGED_INT_TO_C_INT(arg) ];
# # o If we see GTK_RADIO_BUTTON(w0) we must do GtkRadioButton* w0 = (GtkRadioButton*) widget[ TAGGED_INT_TO_C_INT(arg) ];
#
# widget_type = REF "GtkWidget";
#
# if (libcall =~ ./GTK_ADJUSTMENT\\(\\s*w0\\s*\\)/) widget_type := "GtkAdjustment";
# elif (libcall =~ ./GTK_SCALE\\(\\s*w0\\s*\\)/) widget_type := "GtkScale";
# elif (libcall =~ ./GTK_RADIO_BUTTON\\(\\s*w0\\s*\\)/) widget_type := "GtkRadioButton";
# fi;
#
# to_libmythryl_xxx_c_funs (sprintf " %-14s w0 = %-16s widget[ TAGGED_INT_TO_C_INT(arg) ];\\n"
# (*widget_type + "*")
# ("(" + *widget_type + "*)")
# );
#
# else
# raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #0 from libcall '" + libcall + "\\n");
# fi;
# };
_ => { if (args < 0) die_x "build_fun_arg_loads_for__'libmythryl_xxx_c': Negative 'args' value not supported."; fi;
#
pfs = for (a = 0, pfs = pfs; a < args; ++a; pfs) {
#
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
pfs = if (arg_type == "b") to_libmythryl_xxx_c_funs pfs (sprintf " int b%d = GET_TUPLE_SLOT_AS_VAL( arg, %d) == HEAP_TRUE;\\n" a (a+1)); # +1 because 1st arg is always Session.
elif (arg_type == "f") to_libmythryl_xxx_c_funs pfs (sprintf " double f%d = *(PTR_CAST(double*, GET_TUPLE_SLOT_AS_VAL( arg, %d)));\\n" a (a+1));
elif (arg_type == "i") to_libmythryl_xxx_c_funs pfs (sprintf " int i%d = GET_TUPLE_SLOT_AS_INT( arg, %d);\\n" a (a+1));
elif (arg_type == "s") to_libmythryl_xxx_c_funs pfs (sprintf " char* s%d = HEAP_STRING_AS_C_STRING (GET_TUPLE_SLOT_AS_VAL( arg, %d));\\n" a (a+1));
else
case (sm::get (*arg_load_fns_for_'libmythryl_xxx_c', arg_type)) # Custom library-specific arg type handling for "w" etc.
#
THE build_arg_load_fn => to_libmythryl_xxx_c_funs pfs (build_arg_load_fn (arg_type, a, libcall));
#
NULL => raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #" + int::to_string a + " from libcall '" + libcall + "\\n");
esac;
fi;
pfs;
};
pfs;
};
esac;
};
#
fun build_fun_body_for__'libmythryl_xxx_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result_type # E.g., "Float"
)
=
{
to = to_libmythryl_xxx_c_funs;
libcall_more
=
case (maybe_get_field (fields, "libcal+")) THE field => field;
NULL => "";
esac;
pfs = case result_type
#
"Void"
=>
{ # Now we just print
# the supplied gtk call
# and wrap up:
#
pfs = to pfs "\\n";
pfs = to pfs (" " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs " return HEAP_VOID;\\n";
#
pfs;
};
"Bool"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" int result = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs " return result ? HEAP_TRUE : HEAP_FALSE;\\n";
#
pfs;
};
"Float"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" double d = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs " return make_float64(task, d );\\n";
#
pfs;
};
"Int"
=>
{ pfs = to pfs "\\n";
pfs = to pfs (" int result = " + libcall + ";\\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\\n";
pfs = to pfs " return TAGGED_INT_FROM_C_INT(result);\\n";
#
pfs;
};
          _ => case (sm::get (*nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c', result_type)) # Custom library-specific result type handling for "Widget", "new Widget" etc.
#
THE build_fn => build_fn pfs { fn_name, libcall, libcall_more, to_libmythryl_xxx_c_funs, path };
#
NULL => raise exception DIE (sprintf "Unsupported result type '%s'" result_type);
esac;
esac;
pfs;
};
# Synthesize a function for libmythryl-xxx.c like
#
# /* do__gtk_init : Void -> Void
# *
# *
# */
#
# static Val do__gtk_init (Task* task, Val arg)
# {
# int y = INT1_LIB7toC( GET_TUPLE_SLOT_AS_INT(arg, 0) );
# char *symname = HEAP_STRING_AS_C_STRING( GET_TUPLE_SLOT_AS_VAL(arg, 1) );
# int lazy = GET_TUPLE_SLOT_AS_VAL(arg, 2) == HEAP_TRUE;
#
# int result = move( y, x );
#
# if (result == ERR) return RAISE_ERROR__MAY_HEAPCLEAN(task, "move", NULL);
#
# return HEAP_VOID;
# }
#
#
#
# Cheatsheet:
#
# Accepting a lone float arg:
# double d = *(PTR_CAST(double*, arg)); # Example in src/c/lib/math/cos64.c
#
# Accepting a lone int arg:
# int socket = TAGGED_INT_TO_C_INT(arg); # Example in src/c/lib/socket/accept.c
#
# Accepting a lone string arg: # Example in src/c/lib/posix-file-system/readlink.c
# char* path = HEAP_STRING_AS_C_STRING(arg);
#
# Accepting a lone Null_Or( Tuple ) arg: # Example in src/c/lib/socket/get-protocol-by-name.c
#
# Accepting a Bool from a tuple: # Example in src/c/lib/dynamic-loading/dlopen.c
# int lazy = GET_TUPLE_SLOT_AS_VAL (arg, 1) == HEAP_TRUE;
#
# Accepting an Int from a tuple: # Example in src/c/lib/posix-file-system/fchown.c
# int fd = GET_TUPLE_SLOT_AS_INT (arg, 0);
#
# Accepting a String from a tuple: # Example in src/c/lib/dynamic-loading/dlsym.c
# char *symname = HEAP_STRING_AS_C_STRING (GET_TUPLE_SLOT_AS_VAL (arg, 1));
#
# Accepting a Float from a tuple: # THIS IS MY OWN GUESS!
# double d = *(PTR_CAST(double*, GET_TUPLE_SLOT_AS_VAL(arg,%d)));
#
# Accepting a Null_Or(String) from a tuple: # Example in src/c/lib/dynamic-loading/dlopen.c
#
#
# Returning
#
# Void: return HEAP_VOID; # Defined in src/c/h/runtime-values.h
# TRUE: return HEAP_TRUE; # Defined in src/c/h/runtime-values.h
# FALSE: return HEAP_FALSE; # Defined in src/c/h/runtime-values.h
# Int: return TAGGED_INT_FROM_C_INT(size); # Defined in src/c/h/runtime-values.h
# NULL: return OPTION_NULL; # Defined in src/c/h/make-strings-and-vectors-etc.h Example in src/c/machine-dependent/interprocess-signals.c
# THE foo: return OPTION_THE(task, foo); # Defined in src/c/h/make-strings-and-vectors-etc.h
# # Example in src/c/machine-dependent/interprocess-signals.c
#
# Returning a float:
# return make_float64(task, cos(d) ); # Defined in src/c/h/make-strings-and-vectors-etc.h
#
# Returning a string:
# Val result = allocate_nonempty_ascii_string__may_heapclean(task, size, NULL);
# strncpy (HEAP_STRING_AS_C_STRING(result), buf, size);
# return result;
#
# Returning a tuple: # Example from src/c/lib/date/gmtime.c
#
# set_slot_in_nascent_heapchunk(task, 0, MAKE_TAGWORD(PAIRS_AND_RECORDS_BTAG, 9));
# set_slot_in_nascent_heapchunk(task, 1, TAGGED_INT_FROM_C_INT(tm->tm_sec));
# ...
# set_slot_in_nascent_heapchunk(task, 9, TAGGED_INT_FROM_C_INT(tm->tm_isdst));
#
# return commit_nascent_heapchunk(task, 9);
#
#
# Return functions which check ERR
# and optionally raise an exception: src/c/lib/raise-error.h
#
# CHK_RETURN_VAL(task, status, val) Check status for an error (< 0); if okay,
# then return val. Otherwise raise
# SYSTEM_ERROR with the appropriate system
# error message.
#
# CHK_RETURN(task, status) Check status for an error (< 0); if okay,
# then return it as the result (after
# converting to an Lib7 int).
#
# CHK_RETURN_UNIT(task, status) Check status for an error (< 0); if okay,
# then return Void.
#
# GET_TUPLE_SLOT_AS_VAL &Co are from: src/c/h/runtime-values.h
# allocate_nonempty_ascii_string__may_heapclean is from: src/c/h/make-strings-and-vectors-etc.h
# CHK_RETURN_VAL &Co are from: src/c/lib/raise-error.h
#
fun build_plain_fun_for_'libmythryl_xxx_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result_type # E.g., "Float"
)
=
{ arg_count = count_args( libcall );
#
pfs = build_fun_header_for__'libmythryl_xxx_c' pfs ( fn_name, fn_type, arg_count, libcall, result_type);
pfs = build_fun_arg_loads_for__'libmythryl_xxx_c' pfs ( fn_name, fn_type, arg_count, libcall);
pfs = build_fun_body_for__'libmythryl_xxx_c' pfs (x, fields, fn_name, fn_type, libcall, result_type);
pfs = build_fun_trailer_for__'libmythryl_xxx_c' pfs;
plain_fns_codebuilt_for_'libmythryl_xxx_c'
:=
*plain_fns_codebuilt_for_'libmythryl_xxx_c'
+ 1;
pfs;
};
# Given a libcall like "gtk_foo( /*bar_to_int bar*/i0, /*zot*/i1 )"
# and a parameter name like "i0" or "i1"
# return nickname like "bar_to_int bar" or "zot"
# if available, else "i0" or "i1":
#
fun arg_name (arg, libcall)
=
{ regex = .|/\\*([A-Za-z0-9_' ]+)\\*/| + arg; # Something like: /*([A-Za-z0-9_' ]+)*/f0
#
case (regex::find_first_match_to_ith_group 1 regex libcall)
THE x => x;
NULL => arg;
esac;
};
# Given a libcall like "gtk_foo( /*bar_to_int bar*/i0, /*zot*/i1 )"
# and a parameter name like "i0" or "i1"
# return nickname like "bar" or "zot"
# if available, else "i0" or "i1":
#
fun param_name (arg, libcall)
=
{ regex = .|/\\*([A-Za-z0-9_' ]+)\\*/| + arg; # Something like: /*([A-Za-z0-9_' ]+)*/f0
#
case (regex::find_first_match_to_ith_group 1 regex libcall)
#
THE name => # If 'name' contains blanks, we want
# only the part after the last blank:
#
case (regex::find_first_match_to_ith_group 1 .|^[:A-Za-z0-9_' ]+ ([A-Za-z0-9_']+)$| name)
THE x => x;
NULL => name;
esac;
NULL => arg;
esac;
};
# Synthesize a function for gtk-client-g.pkg like
#
# #
# fun make_vertical_scale_with_range (session: Session, min, max, step)
# =
# drv::make_vertical_scale_with_range (session.subsession, min, max, step);
#
fun build_plain_fun_for_'xxx_client_g_pkg' (pfs: Pfs) (x: Builder_Stuff, fields: Fields, fn_name, libcall)
=
case (maybe_get_field (fields, "cg-funs"))
#
THE field
=>
{ pfs = to_xxx_client_g_pkg_funs pfs " #\\n";
pfs = to_xxx_client_g_pkg_funs pfs field;
pfs = to_xxx_client_g_pkg_funs pfs " \\n";
pfs = to_xxx_client_g_pkg_funs pfs " # Above function handbuilt via src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'xxx_client_g_pkg'.\\n";
pfs = to_xxx_client_g_pkg_funs pfs "\\n";
plain_fns_handbuilt_for_'xxx_client_g_pkg'
:=
*plain_fns_handbuilt_for_'xxx_client_g_pkg' + 1;
pfs;
};
NULL =>
{
arg_count = count_args( libcall );
#
fun make_args pfs get_name # get_name will be arg_name or param_name.
=
{
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
arg = sprintf "%s%d" arg_type a;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf ", %s" (get_name (arg, libcall)));
pfs;
};
pfs;
};
# Select between foo (session.subsession, bar, zot);
# foo { session.subsession, bar, zot };
#
my (lparen, rparen)
=
# It is a poor idea to have xxx-client-g.pkg functions
# with multiple arguments of the same type use
# argument tuples, because it is too easy to
# mis-order such arguments, and the compiler
# type checking won't flag it -- in such cases
# it is better to use argument records:
#
arg_types_are_all_unique libcall
?? ( "(" , ")" )
:: ( "{ ", " }" );
pfs = to_xxx_client_g_pkg_funs pfs "\\n";
pfs = to_xxx_client_g_pkg_funs pfs " #\\n";
pfs = to_xxx_client_g_pkg_funs pfs " fun ";
pfs = to_xxx_client_g_pkg_funs pfs fn_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf " %ssession: Session" lparen);
pfs = make_args pfs param_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf "%s\\n" rparen);
# Select between drv::foo session.subsession;
# drv::foo (session.subsession, bar, zot);
#
my (lparen, rparen)
=
arg_count == 0
?? (" ", "" )
:: ("(", ")");
fn_name = regex::replace_all ./'/ "2" fn_name; # Primes don't work in C!
pfs = to_xxx_client_g_pkg_funs pfs " =\\n";
pfs = to_xxx_client_g_pkg_funs pfs (sprintf " drv::%s %ssession.subsession" fn_name lparen);
pfs = make_args pfs arg_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf "%s;\\n" rparen);
pfs = to_xxx_client_g_pkg_funs pfs " \\n";
pfs = to_xxx_client_g_pkg_funs pfs (" # Above function autobuilt by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'xxx_client_g_pkg' per " + path.construction_plan + ".\\n");
pfs = to_xxx_client_g_pkg_funs pfs "\\n";
plain_fns_codebuilt_for_'xxx_client_g_pkg'
:=
*plain_fns_codebuilt_for_'xxx_client_g_pkg'
+ 1;
pfs;
};
esac;
# Synthesize a xxx-client.api line like
#
# make_window: Session -> Widget;
#
stipulate
line_count = REF 2;
herein
#
fun build_fun_declaration_for_'xxx_client_api' (pfs: Pfs) { fn_name, fn_type, api_doc }
=
{
# Add a blank line every three declarations:
#
line_count := *line_count + 1;
pfs = if ((*line_count % 3) == 0)
#
to_xxx_client_api_funs pfs "\\n";
else
pfs;
fi;
# The 'if' here is just to exdent by one char
# types starting with a paren, so that we get
#
# foo: Session -> Void;
# bar: (Session, Widget) -> Void;
#
# rather than the slightly rattier looking
#
# foo: Session -> Void;
# bar: (Session, Widget) -> Void;
#
pfs = if (fn_type =~ ./^\\(/) to_xxx_client_api_funs pfs (sprintf " %-40s%s;\\n" (fn_name + ":") fn_type);
else to_xxx_client_api_funs pfs (sprintf " %-41s%s;\\n" (fn_name + ":") fn_type);
fi;
pfs = if (api_doc != "") to_xxx_client_api_funs pfs api_doc;
else pfs;
fi;
pfs;
};
end;
#
fun figure_function_result_type (x: Builder_Stuff, fields: Fields, fn_name, fn_type)
=
# result_type can be "Int", "String", "Bool", "Float" or "Void".
#
# It can also be "Widget" or "new Widget", the difference being
# that in the former case the mythryl-xxx-library-in-c-subprocess.c logic can merely
# fetch it out of its array widget[], whereas in the latter a
# new entry is being created in widget[].
#
# We can usually deduce the difference: If fn_name starts with
# "make_" then we have the "new Widget" case, otherwise we have
# the "Widget" case:
#
case (maybe_get_field (fields, "result"))
#
THE string => string;
#
NULL =>
# Pick off terminal " -> Void"
# or whatever from fn_type
# and switch on it:
#
case (regex::find_first_match_to_ith_group 1 ./->\\s*([A-Za-z_']+)\\s*$/ fn_type)
#
THE "Bool" => "Bool";
THE "Float" => "Float";
THE "Int" => "Int";
THE "String" => "String";
THE "Void" => "Void";
THE result_type => case (sm::get (*figure_function_result_type_fns, result_type)) # Cf figure_function_result_type in src/opt/gtk/sh/make-gtk-glue
#
THE function => function fn_name; # E.g., "Widget" -> ("Widget" or "new Widget")
#
                        NULL => { printf "Supported result types:\\n";
print_strings (sm::keys_list *figure_function_result_type_fns);
die_x(sprintf "Unsupported result fn-type %s in type %s at %s..\\n"
result_type
fn_type
(get_field_location (fields, "fn-type"))
);
};
esac;
            NULL => die_x(sprintf "Unsupported result fn-type %s at %s..\\n"
fn_type
(get_field_location (fields, "fn-type"))
);
esac;
esac;
#
fun build_plain_function { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
fn_name = get_field (fields, "fn-name"); # E.g., "make_window".
fn_type = get_field (fields, "fn-type"); # E.g., "Session -> Widget".
libcall = get_field (fields, "libcall"); # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
url = case (maybe_get_field(fields,"url")) THE field => field; NULL => ""; esac;
api_doc = case (maybe_get_field(fields,"api-doc")) THE field => field; NULL => ""; esac;
c_fn_name = regex::replace_all ./'/ "2" fn_name; # C fn names cannot contain apostrophes.
result_type = figure_function_result_type (x, fields, fn_name, fn_type);
pfs = build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c' pfs ( c_fn_name );
pfs = build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' pfs (x, fields, c_fn_name, fn_type, libcall, result_type);
pfs = build_plain_fun_for_'libmythryl_xxx_c' pfs (x, fields, c_fn_name, fn_type, libcall, result_type);
pfs = build_table_entry_for_'libmythryl_xxx_c' pfs (c_fn_name, fn_type);
pfs = note__section_libref_xxx_tex__entry pfs { fields, fn_name, libcall, url, fn_type };
pfs = build_fun_declaration_for_'xxx_client_driver_api' pfs { c_fn_name, libcall, result_type };
pfs = build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg' pfs { c_fn_name, libcall, result_type };
pfs = build_fun_declaration_for_'xxx_client_api' pfs { fn_name, fn_type, api_doc };
pfs = build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg' pfs { fn_name, c_fn_name, fn_type, libcall, result_type };
pfs = build_plain_fun_for_'xxx_client_g_pkg' pfs (x, fields, fn_name, libcall);
pfs;
};
#
fun build_function_doc { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
url = case (maybe_get_field(fields,"url"))
#
THE field => field;
NULL => "";
esac;
fn_name = get_field(fields, "fn-name"); # "make_window" or such.
fn_type = get_field(fields, "fn-type"); # "Session -> Widget" or such.
pfs = note__section_libref_xxx_tex__entry pfs { fields, fn_name, libcall => "", url, fn_type };
pfs;
};
#
fun build_mythryl_type { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
type = get_field(fields, "cg-typs");
#
pfs = to_xxx_client_api_types pfs type;
pfs = to_xxx_client_g_pkg_types pfs type;
pfs;
};
#
fun build_mythryl_code { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
code = get_field(fields, "cg-funs");
#
pfs = to_xxx_client_g_pkg_funs pfs code;
pfs;
};
fn_doc__definition
=
{ name => "fn_doc",
do => build_function_doc,
fields => [ { fieldname => "fn-name", traits => [] },
{ fieldname => "fn-type", traits => [] },
{ fieldname => "doc-fn", traits => [ plf::OPTIONAL ] },
{ fieldname => "url", traits => [ plf::OPTIONAL ] }
]
};
plain_fn__definition
=
{ name => "plain_fn",
do => build_plain_function,
fields => [ { fieldname => "fn-name", traits => [] },
{ fieldname => "fn-type", traits => [] },
{ fieldname => "libcall", traits => [] },
{ fieldname => "libcal+", traits => [ plf::OPTIONAL, plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] },
{ fieldname => "lowtype", traits => [ plf::OPTIONAL ] },
{ fieldname => "result", traits => [ plf::OPTIONAL ] },
{ fieldname => "api-doc", traits => [ plf::OPTIONAL ] },
{ fieldname => "doc-fn", traits => [ plf::OPTIONAL ] },
{ fieldname => "url", traits => [ plf::OPTIONAL ] },
{ fieldname => "cg-funs", traits => [ plf::OPTIONAL, plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
mythryl_code__definition
=
{ name => "mythryl_code",
do => build_mythryl_code,
fields => [ { fieldname => "cg-funs", traits => [ plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
mythryl_type__definition
=
{ name => "mythryl_type",
do => build_mythryl_type,
fields => [ { fieldname => "cg-typs", traits => [ plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
builder_stuff = { path,
#
maybe_get_field,
get_field,
get_field_location,
#
build_table_entry_for_'libmythryl_xxx_c',
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c',
#
build_fun_declaration_for_'xxx_client_api',
build_fun_declaration_for_'xxx_client_driver_api',
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg',
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg',
to_xxx_client_driver_api,
to_xxx_client_driver_for_library_in_c_subprocess_pkg,
to_xxx_client_driver_for_library_in_main_process_pkg,
to_xxx_client_g_pkg_funs,
to_xxx_client_g_pkg_types,
to_xxx_client_api_funs,
to_xxx_client_api_types,
to_mythryl_xxx_library_in_c_subprocess_c_funs,
to_mythryl_xxx_library_in_c_subprocess_c_trie,
to_libmythryl_xxx_c_table,
to_libmythryl_xxx_c_funs,
to_section_libref_xxx_tex_apitable,
to_section_libref_xxx_tex_libtable,
custom_fns_codebuilt_for_'libmythryl_xxx_c',
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c',
callback_fns_handbuilt_for_'xxx_client_g_pkg',
note__section_libref_xxx_tex__entry
};
paragraph_defs
=
plf::digest_paragraph_definitions sm::empty "make-library-glue.pkg"
#
( paragraph_definitions
@
[
fn_doc__definition,
plain_fn__definition,
mythryl_code__definition,
mythryl_type__definition
]
);
end;
};
end;
###################################################################################
# Note[1]: Format of xxx-construction.plan files
#
# These notes are outdated; should look at
# src/lib/make-library-glue/planfile.api
# and
# *__definition
# above. Should write more docs, too. :-)
#
#
# An xxx-construction.plan file is broken
# into logical paragraphs separated by blank lines.
#
# In general each paragraph describes one end-user-callable
# function in (say) the Gtk API.
#
# Each paragraph consists of one or more lines;
# each line begins with a colon-delimited type
# field determining its semantics.
#
# Supported line types are:
#
# do: Must appear in every paragraph.
# Determines which make-library-glue function processes the paragraph:
# plain_fn build_plain_function # The usual case.
# callback_fn build_callback_function # Special-purpose variant.
# fn_doc build_function_doc # Document fn without code generation, e.g. for Mythryl-only fns.
# mythryl_code build_mythryl_code # Special hack to deposit verbatim Mythryl code.
# mythryl_type build_mythryl_type # Special hack to deposit verbatim Mythryl declarations.
#
# The 'do' line determines which other
# lines may appear in the paragraph, per the
# following table. ("X" == mandatory, "O" == optional):
#
# callback_fn fn_doc plain_fn mythryl_code mythryl_type
# ----------- ------ -------- ------------ -----------
#
# fn-name: X X X
# fn-type: X X X
# lowtype: X X
# libcall: X
# libcal+: O
# result: O
# api-doc: O
# doc-fn: O O O
# url: O O O
# cg-funs: O O X
# cg-typs: X
#
#
# fn-name: Name of the end-user-callable Mythryl function, e.g. halt_and_catch_fire
# fn-type: Mythryl type for the function, e.g. Int -> Void
# url: URL documenting the underlying C Gtk function, e.g. http://library.gnome.org/devel/gtk/stable/gtk-General.html#gtk-init
# cg-funs: Literal Mythryl code to be inserted near bottom of xxx-client-g.pkg
# cg-typs: Literal Mythryl code to be inserted near top of xxx-client-g.pkg and also in xxx-client.api
# lowtype: Gtk cast macro for widget: Usually G_OBJECT, occasionally GTK_MENU_ITEM or such.
#
# doc-fn: Usually name of fn for documentation purposes is obtained from 'libcall' line,
# but this line may be used to specify it explicitly.
#
# api-doc: Comment line(s) to be appended to fn declaration in xxx-client.api.
#
# libcall: C-level library call to make e.g. gtk_layout_put( GTK_LAYOUT(w0), GTK_WIDGET(w1), i2, i3)
#
# libcall contains embedded arguments like w0, i1, f2, b3, s4.
#
# The argument letter gives us the argument type:
#
# w == widget
# i == int
# f == double (Mythryl "Float")
# b == bool
# s == string
#
# The argument digit gives us the argument order:
#
# 0 == first arg
# 1 == second arg
# ...
#
# libcal+: More code to be inserted immediately after the 'libcall' code
# in libmythryl-xxx.c and mythryl-xxx-library-in-c-subprocess.c.
#
# result: C-level result type for call. In practice we always default
# this and make-library-glue deduces it from the Mythryl type.
# #
# Can be one of "Int", "String", "Bool", "Float" or "Void".
# #
# Can also be "Widget" or "new Widget", the difference being
# that in the former case the mythryl-gtk-server.c logic can merely
# fetch it out of its array widget[], whereas in the latter a
# new entry is being created in widget[].
# #
# We can usually deduce the difference: If fn_name starts with
# "make_" then we have the "new Widget" case, otherwise we have
# the "Widget" case:
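#
# For orientation, a hypothetical plain_fn paragraph written against the field
# list above might look roughly like this. This is only an illustrative sketch
# (the authoritative syntax is whatever src/lib/make-library-glue/planfile.api
# and the real xxx-construction.plan files use, and the fn-name is made up):
#
#     do:      plain_fn
#     fn-name: set_window_title
#     fn-type: (Session, Widget, String) -> Void
#     libcall: gtk_window_set_title( GTK_WINDOW(/*window*/w0), /*title*/s1 )
#
# Per the argument-letter convention above, w0 is the first argument (a widget)
# and s1 is the second (a string); the /*...*/ nicknames are what the
# arg_name/param_name functions recover when generating the Mythryl wrappers.
#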
##########################################################################
# The following is support for outline-minor-mode in emacs. #
# ^C @ ^T hides all Text. (Leaves all headings.) #
# ^C @ ^A shows All of file. #
# ^C @ ^Q Quickfolds entire file. (Leaves only top-level headings.) #
# ^C @ ^I shows Immediate children of node. #
# ^C @ ^S Shows all of a node. #
# ^C @ ^D hiDes all of a node. #
# ^HFoutline-mode gives more details. #
# (Or do ^HI and read emacs:outline mode.) #
# #
# Local variables: #
# mode: outline-minor #
# outline-regexp: "[{ \\t]*\\\\(fun \\\\)" #
# End: #
##########################################################################
## Code by Jeff Prothero: Copyright (c) 2010-2015,
## released per terms of SMLNJ-COPYRIGHT.
|
Hormonal control of "tissue" transglutaminase induction during programmed cell death in frog liver.
In this study, we show that sex hormones (testosterone, estradiol, and progesterone) act as physiological modulators of programmed cell death (PCD) during the frog liver involution observed postvitellogenesis. PCD in parenchymal cells is paralleled by the specific induction of the "tissue" transglutaminase (tTG) gene. tTG protein specifically accumulates in hepatocytes showing the morphological features of apoptosis. The hormone-dependent increase of both PCD and tTG was reproduced in ovariectomized frogs. Treatment of castrated animals with testosterone, estradiol, and progesterone inhibited the induction of both tTG and PCD, thus indicating that in vivo the drop in the circulating sex hormone is the signal favoring the involution phase of the maternal frog liver after mating. Although an affinity-purified polyclonal antibody raised against mammalian transglutaminase reacts in frog liver with a 55- to 60-kDa protein, concomitant with the onset of PCD, tTG cleavage products were detected, suggesting a proteolytic processing of the enzyme protein. These results represent the first evidence indicating that the physiological involution occurring postvitellogenesis of frog liver takes place by programmed cell death and that this, together with the concomitant induction of tTG gene expression, is regulated by sex hormones. |
Behavioral treatment of attentional dysfunction in chronic, treatment-refractory schizophrenia.
Attentional impairment is both a core characteristic of schizophrenia and a factor in producing poor outcomes in rehabilitative treatment. While cognitive rehabilitation interventions have demonstrated some success, the severity of some patients' attentional impairment is such that they cannot attend to material in these treatments, leading to unsatisfactory outcomes. In this paper, we report on the results of a behavioral intervention designed to increase attention span in the lowest functioning group of schizophrenia patients on a long-term inpatient unit. The treatment is based on social-learning procedures, especially the principle of shaping. Results indicate that chronic, treatment refractory patients with severe attentional impairment, including those with IQs near or within the mentally retarded range, can improve their attention spans to over 45 minutes with this treatment. |
Computing devices that use operating systems are increasingly popular, and more and more electronic devices are being provided with operating systems. Computing devices using operating systems are not accessible to a user for normal operation until the operating system of the device is loaded and operating normally. To protect the operating system from failure, computing devices define various conditions that trigger the pausing, stopping, or closing of the operating system. |
{
"title": "SETTINGS",
"groups": [
{
"title": "EDITOR",
"items": [
{
"title": "THEME",
"type": "list",
"key": "editor.theme",
"value": 38,
"items": [
"3024-day",
"3024-night",
"abcdef",
"ambiance-mobile",
"ambiance",
"base16-dark",
"base16-light",
"bespin",
"blackboard",
"cobalt",
"colorforth",
"darcula",
"dracula",
"duotone-dark",
"duotone-light",
"eclipse",
"elegant",
"erlang-dark",
"gruvbox-dark",
"hopscotch",
"icecoder",
"idea",
"isotope",
"lesser-dark",
"liquibyte",
"lucario",
"material-darker",
"material-ocean",
"material-palenight",
"material",
"mbo",
"mdn-like",
"midnight",
"monokai",
"moxer",
"neat",
"neo",
"night",
"nord",
"oceanic-next",
"panda-syntax",
"paraiso-dark",
"paraiso-light",
"pastel-on-dark",
"railscasts",
"rubyblue",
"seti",
"shadowfox",
"solarized",
"ssms",
"the-matrix",
"tomorrow-night-bright",
"tomorrow-night-eighties",
"ttcn",
"twilight",
"vibrant-ink",
"xq-dark",
"xq-light",
"yeti",
"yonce",
"zenburn"
]
},
{
"title": "FONT_NAME",
"type": "list",
"key": "editor.font.name",
"value": 0,
"items": [
"Menlo",
"Source Code Pro",
"Monaco",
"Iosevka",
"Ubuntu Mono",
"Hack",
"Cascadia Code"
]
},
{
"title": "FONT_SIZE",
"type": "number",
"key": "editor.font.size",
"value": 15
},
{
"title": "LINE_HEIGHT",
"type": "number",
"key": "editor.line.height",
"value": 20
},
{
"title": "SHOWS_LINE_NUMBERS",
"type": "boolean",
"key": "editor.line.numbers",
"value": false
},
{
"title": "SHOWS_INVISIBLES",
"type": "boolean",
"key": "editor.invisibles",
"value": false
},
{
"title": "RUN_PRETTIER",
"type": "script",
"value": "$app.notify({'name': 'prettify'})"
}
]
},
{
"title": "WINDOW",
"items": [
{
"title": "MAX_WIDTH",
"type": "string",
"key": "window.width.max",
"value": "150%"
},
{
"title": "HORIZONTAL_PADDING",
"type": "number",
"key": "window.padding.x",
"value": 10
},
{
"title": "VERTICAL_PADDING",
"type": "number",
"key": "window.padding.y",
"value": 10
},
{
"title": "SHADOW_OFFSET_X",
"type": "number",
"key": "window.shadow.x",
"value": 10
},
{
"title": "SHADOW_OFFSET_Y",
"type": "number",
"key": "window.shadow.y",
"value": 10
},
{
"title": "SHADOW_RADIUS",
"type": "number",
"key": "window.shadow.radius",
"value": 15
},
{
"title": "BACKGROUND_COLOR",
"type": "string",
"key": "window.bg.color",
"value": "#FFFFFF"
},
{
"title": "BACKGROUND_ALPHA",
"type": "number",
"key": "window.bg.alpha",
"value": 0
}
]
},
{
"title": "",
"items": [
{
"title": "ABOUT",
"type": "script",
"value": "require('scripts/readme').open();"
}
]
}
]
} |
Q:
Relationship between PDO class and PDOStatement class
I'm a PHP and MySQL beginner, currently teaching myself PDO, and I'm confused about some concepts:
$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass);
$sql = "SELECT * FROM users";
$users = $dbh->query($sql);
1. What is the relationship between the PDO class and the PDOStatement class?
$dbh is a new object of class PDO, but why is $users a PDOStatement object? fetchAll() is a method of the PDOStatement class, yet you can call it as $users->fetchAll(). So is $users a PDO or a PDOStatement object?
2. Someone said $users is a cursor and, once consumed, it won't rewind to the beginning of the result set.
foreach ($users as $row) {
print $row["name"] . "<br/>";
}
But why can you use it in a foreach statement? foreach provides a way to iterate over arrays. What is a cursor, actually? Is a cursor a pointer?
3. For the PDOStatement class, the docs say:
PDOStatement implements Traversable { ... }
Why does this class implement the Traversable interface? Is it an empty interface?
Thank you for your help!
A:
According to the documentation, the Traversable interface is what allows an object to be used in a foreach loop, and it is only meant to be implemented internally by built-in classes such as PDOStatement. Think of it as a convenient way of consuming a PDOStatement.
Basically, with PDO there are two ways to execute a query: one using PDO::prepare() and PDOStatement::execute(), and the other using PDO::query(). The latter does the prepare and execute in one call.
Neither call hands you the result rows directly; in both cases you end up working with a PDOStatement object, which lets you specify what data you want back. PDOStatement::fetchAll(), for example, lets you define how the returned rows are organized.
It seems more complicated at first sight, but it provides more flexibility.
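For illustration, here is a minimal sketch of both styles, reusing the users table and name column from the question (the 'alice' value is just a made-up example):
$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass);

// Style 1: query() prepares and executes in one call and returns a PDOStatement.
$stmt  = $dbh->query("SELECT * FROM users");
$users = $stmt->fetchAll(PDO::FETCH_ASSOC);   // every row as an associative array

// Style 2: prepare() + execute(), preferable when values come from user input.
$stmt = $dbh->prepare("SELECT * FROM users WHERE name = ?");
$stmt->execute(['alice']);                    // the bound value replaces the ? placeholder

// PDOStatement implements Traversable, so foreach advances its cursor row by row.
foreach ($stmt as $row) {
    print $row["name"] . "<br/>";
}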
|
It’s really that simple. We know they don’t. There have been extensive studies comparing groups of children who have been vaccinated with, say, the measles, mumps, and rubella (MMR) vaccine versus those who have not, and it’s very clear that there is no elevated rate of autism in the vaccinated children.
This simple truth is denied vigorously and vociferously by antivaxxers (those who oppose, usually rabidly, the use of vaccinations that prevent diseases), but they may as well deny the Earth is round and the sky is blue. It’s rock solid fact. They try to blame mercury in vaccines, but we know that mercury has nothing to do with autism; when thimerosal (a mercury compound) was removed from vaccines there was absolutely no change in the increase in autism rates.
I could go on and on. Virtually every claim made by antivaxxers is wrong. And this is a critically important issue; vaccines have literally saved hundreds of millions of lives. They save infants from potentially fatal but preventable diseases like pertussis and the flu.
I’m not exaggerating. The Committee on Oversight and Government Reform held a hearing trying to look into the cause and prevention of autism. Rep. Dan Burton (R-Ind.) launched into a several-minute diatribe (beginning at 12:58 in the video above) that starts off in an Orwellian statement: He claims he’s not antivax.
Then he launches into a five-minute speech that promotes long-debunked and clearly incorrect antivax claims, targeting mercury for the most part. Burton has long been an advocate for quackery; for at least a decade he has used Congressional situations like this to promote antiscience.
In the latest hearing, Burton sounds like a crackpot conspiracy theorist, to be honest, saying he knows—better than thousands of scientists who have spent their careers investigating these topics—that thimerosal causes neurological disorders (including autism).
He goes on for some time about mercury (as does Rep. Dennis Kucinich (D-Ohio) starting at 21:44 in the video), making it clear he doesn’t have a clue what he’s talking about. For example, very few vaccines still use mercury, and the ones that do use it in tiny amounts and in a form that does not accumulate in the body.
Talking about the danger of mercury in vaccines is like talking about the danger of having hydrogen—an explosive element!—in water. It’s nonsense. |
Dissociation of temporal and frontal components in the human auditory N1 wave: a scalp current density and dipole model analysis.
This study reports a combined scalp current density (SCD) and dipole model analysis of the N1 wave of the auditory event-related potentials evoked by 1 kHz tone bursts delivered every second. The SCD distributions revealed: (i) a sink and a source of current reversing in polarity at the inferotemporal level of each hemiscalp, compatible with neural generators in and around the supratemporal plane of the auditory cortex, as previously reported; and (ii) bilateral current sinks over frontal areas. Consistently, dynamic dipole model analysis showed that generators in and outside the auditory cortex are necessary to account for the observed current fields between 65 and 140 msec post stimulus. The frontal currents could originate from the motor cortex, the supplementary motor area and/or the cingulate gyrus. The dissociation of an exogenous, obligatory frontal component from the sensory-specific response in the auditory N1 suggests that parallel processes served by distinct neural systems are activated during acoustic stimulation. Implications for recent models of auditory processing are discussed. |
Ignition coil controlled ignition systems for automotive vehicles in which a transistor is serially connected with the ignition coil are well known; such systems usually include a control circuit which controls the transistor into a conductive state, whereupon current will flow from a source, typically the on-board network of a vehicle, including a battery, through the coil to the transistor to ground or chassis. When a sufficient amount of electromagnetic energy has been stored in the coil, the transistor is suddenly controlled into a blocked state, causing an inductive high-voltage surge in the secondary of the ignition coil which provides the energy for flash-over of the spark across a spark gap, for example a spark plug. A distributor can be interposed in the secondary between the coil and the spark plug for multi-cylinder internal combustion engines. Under certain operating conditions of the vehicle, the supply voltage may exceed its nominal value; the current through the transistor in series with the coil then becomes excessive, leading to damage and possibly destruction of the transistor. |
THN.com Blog: Weekend deadline would be more fan friendly
Nik Antropov was traded from the Maple Leafs to the Rangers at the trade deadline and recorded an assist in his first game. (Photo by Jim McIsaac/Getty Images)
Jason Kay
2009-03-06 09:33:00
Trade deadline day came in like a lamb and went out like a house cat (but a really angry one) and it was all good fun for those of us immersed in its machinations.
In Canada, TSN, Sportsnet and The Score devoted oodles of hours of TV time to the day, while their websites, and others such as NHL.com and THN.com, provided post-to-post, up-to-the-minute storylines, and then some.
Overhyped and over-covered? Perhaps. But, very clearly, there is an appetite for construction (and re-construction) in hockey fandom.
But what about those puck zealots not fortunate enough to have TV or Internet access midday, mid-week? You know, the Joes and Josephines working 9-to-5 who actually have to focus on their employment? Or the scores of students in school, champing to know what the Oilers got for Erik Cole.
On D-day in our Toronto office, we went down to the coffee shop and saw 15 guys craning their necks, trying to glimpse at the lone monitor in order to glean what had transpired at the deadline.
My 14-year-old son Noah, who very much wanted to watch this year’s event (but who doesn’t own a BlackBerry and wasn’t allowed to skip classes), came up with a solution: hold the trade deadline on a Sunday and make it a more inclusive made-for-TV event.
While not everyone is crazy about the idea of working on a Sunday, the idea has merit. Instantly, the prospective audience has deepened and widened from a demographics standpoint, making it a more appealing venture to the advertising and sponsorship community. Thousands more eyeballs would be glued to tubes, particularly in Canada, and website traffic would be bound to spike.
A Sunday afternoon in early March is perfect timing from a hockey fan’s perspective: the NFL is done, MLB is still early in spring training and the weather remains typically crummy in most of the hockey world, meaning the majority of us are indoors anyhow.
And the NHL schedule on Sundays is often light – or could be tailored accordingly – so the deadline wouldn’t have to conflict with a heavy slate of games.
Our Adam Proteau built on that suggestion, floating the idea of having it on all-star Sunday, prior to the game. It would mean an earlier deadline – a concept which has some support in the hockey community – but more importantly, an even more intense spotlight shining on the product. Those not really interested in the East vs. West score-a-thon would now have a new incentive.
For a league on the lookout for expanded revenue streams and increased fan devotion, Super Sunday, whether it’s in February or March, is an idea worth investigating.
Jason Kay is the editor in chief of The Hockey News and a regular contributor to THN.com. His blog appears every Friday.
For more great profiles, news and views from the world of hockey, Subscribe to The Hockey News magazine. |
County councillors will vote on whether to submit the business case for a new single council for Buckinghamshire to Government on Thursday, after it was backed by Cabinet.
Under the plans, all five county and district authorities would be abolished and replaced by a new Buckinghamshire Council, saving tax payers more than £18m a year.
Cabinet members who unanimously backed the proposals on Monday said the move would also empower local communities, simplify things for residents and give Bucks one strong voice on regional infrastructure issues.
Leader of the Council, Martin Tett, will now write to the four district councils seeking consensus from them on the conclusions of the draft business case.
On Thursday Full Council will be asked to approve the business case and authorise Martin to submit it to the Department for Communities and Local Government for a decision.
See futurebucks.co.uk for the latest information on the business case.
Selected quotes from Cabinet Members at Monday’s meeting
>>>Leader of the Council, Martin Tett, who represents the Little Chalfont and Amersham Common division, said: “This is a proposal to Government to disband all five councils and replace them with one and completely new council. It would cover the geography of the county but would actually be the best of the heritage of all the preceding ones and hopefully bring new innovations as well.
“We can deliver better services and better value for people and that’s not by cutting troops on the frontline. We don’t need 250 councillors for Buckinghamshire – that’s about half the size of the entire House of Commons to run Buckinghamshire. We can do it with less, we can do it with about 98.
“We don’t need chief executives for five councils. We don’t need senior staff or finance departments or HR departments for five councils, we can do it once rather than five times.
“This is a strong proposition and I hope desperately that our colleagues in the districts will work with us to try and find a degree of consensus around this proposal. I will certainly be writing to them straight after this meeting to make that offer to them.”
>>>Deputy Leader and Cabinet Member for Health and Wellbeing, Mike Appleyard, who represents The Wooburns, Bourne End and Hedsor, said: “What is important about some of the numbers in this report is they have all been independently checked and verified.”
He said that while there will clearly be some implementation costs, ‘we start making very significant savings in year three and we are saying that openly so people can understand we are being transparent’.
“We have put all of our documents into the public domain, we have told people what our assumptions are, we have told people what our brief is.”
>>>Cabinet Member for Transport, Mark Shaw, who represents Chesham, said: “If you were to organise local government in this country today would you really be looking at having three different levels of authority? When you talk to people they talk about their issues as what ‘The Council’ is going to do about it. How is ‘The Council’ going to help me? They don’t differentiate; they just want a one-stop shop and one person to go to.
“Someone who will listen to them, not a myriad of different people, different issues, different offices and different telephone numbers to contact.
“It strikes me that the most effective way we can move things forward is by having one new council for Buckinghamshire that caters for the wants and needs of people and helps them with their local issues.
“One of the key things that impressed me about this document is the key element of localism and enhancing these relationships with local people and local services.”
>>>Cabinet Member for Children’s Services, Lin Hazell, who represents Farnham Common and Burnham Beeches, said: “People come to me about an issue and they don’t know whether it’s a district or a parish or a county issue, but they just look on you as the local elected leader to resolve it. People don’t necessarily understand the difference in the structures and actually I don’t think they really care. They just want you to fix it if something goes wrong.”
She said people ‘want to feel confident they are masters of their own destiny’.
“I see this as the next step in empowering individuals locally and I think this will be a great success.”
>>>Cabinet Member for Planning and the Environment, Warren Whyte, who represents Buckingham East, said engagement sessions had been held with town and parish councils earlier in the summer, when concerns over localism and how ‘a county-wide unitary would affect local parishes and communities’ were raised.
But he added: “I think the proposal in front of us has dealt with that in a very robust way.”
He said planning decisions will be made locally and town and parish councils will be empowered if they wish to take up services.
He also highlighted proposals for community hubs: “In my division, Buckingham Library already has the nucleus of that idea and if it can be rolled out they will be a really powerful way of making sure all council services are seen locally.
“Most of our residents don’t care which council deals with it, they just want an efficient council to react to their concerns. This can only make it easier to do that. Our residents will get a better bang for their buck.”
>>>Cabinet Member for Education and Skills, Zahir Mohammed, who represents Booker, Cressex and Castlefield, said: “It is very important we are more local and more responsive and try to deliver better services for our residents. In my particular division we have five district councillors and one county councillor and it just makes so much sense if there was just one councillor who was dealing with all the queries our residents have.
“This is not just about financial savings but it works and stacks up non-financially as well. We have a growth agenda coming up, we have a demand for services, particularly around some of our statutory services, for example school place planning, and it requires the council to be a lot more agile in how it delivers services.”
>>>Cabinet Member for Resources, John Chilver, who represents Winslow, said: “On the financial side a unitary authority would deliver significant savings. It would reduce bureaucracy and deliver value for money for the tax payer, as well as a reduction in council tax rates for most residents through harmonisation.
“It will be easier for residents to have only one council to deal with. I myself have seen in the reception here people being redirected to the district council offices because their issue was dealt with there.
“Having a district council responsible for housing and a county council responsible for delivery of infrastructure does not lead to joined-up strategic clarity and a single unitary would ensure a more co-ordinated approach.”
He welcomed proposals for community boards and the local planning committees, particularly a separate planning committee for Winslow, Buckingham and the north of Aylesbury Vale.
Business case highlights
More Local
The new Buckinghamshire Council would bring access to local services, accountability and decision-making direct to people’s doorsteps. The business case includes plans for:
Community Boards
Nineteen Community Boards would serve Buckinghamshire’s towns and villages, enabling local councillors to take decisions on issues such as funding for community groups and local roads maintenance. They would meet regularly in each area and the public would be encouraged to attend alongside town and parish councils, police, fire, and health organisations.
Community Hubs
Community Hubs in each of the 19 Community Board areas would provide a base for a number of public services, including the new Buckinghamshire Council. It means residents, particularly vulnerable people who might be unable to travel very far, would be able to access a wide range of services from a place that is local to them – all under one roof.
Parish/Town Delivery Partnership
Parish and Town Councils would have the opportunity to take on more services and community assets if they choose to, from public toilets and parks to support for the isolated and footpath repairs.
Clearer accountability
The number of councillors sitting on ‘principal’ authorities in Bucks would reduce from 238 to 98, saving £1.2m and delivering clearer local accountability.
Better Value
One council instead of five would save tax payers £18.2m a year by reducing the duplication which currently exists under such a bureaucratic system and delivering services much more efficiently and effectively.
For example, around £4m would be saved by combining the back-office functions of the five councils, such as HR and finance. £3.6m would be saved by running services more efficiently on a larger scale, with greater economies of scale. £3m would be saved by cutting the numbers of senior managers which previously existed across the five councils.
The money saved equates to more than £84 per household per year.
These savings are based on conservative estimates – in reality it is anticipated that actual savings will be significantly higher.
The new Buckinghamshire Council could also earn £48m from selling off council buildings which are no longer required. This money could be invested in improving infrastructure like roads and schools.
The one-off cost of establishing the new council would be £16.2m. It would take just over two years for the new council to pay for itself (savings would build up to £18.2m per annum by Year Three, not from day one).
Overall savings (once the cost of change is taken into account) for the first five years of a new council would be £45m.
Council tax
Under one Buckinghamshire Council, council tax would be harmonised, so for example a Band D rate payer in Buckingham will pay the same as a Band D rate payer in Chesham. It would result in a reduction in council tax for the majority of Buckinghamshire’s residents. The level has been brought in line with the rate Wycombe District rate payers are expected to pay by 2019, which is the lowest of all the districts in Bucks. The cost of equalising council tax would be £2.2m.
Better quality services
Services which complement one another but are currently divided between the districts and county can also be brought together. This will result in better services for residents.
For example:
• Services which aim to help people at risk of addiction, obesity or ill health (currently County) can be brought together with alcohol licensing, housing, leisure centres and environmental health (currently District).
• The Districts’ bin collection and street cleaning roles can be merged with the County’s waste disposal services, such as its household waste and recycling centres, landfill sites and energy-from-waste plant, which will enhance recycling rates and efficiencies.
• There would be one council responsible for planning for new homes (District) and infrastructure such as schools, broadband and roads (County), creating a much more coherent approach to housing growth throughout Buckinghamshire which should result in more sustainable development.
• Trading Standards (County) and Environmental Health (District) can be brought together, creating a one-stop shop for key consumer protection services.
• There would be an improved service for people with disabilities through the joining up of assessments and grants (County) with benefits, housing and planning applications (District).
• If one council had responsibility for both fostering (County) and housing stock (District), there would be the potential to put foster parents in a larger home to enable more placements and prevent young people ending up in care homes or being sent to foster carers outside of Buckinghamshire.
Simpler
There is currently widespread confusion about which council is responsible for which service. For example, nearly eight out of ten residents wrongly believe the County Council is responsible for rubbish collection, when in fact this is a district responsibility.
The new Buckinghamshire Council would be a one-stop shop for residents – one website and one telephone number to access all council services in the county, from benefits and planning applications to roads maintenance and social services.
Bucks-wide organisations already in existence
Many other local public services and charities are already set up on a Buckinghamshire-wide scale and would find it simpler and cheaper to work with just one council based on the same geography. These include health organisations such as the federated Buckinghamshire Clinical Commissioning Groups (CCG) and Healthwatch Bucks; business and infrastructure groups including Buckinghamshire Business First, Buckinghamshire Thames Valley Local Enterprise Partnership and Buckinghamshire Advantage; and voluntary sector bodies such as Community Impact Bucks and Heart of Bucks. Meanwhile, Amersham & Wycombe College and Aylesbury College have recently agreed to merge to create a further education college based on Buckinghamshire’s geography.
One voice
A single county-wide unitary council would speak with one voice for Buckinghamshire, strengthening our influence on the regional and national stage.
PHPDeveloper.org (http://www.phpdeveloper.org)
Up-to-the-Minute PHP News, views and community
On thePHP.cc's site today Sebastian Bergmann, the creator of the popular PHPUnit unit testing framework, shows you how to move to using the tool's phar file and away from the previously used PEAR install method.
In April 2014 I announced that I would shut down pear.phpunit.de on December 31, 2014. The motivation behind this move was to simplify the release process of PHPUnit by getting rid of an outdated distribution channel. I was afraid that I would leave users of my software behind by this move. [...] I am relieved that the shutdown of pear.phpunit.de went as smooth as it did. [...] In this article I show you how to make the transition from using PHPUnit from a PEAR package to using PHPUnit from a PHP Archive or using Composer as easy and convenient as possible.
There are three main steps to the migration from PEAR to the phar- or Composer-based installation:
Uninstalling PEAR Packages
Using PHPUnit from a PHP Archive (PHAR)
Installing PHPUnit with Composer
He includes the commands and configuration files/settings you'll need to make the transition happen. He also mentions that older versions are still available if there's a need but only on GitHub/Packagist as phar packages, not via PEAR.
Link: http://thephp.cc/news/2015/01/phpunit-migration-from-pear-to-phar (posted Wed, 14 Jan 2015)
In his latest post Phil Sturgeon talks about a project that's been running for a while, The League of Extraordinary Packages, and aims to clear up some recent misconceptions about the group and what it strives for in the projects it endorses.
This is the story of group of friends, who decided to write some code, but somehow confused and angered everyone with a keyboard. [...] Where should I release this code [I was super excited about releasing]? Should I release it with a vendor name of Sturgeon? That seemed rather egotistical. I could make something up, but what is the point of a single vendor with a single package? I wondered if any of my buddies were having this problem. [...] Being as hungover as I was, I thought long and hard, for about 5 seconds until something amazing happened in my brain... The PHP Super Best Friends Club! The guys loved it, and we started making plans immediately.
He goes on to talk about The League and some of the goals of the organization, including the stated desire for quality code and a constant stream of work on its projects (no abandoned or stale projects). He talks about how some of the rules for inclusion were created and some of the members of the various projects it includes. He then gets to the "recent misunderstanding" part of things with the clash between the League and the PHP-FIG. He clears up some of the confusion in that thread by stating that:
League != PHPClasses
League != PEAR
He finishes off the post talking some about the leadership of the group (hint: it's an organization, not really run by a person or persons) and some of the work he's doing to ensure the future of the League and the packages it includes.
Link: https://philsturgeon.uk/blog/2014/10/what-is-the-league-of-extraordinary-packages (posted Thu, 16 Oct 2014)
If you're a user of the AWS SDK for PHP and are using the PEAR channel for installing the tool, you'll need to check out this new post on the AWS blog about that channel's retirement.
There's been a noticeable wave of popular PHP projects recently announcing that they will no longer support PEAR as an installation method. Because the AWS SDK for PHP provides a PEAR channel, we've been very interested in the discussion in the community on PEAR channel support. PEAR has been one of the many ways to install the AWS SDK for PHP since 2010. While it's served us well, better alternatives for installing PHP packages are now available (i.e., Composer) and literally all of the PEAR dependencies of the AWS SDK for PHP are no longer providing updates to their PEAR channels.
He goes through several of the major dependencies the AWS SDK has (like Pirum, PHPUnit and Guzzle) and how they've announced the retirement of their own PEAR channels. Updates to the AWS SDK PEAR channel will cease on September 15th, 2014, but the channel will still be available for downloading older versions of the library. He also links to the location of the latest Phar and Zip archives if you'd like to use those.
Link: http://blogs.aws.amazon.com/php/post/TxFFMBZ80DA1OJ/End-of-Life-of-PEAR-Channel (posted Wed, 20 Aug 2014)
The PEAR blog has posted a new announcement about the latest release of the PEAR PHP package manager, version 1.9.5.
The PEAR installer version 1.9.5 has been released today. The new version - three years after the last stable 1.9.4 and 2 weeks after the preview - is a bugfix only release. 13 bugs have been fixed.
Link: http://blog.pear.php.net/2014/07/12/pear-1-9-5/ (posted Mon, 14 Jul 2014)
In his latest post Hannes Magnusson describes his "dream" about a future for PHP where things like upgrading and working with extensions would be simpler, faster and more manageable.
Today we will revolutionize PHP. We will make it easier to upgrade the things you care about. We will make it easier to not upgrade things you don't want to upgrade. We will make it easier to distribute your extensions. We will make it easier to release according to your own schedule. We will make it easier to add functionality. We will make it easier to work. Ok, today is a white lie here maybe... I haven't actually implemented this, but bare with me here for a second.
With the introduction and huge growth of Composer, the PEAR package manager is fading in popularity and is slowly being abandoned. Unfortunately, it's still the primary mechanism for deploying and installing PHP extensions (PECL packages). He talks about some of his recent experience reviving a package and issues he had around the use of the packaging manager. He proposes the creation of a new "pecl install" tool - a package manager dedicated to PHP extensions, decoupled from PEAR.
The manager would just install basic PHP and then leave it up to you to pick which features you need from there. The idea is still in its early stages, but it has taken root, and plans are being worked through to see whether it will work for the future of the language.
Link: http://bjori.blogspot.com/2014/05/i-have-dream.html (posted Mon, 26 May 2014)
Fabien Potencier has a new post to his site today talking about a recent trend in the PHP community around dependency and package management, the rise of Composer and the fall of PEAR.
As a good package manager to let user easily install plugin/bundles/MODs was probably also a big concern for phpBB, I talked to Nils about this topic during this 2011 hackday in San Francisco. After sharing my thoughts about libzypp, "..., I [Nils] wrote the first lines of what should become Composer a few months later". [...] So, what about PEAR? PEAR served the PHP community for many years, and I think it's time now to make it die.
He goes on to talk about how he personally has used PEAR in the past and when he stopped work on Pirum, a simplified PEAR channel manager. Based on some logging results, he found that most dependencies on his channels were related to PHPUnit's needs. When Sebastian Bergmann announced the move of PHPUnit away from PEAR, Fabien decided to make his own move to deprecate and eventually remove new releases from the PEAR sources.
Link: http://fabien.potencier.org/article/72/the-rise-of-composer-and-the-fall-of-pear (posted Mon, 05 May 2014)
There's a new addition to the GitHub wiki that's quite important for the PHPUnit users out there. Sebastian Bergmann has officially announced the end of life for the PEAR version of the installer for the popular PHPUnit tool.
Since PHPUnit 3.7, released in the fall of 2012, using the PEAR Installer was no longer the only installation method for PHPUnit. Today most users of PHPUnit prefer to use a PHP Archive (PHAR) of PHPUnit or Composer to download and install PHPUnit. Starting with PHPUnit 4.0 the PEAR package of PHPUnit was merely a distribution mechanism for the PHP Archive (PHAR) and many of PHPUnit's dependencies were no longer released as PEAR packages. Furthermore, the PEAR installation method has been removed from the documentation. We are taking the next step in retiring the PEAR installation method with today's release of PHPUnit 3.7.35 and PHPUnit 4.0.17.
As part of this end of life, pear.phpunit.de will also be decommissioned, no later than the end of 2014.
Link: https://github.com/sebastianbergmann/phpunit/wiki/End-of-Life-for-PEAR-Installation-Method (posted Mon, 21 Apr 2014)
Ben Ramsey has an interesting post to his site today looking at what he calls the Fall of PEAR and the rise of Composer when it comes to package management in the PHP community.
PEAR's biggest selling-point -the curation of packages by a governed community - was also its biggest problem. There was no choice, and things moved slowly. If a package stagnated in development, I couldn't find another actively supported one to solve the same need. In theory, the maintenance of the package could be taken over by someone else, but this didn't always happen, and contributing patches was not clear or easy.
Ben talks about how, despite the PEAR developers' best efforts, the proposed new package manager (Pyrus and PEAR2) couldn't keep up. Then, from a discussion at a conference, a standards group, the PHP-FIG, was formed, and the first standard soon followed: PSR-0 for autoloading. With this in hand and becoming widely adopted, a new tool was created to make it easier to share and install packages built to this new standard - Composer.
Composer is what PEAR should have been. Through Packagist, Composer is the democratization of PHP userland libraries. Many libraries in the repository implement similar functionality, but through a show of popularity, the community self-selects the packages that are of the best quality. [...] In just a few short years, Composer has revitalized the PHP community and changed the way we do development.
Link: http://benramsey.com/blog/2013/11/the-fall-of-pear-and-the-rise-of-composer/ (posted Wed, 27 Nov 2013)
For those who have made the switch to OS X Mavericks and are wondering how to get PHP and MySQL into a working state, Rob Allen has posted a quick guide to getting it all set up.
With OS X 10.9 Mavericks, Apple chose to ship PHP 5.4.17. This is how to set it up from a clean install of Mavericks. Note: If you don't want to use the built-in PHP or want to use version 5.5, then these are [other] alternatives: a binary package from Liip, Zend Server and a Homebrew install.
He provides all the commands you'll need to get things up and running including checking file/directory permissions, installing MySQL and using the command line to work with Apache (no more "Web Sharing"). He also includes the configuration changes to be made to the php.ini including how to enable Xdebug. There's lots of other good things included in the guide as well like setting up Composer, PHPUnit and how to compile a few handy extensions.
Link: http://akrabat.com/computing/setting-up-php-mysql-on-os-x-mavericks/ (posted Mon, 04 Nov 2013)
Igor Wiedler has a recent post to his site about creating stateless services, specifically in the context of using a dependency injection container to manage the objects your application uses.
As more frameworks and libraries, particularly in the PHP world, move towards adopting the Dependency Injection pattern they are all faced with the problem of bootstrapping their application and constructing the object graph. In many cases this is solved by a Dependency Injection Container (DIC). Such a container manages the creation of all the things. The things it manages are services. Or are they?
He notes that, according to some of the principles of domain-driven design, "services" should be stateless - the results of calls to the service shouldn't alter it, and should depend only on the values passed in. He goes on to put this into the context of a DIC and gives an example of the "request service" (and how it violates the DDD principle of statelessness). He talks some about scopes (dependencies) and mutable services, and about methods to get around these issues with the "request" instance, ultimately coming to the conclusion that event listeners might be the way to go.
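To make the distinction concrete, here is a minimal illustration (my own addition, not from Wiedler's post, written in JavaScript rather than PHP for brevity; all names are invented): a stateless service returns results that depend only on its inputs and configuration, while a container-managed "request service" is mutated on every request.

// Illustrative sketch only: a stateless service versus a mutable "request" holder.
class PriceCalculator {
  constructor(vatRate) { this.vatRate = vatRate; }            // configuration set once
  total(netAmount) { return netAmount * (1 + this.vatRate); } // depends only on inputs
}

class RequestHolder {
  constructor() { this.currentRequest = null; }             // swapped out per request
  setRequest(request) { this.currentRequest = request; }    // every caller sees it change
}

const calc = new PriceCalculator(0.25);
console.log(calc.total(100)); // 125 - same inputs always give the same result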
Q:
Why does a complex signal have no imaginary spectrum?
I am learning about complex sampling.
I am confused why $e^{j 2\pi f n}$ has only a real spectrum. I would have thought the $j \sin(2\pi f n)$ term would produce a single spike in the imaginary spectrum, just like there is a single spike on the real axis from $\cos(2\pi f n)$.
I understand that the spectrum is one sided because the negative complex exponentials cancel out, but why is there not a one sided real and imaginary spectrum?
Many thanks
A:
The imaginary part of the spectrum corresponds to the odd part of the time-domain sequence. Since the given signal is even, i.e.,
$$x[n]=e^{jn\omega_0}=x^*[-n]=e^{-j(-n)\omega_0}=e^{jn\omega_0}$$
the imaginary part of the spectrum is zero, i.e., the spectrum is purely real-valued.
Note that for complex-valued signals, even in this context means $x[n]=x^*[-n]$, i.e., its real part is even and its imaginary part is odd.
In sum, if a sequence $x[n]$ satisfies
$$x[n]=x^*[-n]$$
(i.e., the sequence is even and, consequently, its odd part is zero), then its DTFT is purely real-valued.
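As a short worked check (an addition to the answer above, using the standard conjugate-symmetric/antisymmetric decomposition): split the sequence into
$$x_e[n]=\tfrac{1}{2}\left(x[n]+x^*[-n]\right)=e^{jn\omega_0},\qquad x_o[n]=\tfrac{1}{2}\left(x[n]-x^*[-n]\right)=0.$$
Since $x_e[n]$ produces $\operatorname{Re}\{X(e^{j\omega})\}$ and $x_o[n]$ produces $j\operatorname{Im}\{X(e^{j\omega})\}$, a vanishing odd part forces the imaginary spectrum to be identically zero. Equivalently, the spectrum of $\sin(n\omega_0)$ is purely imaginary and odd, so the factor $j$ in $j\sin(n\omega_0)$ makes its contribution real; it adds to the $\cos(n\omega_0)$ spectrum at $+\omega_0$ and cancels it at $-\omega_0$, which is exactly why only a single, one-sided, real spike remains.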
In My Backyard: Creationism in California
by Eugenie C. Scott
Preface
In the spring 2005 issue of California Wild, the magazine of the California Academy of Sciences, NCSE's executive director Eugenie C. Scott, a Fellow of the Academy, discussed creationism in California, in a piece entitled "In My Backyard." A section of the article briefly described controversies over evolution education in the Roseville, California, schools over the last few years.
Subsequently, Larry Caldwell, a Roseville resident active in those controversies, filed suit against Scott and NCSE, alleging that "false statements about Caldwell in the Scott Article are defamatory per se, since they expose him to hatred, contempt, ridicule, or obloquy ...." Scott and NCSE obtained pro bono legal representation from Robert Mahnke and Warrington Parker III of the San Francisco office of Heller Ehrman LLP.
Caldwell also threatened lawsuits against California Wild and a number of people who quoted from or posted links to the article on California Wild's website. In response, California Wild removed the article from its website in June. Caldwell also has an ongoing suit against the Roseville Joint Union High School District for allegedly violating his civil rights during those controversies.
Scott corrected a small number of errors in a letter to the editor published in the summer 2005 issue of California Wild, including some of which Caldwell had complained. When that letter appeared, various creationism-promoting institutions accused NCSE of a "campaign of disinformation" and a "pattern of making false claims and character attacks," uncritically repeating Caldwell's allegations. On the contrary, NCSE contends that it has well earned its reputation as the most important and reliable source for information on the creationism/evolution controversy.
NCSE is pleased that California Wild has now posted a corrected version of "In My Backyard" on its website. To enable readers to decide for themselves whether the corrected errors were minor, as Scott contends, or outrageous, as Caldwell contends, NCSE is posting the following corrected version of the article with the changes indicated. We believe that the evidence speaks for itself.
Deletions are shown in strikeout and additions in green.
In My Backyard
Creationism in California
by Eugenie C. Scott
In November 2004 March 2002,
a school district in Cobb County, Georgia, pasted an antievolution disclaimer
into its biology textbooks. The disclaimer read, in part, "Evolution is theory,
not fact," meaning that evolution was speculation, rather than a foundational
idea of science.
Evolution, after all, is the idea that the universe has had
a history: that stars, galaxies, planets and living things have changed through
time, and that living things have a genealogical relationship. Although
scientists argue about the details of how evolution occurred, none argue over
whether evolution took place. That a school board felt it had to make an antievolution
gesture just seems so nineteenth century. Many Californians chalked up this
example of the persistent creationism/evolution controversy to the fact that it
happened in, well, Georgia. They were no doubt thinking, I'm glad this problem
is not in my backyard.
But alas, no. California has had its share of
creationism/evolution clashes too.
The state is in fact the home of the largest creationism
organization in the country, the Institute for Creation Research, based in
Santee, east of San Diego. And, lest northern Californians start feeling smug,
two of the leaders of the Intelligent Design (ID) creationism movement have
connections with the University of California, Berkeley. Retired Boalt Hall law
professor Phillip Johnson is a chief architect of the ID political and
rhetorical approach, and Jonathan Wells, author of the best-selling
antievolution screed Icons of Evolution,
received his PhD from the university's Department of Molecular and Cellular
Biology. ID up and comer Jed Macosko, now in the department of physics at Wake
Forest University, also did postgraduate work in Berkeley, where he taught a
class (which, gratefully, did not carry science credit) called "Evidence for
Design in Nature."
At the National Center for Science Education (NCSE), we
monitor the creationism/evolution controversy and provide information and
advice to those who want to keep evolution in classrooms and creationism out.
Over the years we have seen school board candidates run on creationist
platforms. We have seen textbooks declared to contain "too much evolution," or
rejected because they don't "balance" the teaching of evolution with teachings
from the Bible. We have had calls from teachers wondering what to do about the
instructor down the hall who refuses to teach evolution, or who brings personal
religious views into the classroom. And we have had calls from students
complaining about teachers openly proselytizing during class time.
Local school districts are where most curriculum decisions
are made. Because our center has had considerable experience advising school
boards and parents on creationism/evolution issues, we receive many calls about
school boards that want to limit the teaching of evolution in some way,
including passing "theory not fact" policies such as were recently the issue in
Georgia. Parents often pressure board members to add intelligent design to
curricula, while some ministers invited to school assemblies use the
opportunity to gain converts to creationism.
Charter schools, freed from some bureaucratic constraints,
sometimes try to stretch the science curriculum to include creationism.
Problems don't occur only at schools: informal science centers like zoos,
science museums, aquaria, and national parks are also sites where evolution
gets questioned. Visitors may protest evolution being presented without
qualifiers ("some scientists believe") or argue against the presentation of the
Earth's age as ancient.
These incidents occur across the country, not just in Bible
belt areas. They are more likely to arise in small towns and suburbs than large
urban settings: problems in California occur more frequently in places like
Hemet, Vista, Morgan Hill, San Juan Capistrano, Chester, and Weed, than in San
Diego or San Francisco. Small towns and suburbs are naturally more homogeneous.
If that homogeneity includes a sizeable degree of religious and political
conservatism, the environment is ripe for the eruption of a
creationism/evolution controversy. Battles are usually triggered by events such
as science textbook adoptions, the writing or revision of state science
education standards, and school board elections.
During the early decades of the 20th century, creationists
made it a crime to teach evolution. In 1925, the statute's legitimacy was
tested when John Scopes was tried in Dayton, Tennessee for defying the law.
Scopes lost, the laws stayed on the books, and publishers swiftly eliminated
evolution from high school textbooks. It returned in the 1960s thanks to a
movement to reform science education. In 1968 the Supreme Court struck down
antievolution laws – which weren't being enforced anyway – and the teaching of
evolution brought forth a new form of antievolutionism, "creation science."
Creation science proposes that the universe appeared all at
once in its present form a few thousand years ago, and that this biblical
literalist view is supported by scientific data. No substantial change in
astronomical or biological phenomena has taken place since then, they say. The
face of the earth was shaped by a real Noah's Flood, which deposited all the
sedimentary deposits in the world, carved the Grand Canyon, pushed up the
Rockies and the Himalayas, and gouged out the oceans.
In the late 1970s and early 80s, so-called "equal time"
legislation was introduced in at least 24 states – including California – that
would require the teaching of "creation science" if evolution were taught. The
argument was that if creationism could be made scientific, it deserved to be
taught in the public schools.
Conservative Christians, whose theology requires some degree
of biblical literalism, are the driving force behind American antievolutionism.
They make up a substantial number of Americans: polls estimate religious
conservatives comprise anywhere from 25 to 40 percent of the population.
However, the majority of American Christians belong to denominations rejecting
biblical literalism. Catholics, according to official doctrine, believe that
God created through evolution, while mainstream Protestants accept some
variants of this idea.
Because no empirical evidence supports such views, creation
science concentrates instead on the supposed shortcomings of evolutionary
science. Evolution didn't happen, they claim, because the second law of
thermodynamics supposedly prevents natural phenomena from becoming more complex
over time. This law is used to argue against the universe originating in the
"Big Bang," the evolution of complex life, and the development of biological
diversity. Gaps in the fossil record are regularly trotted out, while the
gradual transitions in the fossils of birds, whales, humans, and many other
animals are ignored. Natural selection, based on random variation of genetic
material and adaptive differential reproduction, is said to be too weak a
mechanism to account for complexity. Any argument against evolution is
considered evidence in support of creationism.
Creation science literature presents the teaching of both
creation science and evolution as good pedagogy. Teach the students both views,
and let them decide, they urge. But science is not a democratic process. All
theories are not created equal. Science, in fact, is highly discriminatory. It
discards explanations that don't work. The idea that everything appeared all at
one time in its present form was rejected as science even before Darwin. It is
not good pedagogy to teach students erroneous information: it wastes time, and
confuses students as to the scientific consensus.
The "fairness" argument has been extremely successful for
antievolutionists. Fairness and equal time deservedly are important American
cultural values, and most Americans respond favorably to them. Many citizens do
not realize that these otherwise valuable sentiments are irrelevant to
decisions about what to teach in the science classroom. If there were other
scientific theories explaining what evolution explains, scientists would be
teaching them.
Efforts to mandate the teaching of creation science were
brought to a halt by the Supreme Court's 1987 Edwards vs. Aguillard decision concerning a Louisiana equal time
law. The Court declared creation science to be a religious idea and that
advocating it would unconstitutionally promote religion in the public schools.
Creation science as a legal strategy was over, although creation science as a social
movement has continued to grow and spread.
Since then, a new strategy known as "Intelligent Design" has
come into being. It grew out of the Edwards
decision itself, which noted that it was legal to teach "scientific
alternatives to evolution." Proponents of ID proclaimed it to be one such
alternative.
Unlike creation
science, ID makes no fact claims about the origins of the universe, or the
history of Earth, or of life on Earth. Instead, it proposes that some things in
nature are too complex to have been formed from natural causes and therefore
must have been produced by "an intelligence." Some structures showing an
unexpectedly high level of organization (e.g., the first life forms, or
cellular structures such as the flagella of bacteria) are inferred to be too
complex for chance to have brought them about.
Of course, no evolutionary biologist ascribes the bacterial
flagellum or other complex structures to the chance assembly of parts: natural
selection is a mechanism that can generate complexity, and there may be other
mechanisms not yet discovered. This last brings up another problem with ID:
most scientists appreciate that we do not yet understand everything there is to
know about the natural world. But if a natural cause for something is not known
(indeed, there is no scientific consensus on the origin of life, or the
evolutionary assembly of the bacterial flagellum) it's not helpful to throw up
one's hands and say, "I don't know! God must have done it!" The scientific
approach would be to say, "I don't know, yet,"
and keep looking.
ID does not identify the "intelligent agent" and nothing is
said about how or when or with what this agent created life. This "creationism
lite" makes no claims about the origin of Grand Canyon by Noah's Flood, or a 10,000
year old Earth. This avoids immediate rejection by the scholarly community, and
accommodates a wide variety of antievolutionists, including biblical
literalist/young earth supporters as well as more moderate Christians. But most
ID literature merely asserts the failure of evolution to explain complexity,
and makes no attempt to provide an alternative model. It is a variant of the
creation science maxim that "evidence against evolution is evidence for
creationism."
In recent years, the main think tank of ID, the
Seattle-based Discovery Institute's Center for Science and Culture, has shifted
to advocating that "evidence against evolution," or EAE, be taught rather than
ID. It's a tacit admission that there is no evidence for their position. Perhaps ID proponents began to realize that
design implies a designer, an agent, and that judges would figure out pretty
quickly that the intended agent was God. Once proposals for teaching ID were
recognized as a back door way of teaching "God did it," the Center realized,
such policies would be declared unconstitutional. Better to convince students
that evolution didn't occur and let them conclude that the only reasonable
explanation left is creation by God.
The history of creationism has followed a pattern. First,
creationists attempted to ban evolution, then to teach creation science, next
to teach ID, and now (most commonly) they lobby to teach EAE. The
creationism/evolution controversy that occurred in the northern California
community of Roseville during 2004 is a microcosm of this history.
Roseville is a community of about 92,000 people about 20
miles from Sacramento. For several years a school board split between moderates
and conservatives has argued over evolution, sex education and other hot
educational issues. In 2001, one school board member proposed requiring the
teaching of creation science. In a letter to the community she wrote, "I
believe God has given us these scientists and this information at this time to
use for this exact purpose."
In June 2003, the Roseville district was choosing a textbook
for high school biology courses. One local citizen, Larry Caldwell, protested
that the book favored by teachers took a "one-sided" approach to teaching
evolution. Like all commercial textbooks, the Holt, Rinehart, and Winston
textbook includes evolution but no creationist or antievolution content.
Caldwell said that the textbook did not invite students to "think critically"
about the subject of evolution and he and other
citizens offered a stack of supplemental books and videotapes instructional
materials that would redress the book's deficiencies. These were an odd
mixture of ID and creation science:DVDs
including: a videotape promoted by the Discovery Institute; a
young-earth creationist book, Refuting
Evolution by Jonathan Safarti Sarfati;
and the Jehovah's Witness book Life: How
Did It Get Here? By Evolution or Creation? Thanks to its free distribution,
this book is probably the most widely-circulated creation science book in the
country. It is unknown who submitted the
creation science materials, while Caldwell submitted the video as well as
materials written by ID proponent Cornelius Hunter. Reportedly, the creation
science books were not considered further by the district.
District teachers strongly opposed these materials. The
board, even with a 4-1 antievolutionist majority, found it difficult to mandate promote their use over strong educator rejection, but
they persevered. At the next meeting, they declared that the creationist
materials would be "recommended" but not required, and that each school
could decide whether or not to use them the
submitted materials. This was to provide an opportunity for creationist
parents to lobby teachers and administrators. The board district office also
organized an "information session" for teachers on the supplementary materials
led by Caldwell and ID supporter Cornelius Hunter, a local engineer and author
of several religiously-oriented antievolution books and articles.
The polite but unconvinced teachers suggested the
supplementary materials be sent to scientists at the University of California,
Davis, California State University, Sacramento, and Brigham Young University
(one of the school board members is a Mormon) to review the materials and
Caldwell's analysis of the Holt textbook.
The scientists' report unanimously declared Caldwell's
supplementary materials unscientific. His Hunter's comments about evolution in
the textbook analysis did not express professional scientists' view of
evolution. One scientist wrote of Caldwell's
Hunter's "gross misunderstanding
of the nature of science." Another, in exasperation, wrote, "... consider that
the thousands of us who practice evolutionary biology daily might just not be
such blind fools as to miss the ‘flaws' that Hunter thinks are fatal to what we
do." The most "positive" comment from the scientists' critiques was that one of
the ID videos might have some educational value as "a tongue-in-cheek example
of weak argumentative strategy and pseudo science." The school district
administration agreed not to adopt the materials.
But the district administration's rejection was not the same
as rejection by its board of education. Caldwell filed a complaint against the
district and claimed that the adoption of the Holt textbook did not follow the
rules because parent input in the process was inadequate. He also proposed that
the board consider a policy he drew up, which he called the "Quality Science
Education" policy, which was an EAE approach couched in the language of critical
thinking. Quoting the California State Board of Education Policy on the
Teaching of Natural Sciences (1989), it read, in part,
…because "nothing in
science or in any other field of knowledge shall be taught dogmatically [and]
scientific theories are constantly subject to testing, modification, and
refutation as new evidence and new ideas emerge" teachers in the Roseville
Joint Union High School District are expected to help students analyze the
scientific strengths and weaknesses of existing scientific theories, including
the theory of evolution.
Months
passed while the board studied the issue, heard citizen commentary, and
repeatedly postponed the vote. Letters to the editors of regional newspapers
appeared in abundance. Citizens complained that the board was spending too much
time and money on creationism, and not addressing bread and butter issues such
as funding and class size. Twenty-eight of the 32 science teachers in the
district signed a petition against the policy and a sister proposal to set up
antievolution centers in libraries. The board, apparently exhausted from the
almost year-long struggle, voted 3:2 against instituting the policy.
In
the November 2004 elections, one of the anti-evolution incumbents was voted
out of did
not run for office re-election and
two new members were elected. The board shifted its focus to what they
considered more pressing issues. Creationism in Roseville seemed, finally, to
be a dead issue.
But
in January of 2005, Larry Caldwell sued the district and certain administrators
for not providing him due process. A district teacher sighed, "here we go
again."
Meanwhile,
back in Cobb County, Georgia, parents angry at the inclusion of the textbook
disclaimer sued and won the first round in Federal District court. The school
board has appealed the decision. And in Dover, Pennsylvania, parents sued their
school board over its policy requiring the teaching of ID and EAE (worded as
"gaps/problems in Darwin's theory").
Although
California is on the cutting edge of scientific research, proponents of
teaching creationism in the public schools are nonetheless banging on the
doors. Even in the Bay Area, we have small towns and suburbs with substantial
minorities of religious conservatives who do not like evolution. If a parent
asks a teacher, "you aren't going to teach evolution, are you?" the teacher may
decide — because the curriculum is overstuffed with topics anyway — that it is
easier to not get around to teaching evolution.
Antievolutionists
recently ran for school boards in Castro Valley and Modesto. California is not
immune to creationism and antievolutionism – it is in our backyard.
Eugenie C. Scott is Executive Director of
the National Center for Science Education, which actively supports the teaching
of evolution in schools and fights a constant battle against the dark side.
Here are the Sacred Harp sings along the Front Range:
Monthly: second Fridays
Quarterly: fifth Fridays (singing from Christian Harmony)
Dates for 2018: March 7 & 28; April 4 & 18; May 2, 16 and 30; June 13 & 27
Weekly: every Monday except first Mondays (see Denver page for location change info)
Monthly: first Thursdays
Monthly: third Sundays
Suspended indefinitely as of December 2016
Investigating failure behavior and origins under supposed "shear bond" loading.
This study evaluated failure behavior when resin-composite cylinders bonded to dentin fractured under traditional "shear" testing. Failure was assessed by scaling of failure loads to changes in cylinder radii and by fracture surface analysis. Three stress models were examined, including failure by: bonded area; flat-on-cylinder contact; and uniformly-loaded, cantilevered beam. Nine 2-mm occlusal dentin discs for each radius tested were embedded in resin and bonded to resin-composite cylinders; radii (mm) = 0.79375; 1.5875; 2.38125; 3.175. Samples were "shear" tested at 1.0 mm/min. Following testing, discs were finished with silicon carbide paper (240-600 grit) to remove residual composite debris and tested again using different radii. Failure stresses were calculated for: "shear"; flat-on-cylinder contact; and bending of a uniformly-loaded cantilevered beam. Stress equations and constants were evaluated for each model. Fracture-surface analysis was performed. Failure stresses calculated as flat-on-cylinder contact scaled best with its radius relationship. Stress equation constants were constant for failure from the outside surface of the loaded cylinders, and not for the bonded surface area or the cantilevered beam. Contact failure stresses were constant over all specimen sizes. Fractography reinforced that failures originated from the loaded cylinder surface and were unrelated to the bonded surface area. "Shear bond" testing does not appear to test the bonded interface. Load/area "stress" calculations have no physical meaning. While failure is related to contact stresses, the mechanism(s) likely involve non-linear damage accumulation, which may only indirectly be influenced by the interface.
Q:
javascript to raise popup window on third page accessed
After a person has toured my website for N pages, I would like to raise a popup window asking them if they would like to subscribe to my newsletter.
I've found sample code to raise a popup after a delay of some seconds, I've found samples for asking only once, but not one that can track the number of pages traversed.
Where can I find sample JS code to raise a window after a certain number of pages have been traversed?
My simple-minded analysis is that normally each invocation of the script on N pages would be a different invocation, and hence would not have any record of the previous page's invocation. So each copy would have to read a cookie set by the previous copy, increment it, and store it back. Then, when N=3 and whatever other conditions I think are appropriate are satisfied, the popup is triggered.
A:
Your analysis is correct! You'll need cookies of some sort, whether tracked server side or just good old fashioned javascript cookies.
Here's the best rundown I've seen of how to implement them: Quirksmode Cookies
function createCookie(name,value,days) {
if (days) {
var date = new Date();
date.setTime(date.getTime()+(days*24*60*60*1000));
var expires = "; expires="+date.toGMTString();
}
else var expires = "";
document.cookie = name+"="+value+expires+"; path=/";
}
function readCookie(name) {
var nameEQ = name + "=";
var ca = document.cookie.split(';');
for(var i=0;i < ca.length;i++) {
var c = ca[i];
while (c.charAt(0)==' ') c = c.substring(1,c.length);
if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length,c.length);
}
return null;
}
function eraseCookie(name) {
createCookie(name,"",-1);
}
To update the value of the cookie to track page count - again assuming you aren't doing this server-side - you'll need to reset the cookie and set a new one with the new value. Updating/changing a value of an existing cookie just isn't really a supported operation of cookies.
Or you could just create 3 cookies, I guess. Whatever floats your integer ;)
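For completeness, here is a rough usage sketch (my own addition, building on the createCookie/readCookie helpers above; the cookie names, the 30-day lifetime, the popup URL and the threshold of 3 pages are all arbitrary placeholders):

// Run this on every page. Assumes the createCookie/readCookie helpers above.
function maybeShowNewsletterPopup() {
    var count = parseInt(readCookie("pagesVisited"), 10) || 0;  // missing cookie -> 0
    count += 1;
    createCookie("pagesVisited", count, 30);        // overwrite with the new count
    if (count === 3 && !readCookie("newsletterAsked")) {
        createCookie("newsletterAsked", "yes", 365);    // only ever ask once
        window.open("/newsletter-signup.html", "newsletter", "width=400,height=300");
    }
}
maybeShowNewsletterPopup();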
Q:
Unable to access span tag's innerHTML in Javascript function
I am trying to create an email message (when a user clicks a link) whose body is pre-filled with text from an asp:Literal. The HTML is written as follows:
<tr>
<td>
<img src="../../images/Question.gif" alt="Q" />
Q:
<span id="question"><asp:Literal ID="literalQuestion" runat="server"></asp:Literal></span>
</td>
</tr>
<tr>
<td>
<img src="../../images/Answer.gif" alt="Q" />
A:
<span id="answer"><asp:Literal ID="literalAnswer" runat="server"></asp:Literal></span>
</td>
</tr>
<tr>
<td>
</td>
</tr>
<tr>
<td>
Click <a href="#" onclick="javaScript:emailWrongInfo()">Here</a> to email.
</td>
</tr>
I also have a JavaScript function that opens an email message when the user clicks the email link:
function emailWrongInfo() {
window.location="mailto:Test@test.com?Subject=Email%20Notification&Body=Question:%0A" + document.getElementById("question").innerHTML+ "%0A%0AAnswer:%0A" + document.getElementById("answer").innerHTML;
}
When I click the link, the new email message opens, but the question and answer are not filled in in the body portion. It seems like the span's text is not being pulled.
Any help would be greatly appreciated.
A:
Use HTMLElementObject.innerHTML instead of elementNode.textContent.
HTMLElementObject.innerHTML is used to get the inner HTML of an element, while the textContent property returns or sets the text from the selected element.
function emailWrongInfo() {
window.location="mailto:Test@test.com?Subject=Email%20Notification&Body=Question:%0A" + document.getElementById("question").innerHTML+ "%0A%0AAnswer:%0A" + document.getElementById("answer").innerHTML;
}
Hope, it'll solve your problem.
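One further hedged suggestion, beyond the answer above: mailto: bodies are easily broken by spaces, line breaks and characters such as & in the span text, so it is safer to percent-encode the subject and body with encodeURIComponent than to concatenate raw markup (the address and wording below are just the ones from the question):

function emailWrongInfo() {
    // Same idea as the question's handler, but the subject and body are
    // percent-encoded so special characters cannot break the mailto: URL.
    var question = document.getElementById("question").innerHTML;
    var answer = document.getElementById("answer").innerHTML;
    var body = "Question:\n" + question + "\n\nAnswer:\n" + answer;
    window.location = "mailto:Test@test.com" +
        "?subject=" + encodeURIComponent("Email Notification") +
        "&body=" + encodeURIComponent(body);
}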
Effects of copper nanoparticles on the development of zebrafish embryos.
The environmental behavior and the potential toxicity of copper nanoparticles (nano-Cu) in water are major concerns for assessing their environmental safety. The present study was undertaken to characterize the properties of nano-Cu in E3 medium, such as size changes, solubility, zeta-potential and pH, and to test the toxicity of nano-Cu suspension to zebrafish embryos. Dynamic light scattering and solubility experiments showed that three components coexisted in the nano-Cu exposure system, including small nano-Cu aggregates still suspended in E3 medium, large nano-Cu aggregates deposited on the container bottom and dissolved copper species (Cu(dis)). Both the zeta-potential of nano-Cu particles in E3 medium and the pH of the nano-Cu suspension showed no change during a 24-hour period. It is found that nano-Cu retarded the hatching of zebrafish embryos and caused morphological malformation of the larvae, and high concentrations (>0.1 mg/L) of nano-Cu even killed the gastrula-stage zebrafish embryos. Cu2+ ions were used to study the toxicity caused by nano-Cu dissolution. The embryo toxicity of nano-Cu at 0.01 and 0.05 mg/L showed no significant difference from Cu2+ at the corresponding concentrations (0.006 and 0.03 mg/L), but 0.1 mg/L nano-Cu had a greater toxicity than 0.06 mg/L Cu2+.
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 15 2018 10:31:50).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import <objc/NSObject.h>
#import <PassKitCore/NSCopying-Protocol.h>
#import <PassKitCore/NSSecureCoding-Protocol.h>
@class NSArray, NSDate, NSDecimalNumber, NSNumber, NSString, PKFelicaTransitAppletState;
@interface PKTransitAppletState : NSObject <NSCopying, NSSecureCoding>
{
_Bool _blacklisted;
_Bool _needsStationProcessing;
_Bool _appletStateDirty;
NSNumber *_historySequenceNumber;
NSNumber *_serverRefreshIdentifier;
NSDecimalNumber *_balance;
NSNumber *_loyaltyBalance;
NSString *_currency;
NSDate *_expirationDate;
NSArray *_balances;
NSArray *_enrouteTransitTypes;
}
+ (BOOL)supportsSecureCoding;
- (void).cxx_destruct;
@property(nonatomic) _Bool appletStateDirty; // @synthesize appletStateDirty=_appletStateDirty;
@property(nonatomic) _Bool needsStationProcessing; // @synthesize needsStationProcessing=_needsStationProcessing;
@property(copy, nonatomic) NSArray *enrouteTransitTypes; // @synthesize enrouteTransitTypes=_enrouteTransitTypes;
@property(copy, nonatomic) NSArray *balances; // @synthesize balances=_balances;
@property(copy, nonatomic) NSDate *expirationDate; // @synthesize expirationDate=_expirationDate;
@property(copy, nonatomic) NSString *currency; // @synthesize currency=_currency;
@property(copy, nonatomic) NSNumber *loyaltyBalance; // @synthesize loyaltyBalance=_loyaltyBalance;
@property(copy, nonatomic) NSDecimalNumber *balance; // @synthesize balance=_balance;
@property(copy, nonatomic) NSNumber *serverRefreshIdentifier; // @synthesize serverRefreshIdentifier=_serverRefreshIdentifier;
@property(copy, nonatomic) NSNumber *historySequenceNumber; // @synthesize historySequenceNumber=_historySequenceNumber;
@property(nonatomic, getter=isBlacklisted) _Bool blacklisted; // @synthesize blacklisted=_blacklisted;
- (void)addEnrouteTransitType:(id)arg1;
- (id)transitPassPropertiesWithPaymentApplication:(id)arg1;
- (void)_resolveTransactionsFromState:(id)arg1 toState:(id)arg2 withHistoryRecords:(id)arg3 concreteTransactions:(id *)arg4 ephemeralTransaction:(id *)arg5 balanceLabels:(id)arg6 unitDictionary:(id)arg7;
- (id)updatedEnrouteTransitTypesFromExistingTypes:(id)arg1 newTypes:(id)arg2;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4 balanceLabelDictionary:(id)arg5 unitDictionary:(id)arg6;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4 balanceLabelDictionary:(id)arg5;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3;
- (unsigned long long)hash;
- (BOOL)isEqual:(id)arg1;
@property(readonly, nonatomic, getter=isInStation) _Bool inStation; // @dynamic inStation;
- (id)copyWithZone:(struct _NSZone *)arg1;
- (void)encodeWithCoder:(id)arg1;
- (id)initWithCoder:(id)arg1;
@property(readonly, nonatomic) PKFelicaTransitAppletState *felicaState;
@end
In search of a big enough stick to tame the wild operating room inventory beast.
This article discusses the many aspects of a prime vendor relationship with a medical-surgical distributor. It outlines features or value-added services that should be included when a hospital embarks on a new prime vendor relationship or tries to improve an existing one.
No Country for Young Women has a very cool series going on about women working in architecture and construction–two fields that are wildly male-dominated. Sara Fox, an American property development consultant living in London, who was responsible for the construction of London’s iconic Gherkin building, had this to say:
Because construction is still, really and truly a male-dominated industry, the initial reaction is that as a woman I can’t possibly have any credibility. I can’t tell you the number of meetings I’ve attended where even though I was the most senior person present, because I was the only woman in the room, I was expected to serve coffee and tea. I have often found it’s just easier to offer, to save embarrassment (and a long wait). I can also remember attending numerous meetings with my almost all male project team, when firms who were tendering for work directed their presentation to one of the men. Those who arrived thinking I was just there to take notes of the proceedings were pretty soon disabused of that notion!
The “it’s just easier to offer” approach reminds me of so many conversations I’ve had with young women who ask how to stay vigilant about your feminist identity when sexism is so pervasive and defending against it so exhausting. My advice is imperfect, but works for me: fight when you have the energy to fight, use humor whenever it feels appropriate, and recognize that being “out” as a feminist in non-feminist friendly spaces is really critical, but you aren’t a bad feminist if you don’t have the energy to do it all the time.
This week, the Senate succeeded on partisan lines in passing a bill known widely as the “Republican Tax Scam,” a widely and unanimously decried piece of legislation that exploits the working class to expand the wealth of the top 5%, strips millions of people’s healthcare, and tanks the country’s economy while it’s at it, all at the service of the party’s wealthy donors.
The bill cleared the Senate floor after it was given the go-ahead by so-called “moderate” Republicans, one of whom was Senator Susan Collins, considered a ‘hero‘ by centrist Democrats after her vote to block Republican Obamacare repeal legislation. Senator Collins apparently ‘blasted’ coverage of her approach to the bill on ...
For the past few months, I’ve seen several articles — almost exclusively written by white women — arguing that we shouldn’t enforce Title IX protections for survivors of sexual assault because the authors believe Black men are more likely to be accused. The narrative has been picked up by numerous media outlets and used by Education Secretary Betsy DeVos to strip protections for survivors.
The idea that survivors’ rights are a threat to Black men leaves a bad taste in my mouth.
Let me be clear: that’s not because I’m not worried about race discrimination in school discipline. We have no data to support the argument that Black men are more likely to be accused of or ...
In law school, we spend a lot of time thinking about the “theory of the case”: what’s the problem, who’s the victim, who’s the villain. It turns out that how you define the problem directly informs the kind of solution that a judge, a lawmaker, or, say, the readers of the New York Times, are primed to accept.
The fast-moving national reckoning over sexual harassment in the workplace toppled another television news star on Wednesday . . .
The downfall of Mr. Lauer, a presence in American living rooms for more than 20 years, adds to a head-spinning string of prominent firings over sexual harassment and abuse allegations.
Here’s
Q:
Two-dimensional mesh in FEM: generating P2, P3, ... meshes from a P1 mesh
I have a two-dimensional mesh generated by triangle (the mesh generator software is not relevant). This software generates a perfect mesh for approximating the solution by piecewise linear functions (a P1 mesh).
For example, for the unit square [0,1]x[0,1] in 2D I have a file with the coordinates of its nodes (for example, a mesh with 5 nodes):
1 0.0 0.0 # coordinates of node 1
2 1.0 0.0 # coordinates of node 2
3 1.0 1.0 # coordinates of node 3
4 0.0 1.0 # coordinates of node 4
5 0.5 0.5 # coordinates of node 5
called coordinate.dat. The mesh has 4 elements (triangles), whose connectivity is given by the file called element.dat:
1 1 5 4 # vertices of triangle 1
2 1 2 5 # vertices of triangle 2
3 2 3 5 # vertices of triangle 3
4 5 2 4 # vertices of triangle 4
Now, I need to program problems where the finite element spaces belong to arbitrary polynomial degree, for example, approximate the solution by polynomials of degree 2, 3, etc.
To do this, I need to build a new mesh with new nodes and a new conectivity file with more nodes. An example of the general mesh that I need:
I already know how to calculate the coordinate of all new nodes on each element (it is just a convex combination easy to calculate) but I can't get a easy way to generate the new conectivity file.
I've been thinking for several days about how to program a general method for this, but I could not come up with anything.
Do you know some method to create meshes P2, P3, ... from a mesh P1?
Or, more precisely: how can I build the new connectivity file?
A:
The mesh for Pk elements is usually exactly the same as for P1. The only difference is that for higher orders you also need to keep track of edge indices and associate degrees of freedom (DOFs) with edges (from P2 on) and with element interiors (from P3 on). This is only a matter of defining local indices for nodes and edges with respect to your reference element, mapping to the actual mesh element, and doing some index bookkeeping. Computing the indices is quite straightforward once you have figured that out, and it does not involve working with refined mesh data structures. The advantage of this technique is that it does not rely on a specific "placement" of DOFs on mesh nodes and generalizes easily to more general finite elements. I suggest you start over from scratch and consult the existing literature on the implementation of finite element methods. There is even a book with the title "Understanding and Implementing the Finite Element Method" by Gockenbach. While I have not read it myself, I am pretty sure that he explains these basics better than I ever could in this post. Other popular introductions usually also have a chapter on implementation.
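To make the edge bookkeeping concrete, here is a minimal sketch (my own illustration, not taken from any particular library) of how a P2 connectivity list can be derived from the P1 data: build a global edge numbering from the triangle list and give each edge one extra node, numbered after the vertices. The function name, array layout, and the [v0, v1, v2, m01, m12, m20] node ordering are assumptions you would adapt to your own file format.

import numpy as np

def p1_to_p2(vertices, triangles):
    """vertices: (n_v, 2) array of coordinates.
    triangles: (n_t, 3) array of 0-based vertex indices.
    Returns (new_vertices, p2_triangles) where each P2 triangle lists
    [v0, v1, v2, m01, m12, m20] and midpoint nodes are numbered n_v + edge_id."""
    n_v = len(vertices)
    edge_id = {}            # maps a sorted vertex pair -> global edge index
    midpoints = []          # coordinates of the new edge nodes
    p2_triangles = []

    for tri in triangles:
        local = list(tri)
        for a, b in [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]:
            key = (min(a, b), max(a, b))     # edges are shared, so sort the pair
            if key not in edge_id:           # first time we see this edge
                edge_id[key] = len(edge_id)
                midpoints.append(0.5 * (vertices[a] + vertices[b]))
            local.append(n_v + edge_id[key]) # global index of the midpoint node
        p2_triangles.append(local)

    new_vertices = np.vstack([vertices, midpoints]) if midpoints else vertices
    return new_vertices, np.array(p2_triangles)

# Example with the 5-node mesh from the question (indices shifted to 0-based):
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
tris = np.array([[0, 4, 3], [0, 1, 4], [1, 2, 4], [2, 3, 4]])
p2_verts, p2_tris = p1_to_p2(verts, tris)
print(p2_tris)   # 6 node indices per triangle; 5 vertices + 8 edge nodes = 13 nodes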
What you intended to do is, however, not a useless exercise. A popular approach for visualizing higher-order polynomials is to use refined meshes; see also this question. In this case you need to deal with an interpolation onto a refined mesh. Interpolation is easy if you know the vertices, and refining is usually done recursively, i.e., you refine once, then again, etc. In this way it is easy to keep track of connectivities if you have a suitable index for your edges. Setting that up is a straightforward task, since edges can be defined as tuples of node indices which you can (again) define once and for all for your reference element and map to each of the actual mesh elements. However, since your question is related to setting up higher-order elements, I will not elaborate on mesh refinement here.
|
LeBron James to become free agent
What's the Point of LeBron James Opting Out?
June 24 (Bloomberg) -- LeBron James will opt out of the final two years of his contract with the Miami Heat, making him a free agent after leading the franchise to four straight finals appearances, ESPN reported.
LeBron James has informed the Miami Heat he will opt out of the final two years of his contract, the team confirmed Tuesday. This places the NBA's best player into the open market on July 1 and makes 2014 look like 2010 all over again.
Four years ago is when high-profile free agents like James, Dwyane Wade, Chris Bosh, Joe Johnson, Amare Stoudemire, Carlos Boozer and David Lee flooded the market. Now, with James opting out after the Heat's disappointing Finals loss to the Spurs, the market is flooded again.
The Bulls plan to pursue Carmelo Anthony and are widely considered the favorite to land him should he choose to leave the Knicks, which is in question. As for James, the four-time most valuable player openly has talked about how impressed he was with the Bulls' pitch from 2010.
However, the Associated Press reported that James' decision to opt out does not indicate a desire to leave the Heat. In fact, most league observers view James' move, in which he is walking away from two years and $42.7 million, as a power move to help -- or force, depending on perspective -- Heat President Pat Riley into making roster improvements.
If James does embark on a free-agent tour, expect the Bulls obviously to get involved. Their pitch in 2010 resonated strongly with James, particularly a discussion with coach Tom Thibodeau. In fact, sources at the time indicated to the Tribune that the Bulls were confident they almost landed James, despite a widespread public perception that he, Wade and Bosh were destined for the Heat all along.
The Bulls are scheduled to have only $11 million to $12 million of salary-cap space when they proceed with the imminent amnesty of Carlos Boozer. They would have to get creative and sell off pieces for nothing to create more cap space.
Heat President Pat Riley, who spoke at a news conference last week about what it might take to keep the core of the team together, issued this statement on James' decision:
“I was informed this morning of his intentions. We fully expected LeBron to opt-out and exercise his free agent rights, so this does not come as a surprise. As I said at the press conference last week, players have a right to free agency and when they have these opportunities, the right to explore their options. The last four seasons have been historic and LeBron James, Dwyane Wade, Chris Bosh and Erik Spoelstra have led the Miami HEAT to one of the most unprecedented runs in the history of the NBA. We look forward to sitting down with LeBron and his representatives and talking about our future together. At the moment, we are preparing for the opportunities in the Draft and Free Agency as we continue with our goal of winning NBA Championships.”
James and the Heat have advanced to the NBA Finals all four seasons he has played in Miami, winning twice.
A source close to LeBron James confirmed Tuesday to the Sun Sentinel that the forward has informed the Heat he has opted out of the final two years on the free-agent contract he signed with the team in July 2010.
The Clippers, like so many NBA teams, have had discussions about a contingency plan on how to acquire LeBron James after the All-Star forward informed the Miami Heat on Tuesday that he was opting out of his contract to become an unrestricted free agent on July 1. |
Castelsardo Cathedral
Castelsardo Cathedral is a cathedral in Castelsardo, northern Sardinia, Italy, dedicated to Saint Anthony the Great. It became the seat of the bishop of Ampurias in 1503. In 1839 the diocese of Ampurias was merged into that of Tempio and the episcopal seat moved to Tempio Cathedral, at which point Castelsardo Cathedral became a co-cathedral, as it remains in the present diocese of Tempio-Ampurias.
Description
The current building dates from the reconstruction begun in 1597 that lasted until the 18th century. The cathedral is a mixture of Catalan Gothic and Renaissance elements, and overlooks the sea directly. The interior is on the Latin cross plan, with a single nave with barrel vaults, side chapels and transept. The crossing has a cross vault on four pilasters with sculpted capitals.
The church has a tall bell-tower, topped by a small dome decorated with majolica.
The presbytery is raised, and has a marble balustrade. The apse, with a cross vault decorated with stars, houses the marble high altar of 1810, characterized by the church's main attraction, the Enthroned Madonna and Child, a painting of the 15th century, attributed to the Master of Castelsardo. Also by the latter artist is a St. Michael the Archangel, displayed in the crypt, now home to the diocesan museum.
Category:Churches in the province of Sassari
Category:Roman Catholic cathedrals in Italy
Category:Gothic architecture in Sardinia
Category:Renaissance architecture in Sardinia
Category:Cathedrals in Sardinia |
Image caption: A cauldron was lit in Stoke Mandeville to mark the start of the London 2012 Paralympic torch relay
Buckinghamshire councillors have "unanimously agreed" a proposal to always light the torch at Stoke Mandeville for Paralympic Games.
The council will now urge the government to ask the International Paralympic Committee to agree.
It wants the Paralympic flame lit at the home of the Games every time the event is staged.
Council chair Marion Clayton said: "It is the birthplace of the Games and should be recognised as such."
She added: "It was good to see such cross-party enthusiasm for it."
'Recognised internationally'
The Paralympic Games - named as the parallel event to the Olympics - originated as a result of an archery competition organised by Dr Ludwig Guttmann for his spinal cord injury patients on the grass outside Stoke Mandeville Hospital.
He had opened the National Spinal Injuries Centre there in 1944 and introduced sport into his patient rehabilitation programme.
The doctor held the first Paralympic sports event in 1948 on the opening day of the London Games and developed Stoke Mandeville Stadium, the National Centre for Disability Sport, alongside the hospital.
Before the debate on the joint-party motion on Wednesday, Ms Clayton said that the hospital and stadium should be commemorated, just as Athens is for the Olympics.
"I think that Stoke Mandeville is now increasingly recognised internationally as the birthplace of the Paralympics and I do think that is something we should promote," she said. |
The cdx-hox pathway in hematopoietic stem cell formation from embryonic stem cells.
Embryonic stem cells (ESCs) differentiated in vitro will yield a multitude of hematopoietic derivatives, yet progenitors displaying true stem cell activity remain difficult to obtain. Possible causes are a biased differentiation to primitive yolk sac-type hematopoiesis, and a variety of developmental or functional deficiencies. Recent studies in the zebrafish have identified the caudal homeobox transcription factors (cdx1/4) and posterior hox genes (hoxa9a, hoxb7a) as key regulators for blood formation during embryonic development. Activation of Cdx and Hox genes during the in vitro differentiation of mouse ESCs followed by co-culture on supportive stromal cells generates ESC-derived hematopoietic stem cells (HSCs) capable of multilineage repopulation of lethally irradiated adult mice. We show here that brief pulses of ectopic Cdx4 or HoxB4 expression are sufficient to enhance hematopoiesis during ESC differentiation, presumably by acting as developmental switches to activate posterior Hox genes. Insights into the role of the Cdx-Hox gene pathway during embryonic hematopoietic development in the zebrafish have allowed us to improve the derivation of repopulating HSCs from murine ESCs. |
Hotel Features
Along with WiFi in public areas, this hotel has laundry facilities and tour/ticket assistance. Free breakfast and free self parking are also provided.
Room Amenities
All 47 rooms provide conveniences like refrigerators and microwaves, plus free wired Internet and TVs with cable channels. Other amenities available to guests include coffee makers, free local calls, and hair dryers.
Hotel Amenities
Guests can enjoy a complimentary breakfast. High-speed wireless Internet access is available for a surcharge. This Richmond Hill hotel also offers tour/ticket assistance and laundry facilities. Onsite self parking is complimentary. |
Floyd Huddleston
Floyd Huddleston (August 19, 1918 - September 27, 1991) was an American songwriter, screenwriter, and television producer.
Career
Huddleston was born in Leland, Mississippi, and would later sing and write songs for Glenn Miller's Army Air Force Band during World War II. After he was discharged, Huddleston came to California where he was under contract with Decca Records in 1949. There, he co-wrote with Al Rinker an estimated 800 songs, some of which were recorded by Frank Sinatra, Judy Garland, and Sarah Vaughan. Soon after, Huddleston would compose lyrics for theater productions such as Shuffle Along and The New Ziegfeld Follies.
Later in his life he wrote lyrics for songs in several films, including The Ballad of Josie and Midnight Cowboy. For Disney, he contributed the song "Everybody Wants to be a Cat" to The Aristocats. For Robin Hood, he and George Bruns were nominated for an Academy Award for "Love," sung by his wife, Nancy Adams. Huddleston also wrote songs, ultimately unused, for a proposed version of The Rescuers, performed by Louis Prima with Sam Butera and the Witnesses. In 1978, he not only produced and composed songs for a TV special starring Lucille Ball but also wrote its script.
Death
Huddleston died from a heart attack on September 27, 1991 at a hospital located in Panorama City, Los Angeles. Huddleston was survived by his wife Nancy Adams Huddleston, his son, Huston, and his mother, Hettye T. Huddleston. At the time of his death, Huddleston was working on a musical titled Brother Elwood's Gospel Truck.
Discography
With His Family Singers
Happy Birthday Jesus (Dobre Records DR1013, 1977)
References
External links
Category:1918 births
Category:1991 deaths
Category:American lyricists
Category:Songwriters from Mississippi
Category:20th-century American musicians
Category:People from Leland, Mississippi |
Q:
Is the use of NoSQL Databases impractical for large datasets where you need to search by content?
I've been learning about NoSQL Databases for a week now.
I really understand the advantages of NoSQL Databases and the many use cases they are great for.
But often people write their articles as if NoSQL could replace Relational Databases. And there is the point I can't get my head around:
NoSQL Databases are (often) key-value stores.
Of course it is possible to store everything into a key-value store (by encoding the data in JSON, XML, whatever), but the problem I see is that you need to get some amount of data that matches a specific criterion, in many use cases. In a NoSQL database you have only one criterion you can search for effectively - the key. Relational Databases are optimized to search for any value in the data row effectively.
So NoSQL Databases are not really a choice for persisting data that need to be searched by their content. Or have I misunderstood something?
An example:
You need to store user data for a webshop.
In a relational database you store every user as a row in the users table, with an ID, the name, his country, etc.
In a NoSQL Database you would store each user with his ID as key and all his data (encoded in JSON, etc.) as value.
So if you need to get all users from a specific country (for some reason the marketing guys need to know something about them), it's easy to do in the relational database, but not very efficient in the NoSQL database, because you have to fetch every user, parse all the data, and filter.
I'm not saying it's impossible, but it gets a lot more tricky and, I guess, not that efficient if you want to search within the data of NoSQL entries.
You could create a key for each country that stores the keys of every user who lives in that country, and then get the users of a specific country by fetching all the keys deposited under that country's key. But I think this technique makes a complex dataset even more complex - it's harder to implement and not as efficient as querying an SQL database. So I think it's not an approach you would use in production. Or is it?
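For concreteness, here is a toy sketch of that country-key technique, with plain in-memory Python dicts standing in for the key-value store; the field names and helper functions are made up for illustration:

import json

users = {}           # primary store: user_id -> JSON blob
by_country = {}      # manual secondary index: country -> set of user ids

def put_user(user_id, name, country):
    users[user_id] = json.dumps({"name": name, "country": country})
    by_country.setdefault(country, set()).add(user_id)   # keep the index in sync on every write

def users_in(country):
    # Two lookups instead of a full scan, at the price of maintaining the index yourself
    return [json.loads(users[uid]) for uid in by_country.get(country, ())]

put_user("u1", "Ada", "DE")
put_user("u2", "Bob", "US")
put_user("u3", "Eva", "DE")
print(users_in("DE"))   # the two German users, found without scanning every value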
I'm not really sure if I misunderstood something or overlooked some concepts or best practices to handle such use cases. Maybe you could correct my statements and answer my questions.
A:
While I agree with your premise that NoSQL is not a panacea for all database woes, I think you misunderstand one key point.
In NoSQL database you have only one criterion you can search for effectively - the key.
This is clearly not true.
For example MongoDB supports indices. (from https://docs.mongodb.org/v3.0/core/indexes-introduction/)
Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must perform a collection scan, i.e. scan every document in a collection, to select those documents that match the query statement. If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect.
Indexes are special data structures [1] that store a small portion of the collection’s data set in an easy to traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field. The ordering of the index entries supports efficient equality matches and range-based query operations. In addition, MongoDB can return sorted results by using the ordering in the index.
As does couchbase (from http://docs.couchbase.com/admin/admin/Views/views-intro.html)
Couchbase views enable indexing and querying of data. A view creates an index on the data according to the defined format and structure. The view consists of specific fields and information extracted from the objects in Couchbase.
In fact, anything that calls itself a NoSQL database rather than a key-value store should really support some kind of indexing scheme.
In fact, it is often the flexibility of these index schemes that makes NoSQL shine. In my opinion, the language used to define the NoSQL indices is often more expressive or natural than SQL, and since the indices usually live outside the table, you don't need to change your table schemas to support them. (Not to say you can't do similar things in SQL, but to me it feels like there is a lot more hoop-jumping involved.)
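As a rough illustration of the MongoDB case quoted above (assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names are invented), declaring a secondary index and querying by it looks something like this:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB server
db = client.webshop

# One secondary index on the "country" field; MongoDB maintains it on every write.
db.users.create_index("country")

db.users.insert_many([
    {"_id": "u1", "name": "Ada", "country": "DE"},
    {"_id": "u2", "name": "Bob", "country": "US"},
    {"_id": "u3", "name": "Eva", "country": "DE"},
])

# Served via the index rather than a full collection scan.
for doc in db.users.find({"country": "DE"}):
    print(doc["name"])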
A:
Generally speaking, if your workflow is a perfect match for relational database queries, you'll find relational databases to be the most efficient approach. It's kind of tautological, but it's true.
The claim that many NoSQL advocates would make is that many workflows were actually massaged into a relational form, and would have been more effective before such massaging. The validity of this claim is difficult to ascertain. Clearly there are jobs that are very well described by SQL queries. I can say from my experience that my particular relational programming tasks could have been done using NoSQL with nearly the same level of efficiency, if not more. However, that's a very subjective statement based on narrow experience.
I have a feeling much of the selling of the NoSQL approach comes from the assumption of large databases. The larger the database is, the more you must groom your workflow to support the larger datasets. NoSQL seems to be better at supporting that grooming effort. Thus, the larger the database, the more important NoSQL's features can potentially be.
To use your example: in SQL, querying by country is just as slow as the NoSQL scan of all users unless you explicitly tell SQL to index the users table by country. NoSQL can do the same, where you create an ordered key-value collection that is the index (just like SQL does under the hood) and maintain it.
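A minimal sketch of that SQL side, using Python's built-in sqlite3 module (the table, column, and index names are invented for illustration):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT, country TEXT)")
# Without this index, the WHERE clause below falls back to a full table scan.
con.execute("CREATE INDEX idx_users_country ON users(country)")

con.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("u1", "Ada", "DE"),
    ("u2", "Bob", "US"),
    ("u3", "Eva", "DE"),
])

print(con.execute("SELECT name FROM users WHERE country = ?", ("DE",)).fetchall())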
The difference? SQL engines had the concept of indexing the table built in. This means you got to do less work (all you had to do was add an index to the table). However, it also means you had less control. For most cases, that loss of control is acceptable, in exchange for the SQL engine doing the work for you. However, in massive datasets, you may want a different consistency model than the typical SQL ACID model. You may want to use the BASE model which supports eventual consistency. That could be very difficult in SQL, because the SQL engine is doing the work for you so it has to be done by the SQL engine's rules. In NoSQL, those layers are typically exposed, letting you hack at them.
A:
NoSQL is a rather vague term, since it basically covers all database systems which are not relational.
What you describe is a key-value store, which is a kind of database where a blob of data is stored under a key, and can be quickly looked up if you know the key. These databases are blazingly fast if you know the exact key, but as you say yourself, if you need to search or filter on multiple properties on the data, it will be slow and cumbersome.
Nobody in their right mind would claim that key-value stores can replace relational databases in general. However, there may be particular use cases where a key-value store is a good fit. Key-value stores are often used for caching, since you typically cache items by id, but you don't need to perform ad-hoc queries over caches. For example, the Stack Overflow site itself uses Redis (a key-value db) extensively, but only for output caching. The underlying canonical data is still persisted in a relational database.
So the answer is pretty obvious: use a key-value store if you only need to store and look up values by a single key. Otherwise use a different kind of database. And if you are in doubt, use a relational database, since this is the most versatile kind of database, while NoSQL databases are often optimized towards very particular use cases.
|
167 Mich. App. 611 (1988)
423 N.W.2d 289
ZIELINSKI
v.
SZOKOLA
Docket No. 94476.
Michigan Court of Appeals.
Decided April 4, 1988.
Morrison & Moss (by Gregory W. Finley), for plaintiffs.
Kohl, Secrest, Wardle, Lynch, Clark & Hampton (by Simeon R. Orlowski), for defendants Szokola.
Sullivan, Ward, Bone, Tyler, Fiott & Asher, P.C. (by William J. Leidel), for defendant City of Livonia.
*614 Before: J.H. GILLIS, P.J., and GRIBBS and J.C. TIMMS,[*] JJ.
J.C. TIMMS, J.
In this slip and fall case, plaintiffs appeal as of right from orders granting summary disposition in favor of the City of Livonia pursuant to MCR 2.116(C)(7) and (10) and in favor of defendants Szokola and Julius Barber Shop pursuant to MCR 2.116(C)(8) and (10). We affirm.
On Monday, March 1, 1982, at 8:00 A.M., plaintiff Anthony J. Zielinski had his hair cut at defendant Julius Szokola's barber shop by defendant Irving Szokola. Plaintiff was Szokola's first customer of the day. The weather was clear and cold with a light dusting of snow covering the ground. As plaintiff was walking to the parking lot after his haircut, he slipped and fell on the public sidewalk. He suffered a broken ankle which required surgery to correct. He was on crutches for four or five months and was off work for approximately seven months. As a result, he instituted the instant action.
At his discovery deposition, plaintiff testified that he did not see any defects in the sidewalk at the time of his fall. Upon closer examination, he did observe a two-foot-square patch of ice on the sidewalk under the snow.
In his discovery deposition, Szokola described the condition of the sidewalk as pitted. While there was no expert testimony regarding the cause of the pitting, Szokola believed it resulted from his salting of the sidewalk over the twenty-nine years he had been at his business location. He described the pit holes as being at a maximum of a couple inches in diameter and one-half-inch deep. He recognized that ice sometimes formed in the pits. According to him, at the time of plaintiff's accident, *615 ice had formed on the smooth part of the sidewalk as well as in the pitted portion. He attributed the ice to the melting of ice and snow from a nearby snowbank and refreezing of the resultant water. He testified that he had not shoveled, swept, nor salted the sidewalk for several days prior to plaintiff's fall.
I
The general rule with regard to the liability of a municipality or property owner for injuries sustained by a licensee as a result of icy conditions is stated in a doctrine known as the natural accumulation doctrine. The doctrine provides that neither a municipality nor a landowner has an obligation to a licensee to remove the natural accumulation of ice or snow from any location. Hampton v Master Products, Inc, 84 Mich App 767; 270 NW2d 514 (1978); Taylor v Saxton, 133 Mich App 302; 349 NW2d 165 (1984).
The natural accumulation doctrine is subject to two exceptions. The first exception provides that liability to a licensee may attach where the municipality or property owner has taken affirmative action to alter the natural accumulation of ice and snow and, in doing so, increases the hazard of travel for the public. Woodworth v Brenner, 69 Mich App 277; 244 NW2d 446 (1976); Mendyk v Michigan Employment Security Comm, 94 Mich App 425; 288 NW2d 643 (1979); Hampton, supra. To establish liability under this doctrine, a plaintiff must prove that the defendant's act of removing ice and snow introduced a new element of danger not previously present. Morton v Goldberg, 166 Mich App 366; 420 NW2d 207 (1988). Weider v Goldsmith, 353 Mich 339; 91 NW2d 283 (1958). Thus, for example, in Hampton, supra, the Village *616 of Yale was deemed liable for injuries suffered by a pedestrian who slipped and fell while trying to walk over a snowdrift that had resulted from road plowing undertaken by the village two days earlier. This Court correctly concluded that the snowbank was an unnatural accumulation.
Just as a brief aside, Mendyk, supra, represents an example in which the unnatural accumulation exception was carried to an extreme. There, the plaintiff slipped and fell on a public sidewalk abutting an MESC building and suffered injuries. The sidewalk had been twice shoveled and salted. The Court of Claims rejected the plaintiff's argument that she had fallen on an unnatural accumulation of ice and entered a verdict of no cause of action. This Court reversed and remanded for a new trial, holding that the MESC's act of salting the sidewalk could have caused an unnatural accumulation of ice. As the Court explained:
In the instant case the sidewalk abutting the MESC office had twice been shoveled and salted on the morning of plaintiff's slip and fall. It is the presence of this salt on the snow that plaintiff claims caused it to melt and freeze on the sidewalk. If this is true, then the action of the MESC in salting the sidewalk increased the plaintiff's hazard from one involving her trudging through snow to one involving her walking on ice. It is clear that her chances of injury from the latter are greater than her chances of injury from the former. [94 Mich App 435.]
While the panel in Mendyk did not grant a judgment in favor of the plaintiff, but instead only held that the issue of the MESC's liability should be submitted to the factfinder, the impact of the case is nonetheless staggering. Implicit in the Court's *617 ruling is the holding that all slip and fall cases caused by icy conditions resulting from salting should go to the factfinder. While liability may not always be found, the defendant must nonetheless suffer the cost of defending the action.[1]
Fortunately, this Court recently tempered the holding of Mendyk. In Morton, supra, this Court reversed an order denying the defendant drug store's motion for a directed verdict. This Court held that, in order to recover on a slip and fall case, the plaintiff had to prove more than just that defendant had salted earlier in the day and the plaintiff had slipped on ice. To recover, the plaintiff had to establish a causal connection between the defendant's actions and an increased hazard on the sidewalk. This Court felt that the defendant's actions in salting and clearing the snow decreased the danger to pedestrians. The Court politely ignored the holding in Mendyk, noting that the opinion did not compel a different result.
The second exception to the natural accumulation doctrine provides that liability may arise where a party takes affirmative steps to alter the condition of the sidewalk itself, which in turn causes an unnatural or artificial accumulation of ice on the sidewalk. Buffa v Dyck, 137 Mich App 679, 682-683; 358 NW2d 918 (1984).
*618 II
The natural accumulation doctrine applies only to injuries suffered by a licensee. It does not apply to situations involving an invitee injured on private property. With respect to an invitee, the landowner has an obligation to take reasonable measures within a reasonable time after the accumulation of snow to diminish the hazard of injury. Quinlivan v The Great Atlantic & Pacific Tea Co, Inc, 395 Mich 244; 235 NW2d 732 (1975). The specific standard of care, i.e., whether salt or sand should be used in addition to clearing the snow, is a fact question for the jury. Clink v Steiner, 162 Mich App 551; 413 NW2d 45 (1987).
In Clink, a newspaper delivery person was injured when he slipped and fell on the defendant's driveway at 4:00 A.M. while delivering the morning Detroit Free Press. While the driveway had been cleared of snow, the runoff from thawing snow had frozen on the driveway. The plaintiff sued and the circuit judge granted the defendant's motion for summary disposition, MCR 2.116(C)(10).
On appeal, this Court reversed, ruling that, although the defendant had cleared the driveway, a jury question existed as to whether the defendant should have used salt or sand in addition to shoveling. 162 Mich App 551, 556-557. In so ruling, the panel relied heavily on Lundy v Groty, 141 Mich App 757; 367 NW2d 448 (1985). There, an invitee, the defendant's cleaning person, slipped and fell on a driveway that had not been cleared of snow. It was uncontested that a snow storm had begun the previous night and that snow was still falling when the plaintiff arrived at noon.
We find it curious that the Clink panel, in holding that a jury question existed as to whether the defendant should have sanded or salted his *619 driveway in addition to having shoveled it, would rely so extensively on a case where the plaintiff slipped during a snow storm on a driveway that had not yet been cleared. While the holding in Clink might support the result in Lundy, the converse does not appear to be true. Such is the nature of the cases in this area of law.
The panel's holding in Lundy is curious in its own right. The panel reversed the circuit judge's order granting the defendant's motion for summary disposition. In doing so, it ruled:
In the instant case, defendant would owe plaintiff a duty because she should know that snow was falling on her property and that it would create a dangerous condition for the elderly plaintiff. The general standard of care would require defendant to shovel, salt, sand or otherwise remove the snow from the driveway.
... The specific standard of care in the instant case would be the reasonableness of defendant's actions regarding the snow. Whether it was reasonable to wait for the snow to stop falling before she shoveled or whether salt or sand should have been spread in the interim is a question for the jury. [141 Mich App 760-761.]
The Lundy holding is curious because the panel relied almost exclusively on Quinlivan, supra, where our Supreme Court merely held that liability might arise for a supermarket's failure to clean its parking lot for several days following a snowfall. We believe that this Court has strayed significantly from the reasonable holding in Quinlivan.[2]
*620 III
The instant case presents a hybrid situation between the natural accumulation doctrine and the higher standard of Quinlivan, as it involves a business invitee who was injured on a public sidewalk abutting the business. This precise fact circumstance was addressed in Elam v Marine, 116 Mich App 140; 321 NW2d 870 (1982), where the plaintiff sustained injuries after falling on a public sidewalk outside a business. This Court affirmed the grant of summary judgment, ruling that property owners have no obligation to maintain public sidewalks free from ice and snow even where the owner is a business invitor and the person injured is an invitee.[3] See also, Morton v Goldberg, supra.
We believe Elam and Morton are controlling in this case. Since plaintiff was injured on a public sidewalk, he can only recover if he can establish an exception to the natural accumulation doctrine.
IV
Relying on Mendyk, supra, plaintiff argues that the ice on which he slipped resulted from defendant's past salting practices. However, plaintiff has a new twist to his argument. He is not claiming that the ice formed from the refreezing of melted snow.[4] Instead, he claims the ice on which he slipped resulted from the accumulation of water in the pits in the sidewalk, which in turn were *621 formed by defendant Szokola's salting of the sidewalk.[5]Buffa v Dyck, supra.
The issue thus becomes twofold: (1) whether the accumulation of ice in one-half-inch-deep pit holes is a natural or unnatural accumulation and (2) whether one-half-inch-deep pit holes in a sidewalk constitute a failure on the part of the City of Livonia to maintain the sidewalk in reasonable repair so it is reasonably safe and convenient for public travel.
We conclude that the accumulation of ice in the pit holes of the sidewalk resulting from the refreezing of melted ice and snow is an accumulation from natural causes. We further hold that defendant Szokola would not face liability even had the ice accumulated as a result of the refreezing of a salted surface. We do not view the application of salt to an icy surface as the introduction of a new hazard. Weider, supra. Salting does not create a hazard, instead it only alleviates, albeit temporarily, a hazard that already existed. For this reason, liability should not attach merely because the powerful forces of nature reassert themselves and a salted surface refreezes.
We further hold as a matter of law that the City of Livonia was not under a duty to replace the sidewalk merely because ice accumulated in the one-half-inch-deep depressions that had formed. Any other holding would place an impossible burden on the already overburdened municipalities of this state. We do not now decide at which point a *622 municipality would face liability for depressions in which ice has accumulated, "for it is here where we enter into the realms of reasonable men differing, and of `tape measure justice.'" Ingram v Saginaw, 1 Mich App 36; 133 NW2d 224 (1965).
We note, however, that in a different context our Supreme Court has adopted a two-inch rule for injuries caused by holes in sidewalks. The rule states that, as a matter of law, a municipality is not liable for injuries to a pedestrian caused by a hole in a sidewalk that is less than two inches in depth. Ingram v Saginaw, 380 Mich 547; 158 NW2d 447 (1968). Presumably, the accumulation of ice in a hole exceeding two inches in depth would give rise to liability against the municipality. Thus, for example, in Pappas v Bay City, 17 Mich App 745; 170 NW2d 306 (1969), Judge HOLBROOK dissenting, this Court ruled that a depression in a sidewalk measuring 2 3/8 inches below the curb and in which water accumulated and froze might give rise to liability against the city. Today we hold only that, as a matter of law, the accumulation of ice in a depression measuring one-half-inch deep in a public sidewalk is not an unreasonably dangerous condition which would subject the municipality to the possibility of liability.
The grant of summary disposition pursuant to MCR 2.116(C)(10) in favor of both defendants is affirmed. Our disposition renders unnecessary the resolution of whether the trial judge's grants of summary disposition under MCR 2.116(C)(7) and (8) were improper. Defendants are entitled to costs as permitted in MCR 7.219.
Affirmed.
GRIBBS, J., concurs in the result only.
NOTES
[*] Circuit judge, sitting on the Court of Appeals by assignment.
[1] The holding of Mendyk is curious for two reasons. First, the case penalizes the homeowner or business person who uses salt. The lesson of the case is that the liability-conscious landowner or business person should never use salt to alleviate icy conditions, for salt melts snow which in turn refreezes and forms ice. And once salt is applied, the landowner or business person faces the fear that the ice that forms on the sidewalk arose from the refreezing of a salted surface rather than from natural thawing and refreezing. But see Morton v Goldberg, 166 Mich App 366; 420 NW2d 207 (1988). Second, the Mendyk panel incorrectly relies on the "invitor's rigorous duty of care owed to an invitee" as somehow being important to the case. This is nonsense. Since the plaintiff fell on public property, her status as an invitee is irrelevant. See Morton v Goldberg, supra; Elam v Marine, 116 Mich App 140; 321 NW2d 870 (1982).
[2] Apparently, the import of Clink and Lundy is that, during winter months, Michigan homeowners should cancel home delivery of their newspaper and should order the post office to halt home delivery of mail. Instead, the prudent homeowner should walk to the supermarket to buy a newspaper and to the post office for the mail so that those parties will face liability in case the homeowner falls.
[3] Interestingly, one of the cases cited to support this rule was Mendyk. Mendyk is factually similar to Elam and to the present case. Yet in Mendyk, the panel's holding was premised both on an unnatural accumulation of ice and on the higher duty owed by a business invitor to an invitee. Since the plaintiff in Mendyk slipped on a public sidewalk, we fail to see how her status as an invitee is pertinent.
[4] Defendant testified that he had not swept or salted the sidewalk prior to the accident since the sidewalk had been cleared of snow for several days prior to the accident.
[5] This case presents yet another reason why the prudent, liability-conscious homeowner or business person will never apply salt to an icy surface. If salt is applied, it may melt the snow and ice, resulting in an unnatural accumulation of ice, see Mendyk, supra, or it might damage the sidewalk, resulting in an unnatural accumulation, as in the present case. Either way, a lawsuit will have to be defended against. From the case law, one can only conclude that salt is akin to sin.
|
Coronavirus spike protein: a major viral determinant in interspecies transmission {#sec0005}
=================================================================================
Coronaviruses (CoVs) are large, enveloped, positive-sense, single-stranded RNA viruses that can infect both animals and humans [@bib0005]. The viruses are further subdivided, based on genotypic and serological characters, into four genera: *Alpha-, Beta-, Gamma-*, and *Deltacoronavirus* [@bib0010], [@bib0015]. Thus far, all identified CoVs that can infect humans belong to the first two genera. These include the alphacoronaviruses (alphaCoVs) hCoV-NL63 and hCoV-229E and the betacoronaviruses (betaCoVs) HCoV-OC43, HKU1, SARS-CoV, and MERS-CoV [@bib0005], [@bib0020], [@bib0025]. Special attention has been paid to betaCoVs, which have caused two unexpected coronaviral epidemics in the past decade [@bib0030]. In 2002--2003, SARS-CoV first emerged in China and swiftly spread to other parts of the world, leading to >8000 infection cases and ∼800 deaths [@bib0030]. In 2012, a novel CoV, named MERS-CoV, was identified in the Middle East [@bib0020], [@bib0025]. The virus managed to spread to multiple countries despite intense human interventions, causing 1110 infections and 422 related deaths as of 29 April 2015 (<http://www.who.int/csr/disease/coronavirus_infections/archive_updates/en/>). Both SARS-CoV and MERS-CoV are zoonotic pathogens originating from animals. They are believed to have been transmitted from a natural host, possibly originating from bats, to humans through some intermediate mammalian hosts [@bib0035], [@bib0040]. Thus, determining how these viruses evolved to cross species barriers and to infect humans is an active area of CoV research.
The key determinant of the host specificity of a CoV is the surface-located trimeric spike (S) glycoprotein, which can be further divided into an N-terminal S1 subunit and a membrane-embedded C-terminal S2 region [@bib0005]. S1 specializes in recognizing host-cell receptors and is normally more variable in sequence among different CoVs than is the S2 region [@bib0005], [@bib0045]. Two discrete domains that can fold independently are located in the S1 N- and C-terminal portions, both of which can be used for receptor engagement [@bib0050]. The N-terminal domain (NTD), functioning as the entity involved in receptor recognition, is exemplified by murine hepatitis virus (MHV), which utilizes carcinoembryonic antigen cell-adhesion molecules (CEACAMs) for cell entry [@bib0055], [@bib0060]. In most CoVs, however, the receptor-binding domain (RBD) is found in the S1 C-terminus [@bib0050], [@bib0065], [@bib0070], [@bib0075], [@bib0080], [@bib0085]. In such cases, the NTD might facilitate the initial attachment of the virus to the cell surface by recognizing specific sugar molecules [@bib0090], [@bib0095], [@bib0100], [@bib0105]. The S1--receptor interaction is therefore a key factor determining the tissue tropism and host range of CoVs.
Following receptor binding via S1, the CoV S2 functions to mediate fusion between the viral and the cellular membranes [@bib0005]. With characteristics of type I fusion proteins, CoV S2 normally contains multiple key components, including one or more fusion peptides and two conserved heptad repeats (HRs), driving membrane penetration and virus--cell fusion [@bib0005]. The fusion peptides are proposed to insert into, and perturb, the targeted membranes [@bib0110], [@bib0115]. The HRs can trimerize into a coiled-coil structure and drag the virus envelope and the host cell bilayer into close proximity, preparing for fusion to occur [@bib0120], [@bib0125], [@bib0130], [@bib0135], [@bib0140]. It is notable that the CoV S protein is commonly cleaved by host proteases to liberate S2 and the fusion peptides from the otherwise covalently-linked S1 subunit. This so-called priming process is highly dependent on the spatiotemporal patterns of the host enzymes, which is another key factor affecting cell tropism and the entry route of CoVs [@bib0145].
In this review, we first summarize the features of the S protein, the receptor-binding characteristics, the priming cleavage process, and the interspecies transmission mechanisms of SARS-CoV. Previous research on these topics has made SARS-CoV one of the best studied natural models of a viral disease emerging from zoonotic sources. Special attention will then be paid to MERS-CoV, focusing on the progress of the research made in the past several years regarding each of these items. We also retrospectively review several recent studies on bat coronaviruses (BatCoVs), which could implicate a zoonotic origin of MERS-CoV.
The SARS-CoV S glycoprotein, its cleavage priming and interaction with ACE2, and viral interspecies transmission {#sec0010}
================================================================================================================
SARS-CoV S is a 1255-residue glycoprotein; it is suggested to be cleaved either between R667 and S668 by trypsin, or between T678 and M679 by endosomal cathepsin L, into S1 and S2 subunits [@bib0150], [@bib0155], although the functional relevance of T678 in virus--cell fusion remains to be fully investigated. Several important modules in both S1 and S2 have been systematically characterized thus far ([Figure 1](#fig0005){ref-type="fig"}A,B). The SARS-CoV RBD is found in the C-terminal portion of S1, which spans ∼220 amino acids ([Figure 1](#fig0005){ref-type="fig"}A). It is composed of two subdomains: a core and an external subdomain [@bib0065]. The core has a center β-sheet composed of five antiparallel strands, which are further surrounded by the polypeptide loops connecting the strands and several surface helices, together forming a globular fold. The external region consists mainly of two small β-strands and a large interstrand loop and is located distally to the terminal side of the domain. A portion of the interstrand loop extends extensively over the surface of the core subdomain, and, together with the two β-strands, anchors the external region to the core like a clamp ([Figure 1](#fig0005){ref-type="fig"}B). It is interesting that one structure of the free SARS-CoV RBD unexpectedly revealed the possible dimerization of the protein through its terminal side [@bib0160]. The biological relevance of this structural observation, however, remains to be investigated. The authors suggest that RBD dimerization might cross-link S trimers on the viral surface, thereby affecting virus stability and infectivity. Despite systematic structural studies on SARS-CoV RBD, the structure of the SARS-CoV S NTD is still not known. It should be noted that this NTD, unlike its counterparts in bovine coronavirus (BCoV) or HCoV-OC43 [@bib0100], [@bib0105], cannot recognize sugar moieties on mucin [@bib0060].

Figure 1. Severe acute respiratory syndrome coronavirus (SARS-CoV) spike features. **(A)** Schematic representation of the SARS-CoV spike protein (S). The individual components of S that were either experimentally characterized in previous studies -- including receptor-binding domain (RBD), fusion peptide (FP), internal fusion peptide (IFP), heptad repeat 1/2 (HR1/2), and pretransmembrane domain (PTM) [@bib0065], [@bib0135], [@bib0175] -- or are based on bioinformatics analyses, for example, N-terminal domain (NTD), are marked with the boundary-residue numbers listed below. The S1/S2 cleavage sites and the S2'-recognition site are highlighted. Other abbreviations: SP, signal peptide; TM, transmembrane domain; and CP, cytoplasmic domain. **(B)** Atomic structures of SARS-CoV spike RBD, FP, IFP, HR1/HR2 complex, and PTM (from left to right). The crystal structures of RBD (core subdomain in green and external subdomain in magenta) and the six-helix bundle fusion core (consisting of three HR1/HR2 helical hairpins in green, cyan, and magenta, respectively) are shown as ribbons, while the solution NMR structures of FP, IFP, and PTM are contoured using the electrostatic surface. **(C)** The complex structure between SARS-CoV RBD and its receptor ACE2. The core and external subdomains of RBD and the N- and C-terminal lobes of ACE2 are colored green, magenta, cyan, and orange, respectively. **(D)** The amino acid interactions at the RBD--ACE2 interface.
According to a previous study [@bib0065], this binding network involves at least 18 residues in the receptor and 14 residues in SARS-CoV RBD, which are listed and connected with solid lines. Black lines indicate van der Waals contacts, and red lines represent H-bond or salt-bridge interactions.
To enter host cells, SARS-CoV needs to first bind to the cell-surface receptor ACE2 [@bib0165] via the viral RBD [@bib0065]. ACE2 is a type I membrane glycoprotein and contains a large N-terminal ectodomain built of two α-helical lobes [@bib0065], [@bib0170]. The complex structure of SARS-CoV RBD bound to ACE2 revealed that the viral RBD utilizes its external subdomain to exclusively engage the N-terminal lobe of the receptor ([Figure 1](#fig0005){ref-type="fig"}C). Residues 424--494 (which are also referred to as the receptor-binding motif or RBM because they make all of the contacts with the receptor) in the RBD external region present an elongated and gently concave outer surface, cradling the most N-terminal helix in ACE2. In addition, the two ridges of this RBM further interact with the receptor by contacting the α2/α3 interhelical loops on one side and a β-hairpin and a small helix on the other [@bib0065]. The buried surface area upon complex formation is 927.8 Å^2^ in the SARS-CoV RBD and 884.7 Å^2^ in ACE2, respectively. The interface involves at least 18 residues in the receptor and 14 residues in RBD, forming a network of hydrophilic contacts that are suggested to predominate in the RBD/ACE2 interactions ([Figure 1](#fig0005){ref-type="fig"}D) [@bib0065].
After binding to ACE2, fusion between the SARS-CoV envelope and the host cell membrane is executed by the S2 subunit. Multiple fusion-related components in SARS-CoV S2 have been extensively studied thus far ([Figure 1](#fig0005){ref-type="fig"}A,B). These include the fusion core composed of HR1 and HR2 [@bib0135], [@bib0140] and at least three membranotropic regions that are denoted as the fusion peptide (FP), internal fusion peptide (IFP), and pretransmembrane domain (PTM), respectively [@bib0175]. The two HR modules are separately dispatched in S2 and are separated from each other by ∼200 residues. They form a coiled-coil structure built of three HR1--HR2 helical hairpins ([Figure 1](#fig0005){ref-type="fig"}B) [@bib0135], [@bib0140], presenting as a canonical six-helix bundle, as observed in other typical type I fusion proteins such as HIV gp41 [@bib0180] and Ebola GP [@bib0185]. The HR regions are further flanked by the three membranotropic components. Both FP and IFP are located upstream of HR1, spanning residues 770--788 and 873--888, respectively, while PTM is distally downstream of HR2 and directly precedes the transmembrane domain of SARS-CoV S. All of these three components are able to partition into the phospholipid bilayer to disturb membrane integrity [@bib0190], and their structural features have recently been elucidated [@bib0175]. FP assumes an α-helical conformation but shows significant distortion at its center. In contrast, IFP exhibits a straight α-helical structure. PTM assumes a helix--loop--helix fold. It should be noted that all three components can create a hydrophobic side-surface ([Figure 1](#fig0005){ref-type="fig"}B), explaining their bilayer-binding capacities [@bib0175]. The exact role of these putative fusion peptides in virus--cell fusion, however, remains to be fully examined; for example, it is currently unknown whether FP, IFP, and PTM function individually or in a synergistic manner. The evolutionary reservation of these hydrophobic amino acid sequences in SARS-CoV S highlights their potential participation in the viral entry process.
The priming process of SARS-CoV S by host proteases is likely one of the best characterized so far for viral envelope proteins. Indeed, the proteolytic activation mechanisms are summarized in several excellent reviews [@bib0145], [@bib0195], [@bib0200]. What has been astonishing is that this viral protein can be primed via a diverse array of proteases. Due to the lack of a furin-recognizable site, SARS-CoV S is largely uncleaved after biosynthesis [@bib0150]. It can be later processed by endosomal cathepsin L during entry, enabling SARS-CoV infection via the endocytosis pathway [@bib0205]. In addition, the viral S can also be activated by extracellular enzymes such as trypsin, thermolysin, and elastase, which are shown to induce syncytia formation and virus entry, possibly at the plasma surface [@bib0210]. Other proteases that are of potential biological relevance in potentiating SARS-CoV S include TMPRSS2, TMPRSS11a, and HAT [@bib0215], [@bib0220], [@bib0225], which are localized on the cell surface and are highly expressed in the human airway [@bib0230]. It is also noteworthy that TMPRSS2 can associate with ACE2 to form a receptor--protease complex, enabling efficient virus entry directly at the cell surface [@bib0235]. Echoing the important role of TMPRSS2 in SARS-CoV infection, a recent study further indicated that serine proteases (e.g., TMPRSS2) but not cysteine proteases (e.g., cathepsin L) are required for SARS-CoV spread *in vivo* [@bib0240]. Furthermore, TMPRSS2 as well as other host enzymes, such as HAT and ADAM17, are also indicated in the shedding of human ACE2 receptor, which, in turn, was shown to promote the uptake of virus particles [@bib0245], [@bib0250]. Remarkably, SARS-CoV S also contains an S2′ cleavage site downstream of the S1/S2 boundary [@bib0255], [@bib0260], [@bib0265]. This second cleavage event is believed to be crucial for the final activation of S, and the sequence directly C-terminal to S2′ displays characteristics of a viral-fusion peptide and plays an important role in mediating fusion [@bib0270]. It is still unknown how the cleavage of S at S1/S2 or S2′, the insertion of the fusion peptides into target membranes, and the assembly of HR regions are combined together as concerted events to complete membrane fusion (e.g., whether these events occur following specific spatiotemporal patterns). It should be noted that SARS-CoV FP, which spans residues 770--788, would be separated from the HR regions after proteolytic cleavage at S2′. This indicates a scenario of membrane fusion with chronological steps such that FP initially targets the host cell membranes to facilitate the following bilayer insertion of IFP, which remains conjugated with the HR regions after S2′ proteolysis. Such a scenario also highlights the importance of including multiple fusion peptides in SARS-CoV S for virus entry.
The interspecies transmission route of SARS-CoV is well established. Mounting evidence shows that the natural hosts of the virus are bats [@bib0275], [@bib0280], [@bib0285]. This notion was initially supported by the successful identification of SARS-like coronaviruses (SL-CoVs) in bats. Nevertheless, these viruses contain amino acid deletions in the S-RBM region and are unable to interact with human ACE2 [@bib0275], [@bib0280]. Recently, Ge *et al.* successfully isolated an infectious SL-CoV in Chinese horseshoe bats that shows far more sequence conservation in S to SARS-CoV than previously identified SL-CoVs do [@bib0280] and can recognize both bat and human ACE2 as the receptor [@bib0285], providing solid evidence for the bat origin of SARS-CoV. Palm civets and raccoon dogs were identified as the replication hosts for SARS-CoV [@bib0290], although it is still a matter of debate whether the virus is transmitted from bats to humans directly or via these intermediate animals. The ACE2 receptors of civets and raccoon dogs, however, can faithfully be recognized by SARS-CoV S [@bib0295], [@bib0300], [@bib0305]. Mouse ACE2 can also be utilized by SARS-CoV but with much less efficiency than the human receptor [@bib0310]. This is because the mouse receptor contains a Lys-to-His mutation at position 353 and is therefore devoid of a key hydrophilic interaction rendered by the lysine residue [@bib0065]. Rat ACE2 also harbors this K353H mutation. In addition, it has an extra glycosylation site at position 82. The linked carbohydrate moieties are proposed to sterically occlude binding of SARS-CoV RBD to the rat receptor [@bib0065]. In support of this, deletion of the glycan, together with the H353K substitution, restores RBD-binding to the rat receptor [@bib0315], [@bib0320]. In light of the inefficiency of SARS-CoV RBD in recognizing the mouse and rat receptors, it is unlikely that these two species are involved in the SARS-CoV zoonosis.
It is noteworthy that, of the 18 ACE2 residues interfacing with SARS-CoV RBD, multiple (≥7) amino acid substitutions are observed in the civet and raccoon receptors, in contrast to the receptors in other infection-permissive species [such as monkey (African green monkey), macaque, marmoset, hamster, and cat] (reviewed in [@bib0325]) that contain ≤4 mutations in the region ([Table 1](#tbl0005){ref-type="table"}). Furthermore, ferret ACE2 (with nine substitutions relative to the human homologue) was mutated for half of the interface residues ([Table 1](#tbl0005){ref-type="table"}) but can still be recognized by SARS-CoV S [@bib0330]. These observations indicate plastic RBD/ACE2 interactions which can 'tolerate' relatively large variations in the receptor. The inability of ACE2 of a certain species to function as the SARS-CoV receptor, therefore, likely arises from combinations of certain mutations. For example, the mutation incorporating a potential N-glycosylation site at N82 in conjunction with the K353H substitution in rat ACE2, but not a single M82N mutation as observed in hamster ACE2, abrogates the receptor's binding capacity for SARS-CoV S. It is also notable that ACE2s of different bat species behave differently regarding serving as the receptor for SARS-CoV [@bib0295]. ACE2 of Chinese rufous horseshoe bat *Rhinolophus sinicus*, but not that of Pearson's horseshoe bat *Rhinolophus pearsonii*, supports S-mediated SARS-CoV infection [@bib0295], although the receptor proteins of the two species both contain seven mutations in the RBD-interfacing region ([Table 1](#tbl0005){ref-type="table"}). The structural basis underlying this observed difference remains to be elucidated.

Table 1. Comparison among different species of the ACE2 residues interfacing with severe acute respiratory syndrome coronavirus (SARS-CoV) receptor-binding domain (RBD)^a^

| Species \ Position | 24 | 27 | 31 | 34 | 37 | 38 | 41 | 42 | 45 | 79 | 82 | 83 | 90 | 325 | 329 | 330 | 353 | 354 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Human | Q | T | K | H | E | D | Y | Q | L | L | M | Y | N | Q | E | N | K | G |
| African green monkey | Q | T | K | H | E | D | Y | Q | L | L | M | Y | N | Q | E | N | K | G |
| Macaque | Q | T | K | H | E | D | Y | Q | L | L | M | Y | N | Q | E | N | K | G |
| Marmoset | Q | T | K | H | E | D | H | E | L | L | T | Y | N | Q | E | N | K | Q |
| Hamster | Q | T | K | Q | E | D | Y | Q | L | L | N | Y | N | Q | E | N | K | G |
| Cat | L | T | K | H | E | E | Y | Q | L | L | T | Y | N | Q | E | N | K | G |
| Civet | L | T | T | Y | Q | E | Y | Q | V | L | T | Y | D | Q | E | N | K | G |
| Raccoon | L | T | N | N | E | E | Y | Q | L | Q | T | Y | D | Q | E | N | K | G |
| Ferret | L | T | K | Y | E | E | Y | Q | L | H | T | Y | D | E | Q | N | K | R |
| Mouse | N | T | N | Q | E | D | Y | Q | L | T | S | F | T | Q | A | N | H | G |
| Bat (*R. sinicus*) | R | T | E | S | E | N | Y | Q | L | L | N | Y | N | E | N | N | K | G |
| Rat | K | S | K | Q | E | D | Y | Q | L | I | N | F | N | P | T | N | H | G |
| Bat (*R. pearsonii*) | R | T | K | H | E | D | H | E | L | L | D | Y | N | E | N | N | K | D |
The S adaptation for binding to the human receptor is also well recorded for SARS-CoV. Comparison of the RBD sequences of SARS-CoV isolated from humans and civets revealed six residue-substitutions [@bib0335], among which three (at positions 472, 479, and 487, respectively) belong to the 14-interfacing-residue list ([Figure 1](#fig0005){ref-type="fig"}D). K479N and S487T mutations have been reported in several studies [@bib0320], [@bib0340], [@bib0345] as the key changes in adapting SARS-CoV RBD for the human receptor. S protein with the civet-specific K479 and S487 residues can efficiently recognize civet ACE2 but interacts with human ACE2 much less efficiently [@bib0320]. Substitution of these two amino acids with the human-specific N479 and T487, either individually or in combination, dramatically increases the affinity of S for the human receptor [@bib0320], [@bib0340]. This increased binding affinity is believed to be related to the elimination of unfavorable free charges at the interface upon mutation [@bib0350] and the extra contacts established by the methyl group of T487 [@bib0355]. Residue changes at other positions in the RBM might also be related to the SARS-CoV adaption. For instance, a virus bearing the civet S with the K479N mutation was passaged on human airway epithelial cells. Adaptive substitution occurred at residues 442 and 472, rather than at the 487 site identified in the epidemic strains [@bib0345]. The changes in SARS-CoV S required for interspecies transmission are also exemplified in two independent studies on mouse-adapted viruses. Two groups identified the same S-substitution at position 436, which is believed to be directly linked to the enhanced infectivity and pathogenesis in the murine host [@bib0360], [@bib0365].
MERS-CoV S, its cleavage priming and interaction with CD26, and viral interspecies transmission {#sec0015}
===============================================================================================
MERS-CoV S is composed of 1353 residues and displays a remarkably similar domain arrangement to its SARS-CoV homologue ([Figure 2](#fig0010){ref-type="fig"}A), although the overall sequence identity between the two viral proteins is rather limited. However, unlike SARS-CoV S, the MERS-CoV S protein can be readily processed into S1 and S2 subunits upon expression [@bib0370], [@bib0375], [@bib0380]. In S1, the receptor-recognizing RBD is localized to the C-terminal portion, spanning ∼240 residues [@bib0080], [@bib0085], [@bib0385]. These amino acids fold into a structure consisting of two subdomains, as reported in the SARS-CoV equivalent. The core subdomain presents remarkable similarities to that of the SARS-CoV RBD, but the external subdomain is structurally distinct from the SARS-CoV RBD external region and comprises mainly four antiparallel β-strands ([Figure 2](#fig0010){ref-type="fig"}B). In S2, the HR regions are also well studied [@bib0130], [@bib0390]. As expected, the HR1 and HR2 of MERS-CoV also form an intra-hairpin helical structure that can trimerically assemble into a six-helix bundle ([Figure 2](#fig0010){ref-type="fig"}B), demonstrating a canonical membrane-fusion mechanism as reported for other type I fusion proteins [@bib0120]. These studies provide insight into the characteristics of MERS-CoV S. Nevertheless, other S-components of this novel CoV remain largely uninvestigated. For example, it is still unknown whether the RBD-preceding NTD of MERS-CoV S1 might similarly fold into a galectin-like structure (as in MHV [@bib0060]) and function to facilitate the initial viral attachment to the cell surface by recognizing certain sugar molecules (as in BCoV and HCoV-OC43 [@bib0100], [@bib0105]). In addition, the S2 fusion peptides of MERS-CoV must also be experimentally investigated, although similar concentration of hydrophobic residues to the SARS-CoV FP, IFP, and PTM can be individually identified in the equivalent regions of MERS-CoV S ([Figure 2](#fig0010){ref-type="fig"}B).Figure 2Middle East respiratory syndrome coronavirus (MERS-CoV) spike features. **(A)** Schematic representation of the MERS-CoV spike protein. The boundaries for the individual components, as well as the S1/S2 and S2' cleavage sites, are marked. Abbreviations: SP, signal peptide; NTD, N-terminal domain; RBD, receptor-binding domain; FP, fusion peptide; IFP, internal fusion peptide; HR1/2, heptad repeat 1/2; PTM, pre-transmembrane domain; TM, transmembrane domain; and CP, cytoplasmic domain. Question marks highlight the fusion peptides (FP, IFP, and PTM) of MERS-CoV that still await structural and functional characterization. **(B)** Crystal structures of the MERS-CoV spike RBD and HR1/HR2 fusion core. Left panel: the RBD structure with its core subdomain highlighted in green and external subdomain in magenta. Middle-left panel: a structural superimposition between MERS-CoV RBD (core and external subdomains in green and magenta, respectively) and severe acute respiratory syndrome coronavirus (SARS-CoV) RBD (in gray). Middle-right panel: the fusion core structure with the three HR1/HR2 chains in green, cyan, and magenta, respectively. Right panel: sequence comparison between SARS-CoV and MERS-CoV highlighting the spike regions of SARS-CoV FP, IFP, and PTM, respectively. Important hydrophobic residues are marked in boxes. **(C)** The complex structure between MERS-CoV RBD and the receptor CD26/DPP4. 
MERS-CoV RBD is colored as in panel (B), and the receptor is highlighted in cyan for the β-propeller domain and in orange for the α/β-hydrolase domain, respectively. The inter-blade helix referred to in the text is marked. **(D)** Atomic binding-network between MERS-CoV RBD and CD26 [@bib0080]. The RBD--CD26 interface includes 13 amino acids from the receptor and 18 residues from the virus RBD, which are individually connected with either black lines, for van der Waals contacts, or red lines, for H-bond or salt-bridge interactions. The CD26 residue N229 contributes to the RBD-binding via its linked sugar moieties rather than directly engaging RBD, and is therefore highlighted in yellow.
MERS-CoV initiates human infection by first specifically interacting with its receptor CD26 (also known as dipeptidyl peptidase 4 or DPP4) [@bib0395]. CD26 is a membrane-bound peptidase with a type II topology and can form homodimers on the cell surface [@bib0400], [@bib0405], [@bib0410]. Its ectodomain structurally comprises two domains, an α/β-hydrolase domain and an eight-bladed β-propeller [@bib0405], [@bib0410]. The MERS-CoV RBD specifically recognizes, via its external subdomain, the β-propeller of the receptor for engagement ([Figure 2](#fig0010){ref-type="fig"}C) [@bib0080], [@bib0085]. The four external β-strands of the RBD create a relatively flat surface to interact with the propeller blades IV and V. Large surface areas of 1203.4 Å^2^ in CD26 and 1113.4 Å^2^ in MERS-CoV RBD are buried to form an extended binding interface [@bib0080], in which 13 residues of the receptor and 18 amino acids of the RBD play important roles in the binding by providing either H-bond/salt-bridge interactions or multiple van-der-Waals contacts ([Figure 2](#fig0010){ref-type="fig"}D). Among these, a strong network of hydrophilic contacts is created mainly with the interface-residue side-chains. In addition, a small hydrophobic depression in RBD further cradles the bulged inter-blade helix in the receptor, which presents several apolar side-chains ([Figure 2](#fig0010){ref-type="fig"}C). Finally, the RBD and CD26 binding also involves a receptor-linked carbohydrate entity interacting with several solvent-exposed residues in the RBD ([Figure 2](#fig0010){ref-type="fig"}D), drawing parallels between MERS-CoV and the alphaCoV porcine respiratory coronavirus. The latter also recognizes a sugar component in the receptor [@bib0075]. What has been unexpected regarding the MERS-CoV binding to CD26 is its competitive interference with the interaction between CD26 and adenosine deaminase (ADA), which has been suggested to deliver an important costimulatory signal in immune activation [@bib0400]. A majority of the CD26 residues interfacing with MERS-CoV RBD are also shown to engage ADA [@bib0080], [@bib0085], [@bib0415].
The host proteases involved in the priming of MERS-CoV S have also been extensively studied. A pioneering study demonstrated that MERS-CoV S, unlike its SARS-CoV counterpart, can be efficiently cleaved after biosynthesis in HEK-293T cells [@bib0370]. It was recently demonstrated that furin cleaves at R751/S752, separating S into the S1 and S2 subunits [@bib0380]. In addition, a second furin cleavage site (S2′) was identified in S2, between R887 and S888, upstream of the putative fusion peptide that likely corresponds to SARS-CoV IFP ([Figure 2](#fig0010){ref-type="fig"}A) [@bib0380]. With mounting evidence showing that processing at S2′ is an essential determinant of the intracellular site of fusion [@bib0420], a two-step activation mechanism for MERS-CoV entry [@bib0380] has been proposed, such that the first cleavage occurs between S1 and S2 during secretion of the S protein through the endoplasmic reticulum (ER)-Golgi compartments, where furin is localized, and the second at S2′ during virus entry into target cells. Other proteases reported to be involved in MERS-CoV S-activation include TMPRSS2 [@bib0370], [@bib0425], TMPRSS4 [@bib0430], and endosomal cathepsin B and/or L [@bib0370], [@bib0425]. It is noteworthy that MERS-CoV, similar to SARS-CoV, might use different activation pathways for cell entry depending on the spatiotemporal patterns of the host priming enzymes [@bib0435]. For example, the presence of TMPRSS2 or trypsin treatment can bypass the endosomal entry pathway and initiate membrane fusion at the cell surface [@bib0425], [@bib0435].
The cross-species transmission route of MERS-CoV remains incompletely understood. Nevertheless, mounting evidence indicates that the virus is a zoonotic pathogen that likely originated in bats and was then transmitted to other animals (dromedary camels). Despite several studies documenting interhuman transmission of MERS-CoV [@bib0440], [@bib0445], a large portion of the infection cases cannot be directly linked to contact with index patients. The genome diversity of human MERS-CoV isolates is highly suggestive of human infections arising from several independent zoonotic events from animal reservoirs [@bib0450], [@bib0455]. The dromedary camel has thus far been well documented as an intermediate host. Both MERS-CoV-specific antibodies and RNAs can be detected in dromedary sera and milk [@bib0460], [@bib0465], [@bib0470], and live viruses were recently isolated from infected camels [@bib0475]. Additional direct evidence of dromedary-to-human transmission comes from the isolation of MERS-CoVs with almost identical genomic sequences from patients and from their breeding dromedaries [@bib0480], [@bib0485]. Viral gene fragments identical or closely similar to those of MERS-CoV have also been recovered from bats [@bib0490], [@bib0495], [@bib0500], raising again the possibility that bats act as the natural reservoir of MERS-CoV. An evolutionary analysis of bat CD26 genes indicates a long-term arms race between bats and MERS-related CoVs, suggesting that MERS-CoV ancestors circulated in bats for a substantial period of time [@bib0505]. It is also interesting to note that a recent study indicates that MERS-CoV may have jumped from bats to camels up to 20 years ago in Africa, with the camels then being imported into the Arabian peninsula [@bib0510].
Cells (primary cells or cell lines) derived from multiple species have been investigated for susceptibility to MERS-CoV infection. The results show that cells of rhesus macaque, marmoset, goat, horse, rabbit, pig, civet, camel, and bat -- but not of mouse, hamster, and ferret -- are permissive to MERS-CoV replication [@bib0435], [@bib0515], [@bib0520], [@bib0525], [@bib0530], [@bib0535], [@bib0540], [@bib0545], [@bib0550]. Focusing on the 13 receptor residues identified as key interface amino acids, it is noteworthy that the receptor in the permissive species is either identical to the human receptor or varies from it by only one or two residues, whereas the receptor in the resistant species is more variable, showing multiple (≥5) substitutions ([Table 2](#tbl0010){ref-type="table"}). The inability of MERS-CoV to infect mouse, hamster, and ferret should therefore be attributed to the inability of the virus to recognize the CD26s of these species, which contain too many mutations in the RBD-binding region. In support of this, expression of a hamster CD26 in which the variant residues are substituted with the equivalent human amino acids renders otherwise nonpermissive baby hamster kidney (BHK) cells susceptible to MERS-CoV infection [@bib0545]. These results demonstrate that the binding capacity of MERS-CoV RBD is a key factor determining host susceptibility to MERS-CoV infection. It has yet to be determined whether dog and cat, which clearly belong to the second group, are resistant to the virus. It would also be of interest to interrogate the 13-residue list in the future for the amino acid combinations that are minimally required for interaction with MERS-CoV RBD.

Table 2. Comparison among different species of the CD26 residues interfacing with Middle East respiratory syndrome coronavirus (MERS-CoV) receptor-binding domain (RBD)[^2]

| Species | 229 | 267 | 286 | 288 | 291 | 294 | 295 | 317 | 322 | 336 | 341 | 344 | 346 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Human | N | K | Q | T | A | L | I | R | Y | R | V | Q | I |
| Macaque | N | K | Q | T | A | L | I | R | Y | R | V | Q | I |
| Marmoset | N | K | Q | T | A | L | I | R | Y | R | V | Q | I |
| Cattle | N | K | Q | V | G | L | I | R | Y | R | V | Q | I |
| Horse | N | K | Q | T | A | L | I | R | Y | R | V | Q | I |
| Goat | N | K | Q | V | G | L | I | R | Y | R | V | Q | I |
| Pig | N | K | Q | V | A | L | I | R | Y | R | V | Q | I |
| Camel | N | K | Q | V | A | L | I | R | Y | R | V | Q | I |
| Sheep | N | K | Q | V | G | L | I | R | Y | R | V | Q | I |
| Rabbit | N | R | Q | T | A | L | I | R | Y | R | V | Q | I |
| Bat (*Pipistrellus*) | N | K | Q | T | A | L | T | R | Y | K | V | Q | I |
| Cat | N | K | E | T | A | L | T | R | Y | K | A | E | I |
| Dog | N | K | E | S | L | L | T | R | Y | -- | S | K | I |
| Ferret | N | K | E | T | D | S | T | R | Y | S | E | E | T |
| Hamster | N | K | Q | T | E | L | T | R | Y | T | L | Q | V |
| Rat | N | K | Q | T | A | T | T | R | Y | V | T | E | I |
| Mouse | N | K | Q | P | A | A | R | R | Y | T | S | Q | V |
It should also be noted that sheep and bovine CD26s contain the same two residue variations as goat CD26 and have been shown to mediate MERS-CoV infection of BHK cells upon expression [@bib0545]. Nevertheless, another study demonstrated that cells derived from sheep and cattle are resistant to MERS-CoV [@bib0530], and accordingly, no MERS-CoV-specific antibodies were detected in the sera of 80 tested cattle and 40 sheep in an epidemiologic survey [@bib0465]. The discrepancy in these results might reflect differences in the priming-protease systems of sheep/cattle cells and BHK cells. Although MERS-CoV can recognize sheep/cattle CD26, the lack of appropriate proteases for S-activation would prevent membrane fusion and subsequent virus entry. The hamster-derived BHK cells, on the other hand, are able to prime MERS-CoV S and therefore become infection-permissive once they gain the capacity to interact with MERS-CoV RBD. A similar scenario is observed in mice, which can be effectively infected by MERS-CoV after ectopic expression of human CD26 in the animal [@bib0555]. Characterization across species of the spatiotemporal patterns of the enzymes that prime MERS-CoV S remains an interesting and as-yet-unresolved issue.
The changes in S related to MERS-CoV interspecies adaptation are thus far unknown. Several genetic analyses were recently conducted to characterize the evolutionary status of the virus since its identification in 2012. The results show that the MERS-CoV RBD has largely remained unchanged in sequence in the circulating viruses. In a study focusing on the human MERS-CoV strains, the authors demonstrate that only one codon of spike residue 1020 (located in S2) is under strong positive selection, despite the fact that the overall evolutionary rate of the virus is estimated to be 1.12 × 10^−3^ substitutions per site per year [@bib0560]. Several substitutions have also been detected in the S-RBM region of some MERS-CoV strains, including those at positions 482, 506, 509, and 534. Among these, only L506 plays an important role in CD26 binding ([Figure 2](#fig0010){ref-type="fig"}D). The identified L506F mutation, however, reduces the receptor-binding capacity and thereby impairs viral fitness [@bib0565]. It should be noted that artificial selection of escape mutants with MERS-CoV RBD-specific antibodies can lead to the same L506F substitution [@bib0565], raising the possibility that the naturally occurring residue change at this position is the consequence of host immune pressure rather than a result of evolution for a better affinity to CD26. Accordingly, none of the identified S-changes are observed in multiple genomes [@bib0560]. A second study analyzed the MERS-CoV sequences of the dromedary isolates and identified only the A520S substitution in the RBD [@bib0570]. Although this residue is located in the external subdomain, it does not directly contact the receptor. Therefore, it remains to be investigated whether any residue substitutions in the RBD occur naturally and can facilitate cross-species transmission of MERS-CoV by increasing the S affinity for human CD26. The current data indicate that the combination of the 18 RBD amino acids listed in [Figure 2](#fig0010){ref-type="fig"}D remains dominant in the circulating strains, both in humans and dromedaries. This seems to favor the notion that the present MERS-CoV RBM sequence represents one of the best CD26-interacting candidates. Residues that are determinant for MERS-CoV S preference for binding to CD26 of a certain species still await identification.
BatCoV HKU4 S protein interaction with CD26 and its implication for the bat origin of MERS-CoV {#sec0020}
==============================================================================================
A large number of coronaviruses have been recorded as having origins in bats (at least for their genomes) [@bib0575]. However, their public health relevance and/or evolutionary relatedness to the known human-infecting coronaviruses remain to be examined. BatCoVs HKU4 and HKU5 have recently drawn increasing attention due to their close phylogenetic relationship to MERS-CoV [@bib0580]. These CoVs were first identified as genomic sequences in 2005 in lesser bamboo bats and Japanese pipistrelles, respectively [@bib0585]. Though isolation of the infectious viruses has thus far been unsuccessful, mounting evidence indicates that these two viruses are still circulating in bats [@bib0590]. Recently, Yang *et al.* [@bib0595] and our group [@bib0375] concomitantly showed that BatCoV HKU4, but not HKU5, can recognize human CD26 as a functional receptor for cell entry. HKU4 S is composed of 1352 residues ([Figure 3](#fig0015){ref-type="fig"}A) and can readily interact with human CD26 [@bib0375]. But it does not contain a clear furin-recognition site [@bib0145] and is expressed as an intact protein in 293T cells, remaining uncleaved upon incorporation into the pseudoviral envelope. Accordingly, the BatCoV HKU4 pseudovirus was unable to infect cells expressing human CD26 [@bib0375]. But potential trypsin-cleavage sequences can be identified in two regions homologous to the S1/S2 and S2′ sites of other CoVs [@bib0145], and trypsin treatment indeed efficiently primes HKU4 S and leads to sufficient pseudoviral transductions [@bib0375]. These observations revealed the fact that the inability of HKU4 S to drive entry into human cells (and thus, potentially, to be transmitted to humans) is due to lack of priming and not to lack of receptor engagement, highlighting once again the indispensability of S cleavage in coronavirus infection. Despite lacking recognizable sites for furin, it remains to be investigated whether HKU4 S might be activated by any other commonly observed priming proteases, such as TMPRSSs and cathepsins. Special attention should be paid to virus variants that are more susceptible to protease cleavage by host enzymes other than trypsin.Figure 3Bat coronavirus (BatCoV) HKU4 spike features. **(A)** Schematic representation of the HKU4 spike protein. The listed component boundaries are mostly defined according to the bioinformatics analyses, except for the RBD which has been experimentally characterized [@bib0375]. The cleavage sites for S1/S2 and S2' were predicted based on the homology sequence comparison with other coronaviruses and are therefore labeled with question marks. Abbreviations: SP, signal peptide; NTD, N-terminal domain; RBD, receptor-binding domain; HR1/2, heptad repeat 1/2; TM, transmembrane domain; and CP, cytoplasmic domain. **(B)** Crystal structure of HKU4 RBD. The external and core subdomains are colored magenta and green, respectively. **(C)** Complex structure between HKU4 RBD and human CD26. The coloring scheme is: RBD core, green; RBD external, magenta; receptor β-propeller domain, cyan; and receptor α/β-hydrolase domain, orange. **(D)** The HKU4 RBD is suboptimal for CD26 interaction compared to Middle East respiratory syndrome coronavirus (MERS-CoV) RBD [@bib0375]. The 18 CD26-interfacing residues in MERS-CoV RBD, as listed in [Figure 2](#fig0010){ref-type="fig"}D, were individually compared with the equivalent amino acids in HKU4 RBD. The numbers highlight the van der Waals contacts each residue can provide for interacting with CD26. 
'>' indicates that the MERS-CoV residues are better adapted for CD26-binding, and conversely, '<' implies that the HKU4 amino acids are better adapted. The residue differences are highlighted with red arrows.
The RBD of BatCoV HKU4, which spans residues 372--611 ([Figure 3](#fig0015){ref-type="fig"}A), has also been structurally characterized [@bib0375]. It displays a fold that resembles the MERS-CoV RBD ([Figure 3](#fig0015){ref-type="fig"}B) and utilizes a conserved receptor-binding mode for interaction with CD26 ([Figure 3](#fig0015){ref-type="fig"}C). Interestingly, of the 18 identified CD26-interfacing residues in MERS-CoV RBD, 11 amino acids are mutated and 15 are suboptimal for receptor interaction in HKU4 RBD ([Figure 3](#fig0015){ref-type="fig"}D) [@bib0375]. Nonetheless, a pseudoviral infection assay demonstrates that HKU4 S is able to mediate virus entry, although less efficiently than MERS-CoV S. These results indicate that dramatic changes at this 18-residue interface do not necessarily abrogate the interaction between viral S and CD26, which in turn provides room for MERS-CoV and related viruses (e.g., BatCoV HKU4) to evolve to escape neutralizing antibodies targeting the RBM and to facilitate interspecies transmission. It is also notable that BatCoV HKU4 exhibits better binding capacity for bat CD26 than for human CD26 [@bib0595], whereas the converse CD26 preference has been reported for MERS-CoV [@bib0595]. This implies a common ancestor in bats for MERS-CoV and BatCoV HKU4 that divergently evolved for better interaction with the human and bat receptors, respectively. These studies also indicate the need for future surveillance of HKU4-related viruses for their cross-species potential.
It is notable that SARS-CoV seems to 'tolerate' large variations in the receptor (as illustrated by ferret ACE2, in which half of the interfacing residues are substituted). Small variations in the viral RBD (such as N479K and T487S), however, can lead to altered receptor-binding specificity, dramatically decreasing its affinity for human ACE2. In contrast, MERS-CoV likely only recognizes conserved CD26 sequences with a maximum of two mutations in the RBD-binding region. Nevertheless, the capacity for receptor engagement can still be preserved despite dramatic changes in the viral ligand (as demonstrated by HKU4 RBD). These differences could indicate different evolutionary and interspecies transmission routes for SARS-CoV and MERS-CoV, an interesting issue awaiting answers.
Concluding remarks {#sec0025}
==================
The emergence of two betaCoV-related epidemics in the past decade has revitalized CoV research, with a focus on the interspecies transmission mechanisms of these viruses. The CoV S protein is a key factor in determining viral tissue tropism and host range. Much progress has been made thus far regarding the features of S, the interaction of S with receptors, and the priming of S by host proteases. Although SARS-CoV represents one of the best-studied models, for which the cross-species transmission route has been well established, many questions related to MERS-CoV interspecies transmission remain unanswered ([Box 1](#tb0005){ref-type="boxed-text"}). These include, but are not limited to, the structure and function of the S NTD, the composition of the fusion peptides, the key determinants in S for CD26 interaction, and the virus/host interplay determining the entry route of the virus. Such questions should be systematically addressed in the future. It is also noteworthy that all current views on CoV S are built on discrete functional domains. An intact S structure is not available for any CoV, although a low-resolution electron-microscopy structure of SARS-CoV S has been reported [@bib0600], [@bib0605]. Obtaining a high-resolution structure of an intact S is therefore an issue deserving even higher priority ([Box 1](#tb0005){ref-type="boxed-text"}). In summary, this review focused on our understanding of the coronaviral S proteins to illustrate the basis of interspecies transmission of SARS-CoV, MERS-CoV, and beyond, knowledge that should help predict or prevent further transmission events.

Box 1. Outstanding questions

•The fusion peptides of MERS-CoV S still await structural and functional characterization. Could any of these fusion peptides be targeted by small molecules to inhibit virus infection?

•What will be revealed by systematic and comparative studies on the spatiotemporal characteristics of the enzymes potentially involved in MERS-CoV S-priming among different species?

•In the list of the 13 CD26 residues that interface with the MERS-CoV RBD, what residue combination(s) constitute the key component that is indispensable for RBD-binding? The answers to this and the previous point would enable us to predict the infection and transmission capacity of MERS-CoV in a specific species.

•Is the dromedary camel the only intermediate host of MERS-CoV, or are other animals also involved in the interspecies transmission of the virus from its natural host, possibly bats, to humans? Special attention should be paid to the livestock animals in the first group ([Table 2](#tbl0010){ref-type="table"}) whose CD26 receptors can be recognized by MERS-CoV, although no evidence of these animals being infected by MERS-CoV has come to light thus far. In addition, pets such as cats and dogs in the second group ([Table 2](#tbl0010){ref-type="table"}) are in close contact with humans and should be investigated to ensure that they do not carry MERS-CoV.

•What S-substitutions are involved in the interspecies adaptation of MERS-CoV? A large-scale genomic characterization of MERS-CoV isolates from humans and dromedaries, and of MERS-CoV-related viruses from bats, should be conducted, focusing on residue changes in the receptor-binding region, to determine whether there are any naturally occurring mutations that enhance or decrease binding capacity for human or camel CD26. It is of equal importance to identify, via artificial substitutions, the key residues determining the preference of MERS-CoV S for the CD26 of a certain species.

•What is the role of the SARS-CoV and MERS-CoV NTDs in virus infection? Do they share structural features with galectin, as reported in betaCoVs such as HCoV-OC43 and BCoV?

•What do we expect to observe at the atomic level in an intact S trimer? An intact S structure has not been solved for any CoV.
Work on coronavirus in the laboratory of G.F.G. is supported by the National Natural Science Foundation of China (NSFC, grant numbers 81461168030 and 31400154) and the China National Grand S&T Special Project (number 2014ZX10004-001-006). G.F.G. is a leading principal investigator of the NSFC Innovative Research Group (grant number 81321063). G.L. is supported by the Excellent Young Scientist Grant from the Chinese Academy of Sciences.
[^1]: The 18 residues in human ACE2 identified to interface with SARS-CoV RBD are listed and compared for conservation across different species. Residues that differ from the human sequence represent amino acid mutations at the corresponding positions, which are numbered according to human ACE2. The ACE2 receptors that can be recognized by the SARS-CoV S protein include those from human, monkey (African green monkey), macaque, marmoset, hamster, cat, civet, raccoon dog, ferret, mouse, and bat (*Rhinolophus sinicus*, *R. sinicus*), although the mouse and bat (*R. sinicus*) ACE2s are utilized inefficiently. The rat and bat (*Rhinolophus pearsonii*, *R. pearsonii*) receptors, however, cannot be used by SARS-CoV. Accession numbers: human (AY623811), monkey (AY996037), macaque (NM_001135696), marmoset (XM_008988993), hamster (XM_005074209), cat (NM_001039456), civet (AY881174), raccoon (AB211998), ferret (AB208708), mouse (EF408740), bat (*R. sinicus*) (GQ999936), rat (AY881244), bat (*R. pearsonii*) (EF569964).
[^2]: The 13 residues in human CD26 identified as key interfacing amino acids for MERS-CoV RBD binding are listed and compared for conservation across different species. Residues that differ from the human sequence represent amino acid mutations at the corresponding positions, which are numbered according to human CD26. Two groups can be identified: the first (permissive), including human, macaque, marmoset, cattle, horse, goat, pig, camel, sheep, rabbit, and bat, has accumulated small numbers (0--2) of mutations in the 13-residue list; whereas the second (resistant), comprising cat, dog, ferret, hamster, rat, and mouse, contains multiple (≥5) substitutions in the region. Accession numbers: human (NP_001926), macaque (NP_001034279), marmoset (XM_002749392), cattle (NM_174039), horse (XP_001494049), goat (KF574265), pig (NM_214257), camel (AHK13386), sheep (XP_004004709), rabbit (XP_002712206), bat (*Pipistrellus*) (AGF80256), cat (NP_001009838), dog (XP_535933), ferret (KF574264), hamster (XP_007608372), rat (NP_036921), and mouse (NP_034204).
|
Q:
Reading in Erlang the body of a HTTP request as it is received
I've been looking into Mochiweb, but I can't find a way to read the body while I'm receiving the request through the socket.
I'm not limited to Mochiweb, any other HTTP library would be good for me.
I also tried gen_tcp:listen(Port, [{packet, http}]); this way I can read the body/headers while I'm receiving the HTTP request, but then I have to handle the responses manually and keep the socket open for further requests myself, so I prefer not to use this solution.
My intention is to receive requests with large bodies and start reading/processing them without waiting for the full body to arrive.
A:
With mochiweb you can fold over chunks of the request body using Req:stream_body/3.
It expects a chunk handler function as the second argument. This handler is called with
{ChunkSize, BinaryData} and your state for every chunk, as it is received from the socket.
Example (retrieving a [reversed] list of chunks):
MaxChunkSize = 100,
InitialState = [],
ChunkHandler = fun ({_Size, Bin}, State) -> [Bin | State] end,
List = Req:stream_body(MaxChunkSize, ChunkHandler, InitialState),
...
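For context, here is a minimal sketch of where such a handler could sit inside a mochiweb loop. It assumes the classic parameterized-module Req API (the one Req:stream_body/3 belongs to); the module name and port are arbitrary:
-module(streaming_handler).
-export([start/0, loop/1]).

%% Start mochiweb with loop/1 as the request handler.
start() ->
    mochiweb_http:start([{port, 8080}, {loop, fun ?MODULE:loop/1}]).

loop(Req) ->
    MaxChunkSize = 100,
    %% Accumulate body chunks (in reverse order) as they arrive from the socket.
    ChunkHandler = fun({_Size, Bin}, Acc) -> [Bin | Acc] end,
    Chunks = Req:stream_body(MaxChunkSize, ChunkHandler, []),
    Body = iolist_to_binary(lists:reverse(Chunks)),
    %% Each chunk could instead be processed immediately inside ChunkHandler,
    %% which is the point when the body is too large to hold in memory.
    Req:respond({200, [{"Content-Type", "text/plain"}],
                 ["received ", integer_to_list(byte_size(Body)), " bytes"]}).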
|
Sunday, November 8, 2009
Sunday Soup: French Lentil Soup with Garlic Sausage
Last weekend I went to my favorite cooking store in San Diego, Great-News, to use my coupon that was about to expire. I looked at lots of cool gadgets and supplies, but the item that spoke to me most was this cookbook - Sunday Soup: A Year's Worth of Mouthwatering, Easy-to-Make Recipes by Betty Rosbottom. I love soup, I really do, but my repertoire is very limited, and I tend to stick with the standards. This book is a godsend, as it offers up lots of recipes, but best of all it breaks them down by season. So all the Fall Recipes are grouped together, followed by Winter and so on. That way you can make a soup using seasonal ingredients. And best of all, there is a Summer section full of cool (literally, as in "cold" and "chilled") soups to make.
The first soup I made is "French Lentil Soup with Garlic Sausage." I tried this one because I had the lentils on hand. The recipe calls for Puy lentils from France and I had some leftover from a Jamie Oliver recipe ("Pan-seared Scallops with Crispy Bacon and Sage, Puy Lentils and Green Salad" from The Naked Chef cookbook. It was a lot of work, but very tasty). I searched far and wide and eventually found a substitute for them at Whole Foods. I thought about getting the "real" version from an online store, but these worked out great.
I only made half of the recipe, but that was enough for 3 good-sized servings. The house smelt so good when the soup was simmering (it was a particularly cold night, so that probably made the whole experience that much more delicious). I used pork kielbasa along with the puy lentils (oh the fun of running around saying "pwee" and KIL-basa, like Mufasa from the Lion King) and the result was a hearty bean and sausage soup. It's not the prettiest soup around, but very tasty. Held up nicely the next day for lunch too.
I'll let you know how I progress with the rest of the recipes. Next up - Butternut Squash and Apple Soup with Cider Cream. Yum!
French Lentil Soup with Garlic SausageRecipe from Sunday Soup by Betty Rosbottom6 servings
Heat oil in a large pot (with a lid) over medium heat. When hot, add the carrots, onion and celery. Cook, stirring often, until the vegetables are just softened, for about 5 minutes. Add the garlic, sausage, and thyme, and cook 1 minute more.
Add the stock and bay leaves, and bring mixture to a simmer over high heat. Stir in the lentils, then reduce heat, cover, and cook at a gentle simmer until tender, for about 50 minutes.
Remove and discard the bay leaves. Remove the garlic pieces and transfer to food processor. Using a slotted spoon, strain 1/2 cup solids (vegetables and sausage) from the soup and puree them with garlic pieces in a food processor, or combine them with garlic pieces in a small bowl and smash with the back of a fork. Stir the pureed mixture into the pot; this will thicken the soup slightly. Taste soup and season with salt, as needed. (Soup can be prepared 1 day ahead; cool, cover and refrigerate. Reheat over medium heat.)
To serve, ladle the soup into 6 soup bowls and sprinkle with parsley over each serving. |
Q:
Advanced recyclerview adapters
I have been trying to find information about creating advanced RecyclerView adapters.
I need two versions of adapters for my application.
The first one is similar to the Google Keep checklist. When you are adding a new item to the list, it adds a new cell at the end of the list.
ex: create a product list or wishlist.
I can already create an adapter with multiple ViewTypes, but I can't find a solution to my problem (adding a new cell when a new item is added to the list).
The second one: I need a list with multiple types of layouts, e.g. Notes, Birthdays, Important, etc., with separators in between them.
And when I delete all Birthdays from the list, how can I delete the Birthday separator?
A:
To answer your question:
Adding a new cell at the bottom of the list when adding a new item
Set up a listener on a view in the item layout and add a new item to the end of the list.
e.g., for the Google Keep example, implement the code that adds a new item to the end of the list inside the EditText's TextWatcher onTextChanged method in the item layout (an adapter-side sketch of that step follows the code below).
inner class TaskViewHolder(mView: View) : RecyclerView.ViewHolder(mView) {
    // Note: the constructor parameter is mView, so use it for the lookup.
    val etTask: EditText = mView.findViewById(R.id.et_task)

    init {
        etTask.addTextChangedListener(object : TextWatcher {
            override fun afterTextChanged(p0: Editable?) {
            }

            override fun beforeTextChanged(p0: CharSequence?, p1: Int, p2: Int, p3: Int) {
            }

            override fun onTextChanged(p0: CharSequence?, p1: Int, p2: Int, p3: Int) {
                // add new item to the end of the list here
            }
        })
    }
}
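For the "add new item to the end of the list here" step, a minimal adapter-side sketch of what that could look like. The Task class, the list name, the layout id R.layout.item_task, and addEmptyTask() are illustrative assumptions, not part of the original answer:
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.EditText
import androidx.recyclerview.widget.RecyclerView

// Assumed data holder for one checklist row.
data class Task(var text: String)

class TaskAdapter(private val tasks: MutableList<Task>) :
    RecyclerView.Adapter<TaskAdapter.TaskViewHolder>() {

    class TaskViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val etTask: EditText = view.findViewById(R.id.et_task)
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): TaskViewHolder {
        val view = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_task, parent, false) // assumed row layout
        return TaskViewHolder(view)
    }

    override fun onBindViewHolder(holder: TaskViewHolder, position: Int) {
        holder.etTask.setText(tasks[position].text)
    }

    override fun getItemCount(): Int = tasks.size

    // Called (for example) from onTextChanged when the user starts typing in the last row:
    // append a new empty row and notify only that one position, so the insertion animates.
    fun addEmptyTask() {
        tasks.add(Task(""))
        notifyItemInserted(tasks.size - 1)
    }
}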
Removing the separator when all the items in a group have been deleted
You could check whether the list of items in the group is empty and set the visibility of the separator to GONE; an adapter-level alternative is sketched after the example below.
ex:
// here birthdays is ArrayList<Birthday>
if(birthdays.isEmpty()){
separator.visibility = View.GONE
}
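If the separator ("Birthdays", "Notes", etc.) is itself a row in the adapter rather than a view inside each item layout, another option is to drop the header row together with its entries in the backing list. A minimal Kotlin sketch, assuming a hypothetical sealed Row model (none of these names come from the original answer):
import androidx.recyclerview.widget.RecyclerView

// Illustrative heterogeneous row model: a section header ("separator") or a normal entry.
sealed class Row {
    data class Header(val title: String) : Row()                  // e.g. "Birthdays"
    data class Entry(val section: String, val text: String) : Row()
}

// Remove every entry belonging to `section` and its header, then refresh the adapter.
fun removeSection(rows: MutableList<Row>, section: String, adapter: RecyclerView.Adapter<*>) {
    rows.removeAll { row ->
        when (row) {
            is Row.Header -> row.title == section
            is Row.Entry -> row.section == section
        }
    }
    // Simplest correct refresh; DiffUtil or notifyItemRangeRemoved would be finer-grained.
    adapter.notifyDataSetChanged()
}
With this layout of the data, the separator disappears automatically when the last item of the group is removed, because the header row is deleted along with the entries.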
|
Planciusdalen
Planciusdalen is a valley in Gustav V Land at Nordaustlandet, Svalbard. It is a continuation of the bay Planciusbukta, and stretches southwards to Rijpfjorden. The valley is named after Dutch cartographer Petrus Plancius.
References
Category:Valleys of Svalbard
Category:Nordaustlandet |
Q:
SQL Ordering Results from Other Query
I have two tables:
`users`
`id` INT(11)
`name` VARCHAR(30)
`items`
`id` INT(11)
`name` VARCHAR(30)
`owner` VARCHAR(30)
And here is my problem: I'm trying to order the users table by the number of unique items that they have in the items table.
To get the unique items for a given owner:
SELECT * FROM `items` WHERE `owner`='".$ownername."' GROUP BY `name`
Running this query returns one row per unique item name for a given $ownername, so the row count gives the number of unique items.
What I'm trying to do is order the users table by the number of unique items that they have in the items table, and I'm not sure how I would do that.
A:
You can join, group and sort by count:
SELECT u.name
FROM users u
LEFT JOIN items i
  ON i.owner = u.name
GROUP BY u.name
ORDER BY COUNT(DISTINCT i.name) DESC
Or you can sort by the result of a subselect that counts the items per user:
SELECT u.name
FROM users u
ORDER BY
(SELECT COUNT(DISTINCT i.name) FROM items i WHERE i.owner = u.name) DESC
The latter you could also write slightly differently if you want to show the value too:
SELECT u.name,
(SELECT COUNT(DISTINCT i.name) FROM items i WHERE i.owner = u.name) AS item_count
FROM users u
ORDER BY
item_count DESC
|
Q:
How to add custom methods to ASP.NET WebAPI controller?
In an ASP.NET MVC WebAPI project, the following controller is created by default:
public class ValuesController : ApiController
{
// GET api/values
public IEnumerable<string> Get()
{
return new string[] { "value1", "value2" };
}
// GET api/values/5
public string Get(int id)
{
return "value";
}
// POST api/values
public void Post([FromBody]string value)
{
}
// PUT api/values/5
public void Put(int id, [FromBody]string value)
{
}
// DELETE api/values/5
public void Delete(int id)
{
}
}
But is it possible to add custom methods here so that they support GET/POST as well?
Thank you!
A:
You can use routing attributes such as Route (optionally with a RoutePrefix on the controller) together with an HTTP verb attribute.
[Route("ChangePassword")]
[HttpPost] // There are HttpGet, HttpPost, HttpPut, HttpDelete.
public async Task<IHttpActionResult> ChangePassword(ChangePasswordModel model)
{
}
The HTTP verb attribute, in combination with the route name, maps the request back to the correct method.
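As a fuller (hypothetical) sketch, the verb and route attributes can be combined with a RoutePrefix on the controller; the controller, model, and routes below are made-up examples, and Web API 2 needs config.MapHttpAttributeRoutes() in WebApiConfig for them to take effect:
[RoutePrefix("api/account")]
public class AccountController : ApiController
{
    // POST api/account/changepassword
    [Route("changepassword")]
    [HttpPost]
    public IHttpActionResult ChangePassword(ChangePasswordModel model)
    {
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        // ... perform the password change here ...
        return Ok();
    }

    // GET api/account/5/history
    [Route("{id:int}/history")]
    [HttpGet]
    public IHttpActionResult GetHistory(int id)
    {
        return Ok(new[] { "entry1", "entry2" });
    }
}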
A:
I am not sure I follow as you have GET and POST right there in your code, but in any case you have other options:
Option 1
First, you can configure your custom Routes in the App_Start folder in the WebApiConfig.cs file. Here is what I normally use:
// GET /api/{resource}/{action}
config.Routes.MapHttpRoute(
name: "Web API RPC",
routeTemplate: "{controller}/{action}",
defaults: new { },
constraints: new { action = @"[A-Za-z]+", httpMethod = new HttpMethodConstraint("GET") }
);
// GET|PUT|DELETE /api/{resource}/{id}/{code}
config.Routes.MapHttpRoute(
name: "Web API Resource",
routeTemplate: "{controller}/{id}/{code}",
defaults: new { code = RouteParameter.Optional },
constraints: new { id = @"\\d+" }
);
// GET /api/{resource}
config.Routes.MapHttpRoute(
name: "Web API Get All",
routeTemplate: "{controller}",
defaults: new { action = "Get" },
constraints: new { httpMethod = new HttpMethodConstraint("GET") }
);
// PUT /api/{resource}
config.Routes.MapHttpRoute(
name: "Web API Update",
routeTemplate: "{controller}",
defaults: new { action = "Put" },
constraints: new { httpMethod = new HttpMethodConstraint("PUT") }
);
// POST /api/{resource}
config.Routes.MapHttpRoute(
name: "Web API Post",
routeTemplate: "{controller}",
defaults: new { action = "Post" },
constraints: new { httpMethod = new HttpMethodConstraint("POST") }
);
// POST /api/{resource}/{action}
config.Routes.MapHttpRoute(
name: "Web API RPC Post",
routeTemplate: "{controller}/{action}",
defaults: new { },
constraints: new { action = @"[A-Za-z]+", httpMethod = new HttpMethodConstraint("POST") }
);
I use a combination of RESTful endpoints as well as RPC endpoints. For some purists, this is grounds for a holy war. For me, I use a combination of the two because it is a powerful combination and I can't find any sane reason not to.
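To make the RPC-style routes above concrete, a custom action is addressed simply by its method name; here is a hypothetical controller (names invented for illustration) that those routes would match:
public class ReportsController : ApiController
{
    // Matched by the "Web API RPC" route as GET {controller}/{action},
    // e.g. GET /Reports/Summary (prepend "api/" to the templates above if you want an /api prefix).
    [HttpGet]
    public IHttpActionResult Summary()
    {
        return Ok(new { generatedAt = DateTime.UtcNow });
    }

    // Matched by the "Web API RPC Post" route as POST {controller}/{action}, e.g. POST /Reports/Rebuild
    [HttpPost]
    public IHttpActionResult Rebuild()
    {
        // ... kick off the rebuild here ...
        return Ok();
    }
}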
Option 2
As the others have pointed out and as I myself am doing more of these days, use attribute routing:
[HttpGet]
[GET("SomeController/SomeUrlSegment/{someParameter}")]
public int SomeUrlSegment(string someParameter)
{
//do stuff
}
I needed a NuGet package for attribute routing to make this work (just search NuGet for "Attribute Routing"), but I think that MVC 5/WebAPI 2 has it natively.
Hope this helps.
A:
You could use attribute routing:
[Route("customers/{customerId}/orders")]
public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }
Some documentation to get you started:
http://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2
|
According to witnesses, Beard punched Tucker for unknown reasons and the boy ran to his home, just a few houses away, to tell his mother. When they returned to confront Beard, someone in a car allegedly screamed a derogatory term at Tucker’s mother, prompting the boy to chase the vehicle. Gunshots were fired and an unknown truck sped away. When the smoke cleared, Tucker was left bleeding to death in an alley with gunshot wounds to his head and thigh.
Neighbor Elizabeth Reeves, 47, ran to the alley after hearing Tucker’s mother’s scream, “My son! My son!” She discovered the young boy lying on the ground, with only his legs barely moving.
Beard was released from the Indiana Department of Correction last September after serving about two years of a six-year sentence for unlawful possession of a firearm by a serious felon.
According to online DOC records, the Marion County conviction was for a Class B felony, just below the most serious Class A felony category.
Beard appears to have a long criminal record, including a 1985 knife fight over a motorcycle that left another man dead and Beard critically injured. Beard and several others later were indicted on reckless homicide and other charges.
Beard also was sentenced to prison terms in 1989 for B-felony dealing drugs and in 1993 for D-felony possession of marijuana.
Beard had left the shooting scene on Sunday night before officers arrived, said officer Kendale Adams, a spokesman for the Indianapolis Metropolitan Police Department.
Tucker was transported to Riley Hospital for Children at IU Health, where he later died. Neighbors have already begun building a memorial for the boy 5 houses down from his home where he was shot. |
Francis Moncreiff
Hon. Francis Jeffrey Moncreiff (27 August 1849 – 30 May 1900) was a Scottish rugby union player, and Scotland's first captain, making him one of the first two captains in international rugby. He was capped on three occasions between 1871 and 1873 for Scotland.
Personal history
Moncreiff was born in 1849, the second son of James Moncreiff, 1st Baron Moncreiff of Tulliebole. He attended Edinburgh Academy. On 29 October 1880 he married Mildred Fitzherbert, daughter of Lt Colonel Richard Henry Fitzherbert.
Rugby career
On 27 March 1871, Moncreiff was selected to represent Scotland in the first international rugby union game and to captain the team. He played club rugby for Edinburgh Academicals.
Notes
References
Bath, Richard (ed.) The Scotland Rugby Miscellany (Vision Sports Publishing Ltd, 2007 )
Category:1849 births
Category:1900 deaths
Category:Scottish rugby union players
Category:Scotland international rugby union players
Category:People educated at Edinburgh Academy
Category:History of rugby union in Scotland
Category:Edinburgh District players |
Ahmad Bahar
Haj Sheikh Ahmad Bahar (1889 Mashad, Iran - 1957 Tehran, Iran) was an Iranian politician, a patriotic poet, prominent journalist, writer, publisher and farmer.
Literary career
He was one of the best students of the late Sheikh Abdoljavad Adib Neishaboori in Mashad in the field of Persian and Arabic literature. He was a journalist and started his own printing and publishing company in Mashad with a Heidelberg press purchased during an eventful trip to Europe. Together with his cousin Mohammad-Taghi Bahar (aka Sabouri), the famous poet and politician later known as Malek o-sho'ara Bahar, he edited his cousin's newspaper "Now Bahar", published in Mashad by his printing company from 1915 to 1917. The printing company remains in existence and produces the high-circulation daily newspaper "Khorasan" in the same premises; the paper is also available via the internet, reporting daily news for Iran's second-largest metropolitan region.
Bahar is known as one of the masters of patriotic and political poetry utilizing Khorasani Dialect.
Political career
Bahar and his cousin were founder members of the Democratic Party of Khorasan and contributed to the development of democratic values and encouraged the public to learn about Iran's national interests.
He was owner and editor of the "Bahar" influential newspaper that was published in Mashad during first world war and in Tehran during 2nd world war. He was invited by Ahmad Ghavam Ghavomolsaltaneh the Prime Minister to join Government service in 1941 as a Special Secretary to Prime Minister, as well as Press Secretary at the office of Prime Minister.
He continued the job of Special Secretary to many Prime Ministers including Dr. Mohammed Mossadeq. In addition to his old position during Mossadegh's Premiership and Nationalization of Oil Industry, he was promoted to be Chief of Staff of the Prime Ministerial Office too.
He was twice elected as member of parliament Majles from Mashad but on both occasions, the Imperial Court exercised its dictatorial power (i.e. Reza Shah and his Son Mohammad Reza Pahlavi) and he was not allowed to serve. On occasion of the popular and religious rise of people of Khorasan in summer of 1935, Bahar was accused of collaboration with organizers of this demonstration in Gowhar Shad Mosque and shrine of Imam Reza in Mashad and jailed for two Years and then exiled from Mashad to Tehran. Nineteen members of the so-called Islamic Revolutionary Council of Iran in 1979 were also prosecuted for having a role in the popular riot of Gowharshad Mosque.
Personal life
Bahar was a fourth generation descendant of Erekle II who was part of the Georgian Bagrationi dynasty. The Bahar family ancestry can thus be traced back over 1000 years.
Two of King Erekle's sons, who were also half brothers of King Erekle's Heir George XII of Georgia, Zorab and Alexander of Georgia were military leaders on behalf of the Persian Shah (King of Kings) Fat'h Ali Shah Qajar in the Russo-Persian War (1804-1813) for continuation of Iranian rule in Georgia and eventually lost and were brought to Iran by Abbas Mirza (who was the Crown Prince and Commander of Iranian Forces in Georgia).
Abbas Mirza asked King Fat'h Ali Shah to keep them honorably and give them jobs in the Imperial Court. The two brothers changed their names to Sohrab and Eskandar Mirza, and converted to Islam. Sohrab was appointed Court Cashier (Naghdi) and founded the Naghdi family surname in Iran. After a series of disappointments trying to regain the Georgian Kingdom, one of Eskandar Mirza Khan's children, Afrasiab Khan, converted to Islam and became a trader of stained glass products (from Russia) in the Tehran bazaar, and eventually moved to Mashad to be close to the Shrine of Imam Reza. Afrasiab Khan reportedly had a family, and his oldest son was Haj Abbas Gholi, who then had nine children. Gholi's four sons were Haj Sheikh Ali Asghar, Haj Sheikh Mohamad Kazem, Haj Sheikh Mohamad Ali (Moin-o-raia), and Haj Sheikh Mohamad Javad (who died suspiciously in the Majlis as the representative for Mashad). Gholi's daughters included Sakineh Tehranian (Sabouri), who was Mohammad-Taghi Bahar's (aka Malek-ol-shoara Bahar) mother. Ahmad Bahar was the oldest son of Haj Sheikh Mohamad Kazem and a cousin of Mohammad-Taghi Bahar (aka Malek-ol-Shoara Bahar). They served together in government and worked together on the Bahar newspaper.
Because of his move from Tehran to Mashad his surname along with his immediate family and most of his relatives in Mashad changed to Tehranian or Tehrani (which means from Tehran).
Bahar carried this surname until Reza Shah decreed that all citizens must have a registered surname (not common at the time in Iran). Sheikh Ahmad Tehrani (or Tehranian) then chose the new surname Bahar, the first registration of that surname in Iran, because of the good name of his newspaper; he was also known as Sheikh Ahmad Bahar in many official circles. His cousin Mohammad-Taghi Bahar, whose family name was Sabouri, also used the pen name "Bahar" and officially registered the surname Bahar in Tehran, because at that time Iranian law allowed only one surname of each type in each city.
Bahar had five sons and two daughters as follows: his first son, Habib Bahar, is a lawyer and was also a member of Iran's Majlis (Parliament) from Mashad. His 2nd son is Rashed Bahar, Agricultural Engineer and is now a Retired Officer for the World Health Organization. His 3rd Son Dr. Jalil Bahar is a retired Diplomat, for the Ministry of Foreign Affairs (Iran)). His 4th Son Mohammad Reza Bahar is a Retired Colonel of Traffic Police and served his last post as Chief of Metropolitan Traffic Police of Tehran. His 5th son is Dr. Kamal Bahar, a Pathologist and Immunologist (Tehran). His Daughters are Bahereh Bahar (Social Worker and retired Senior official of Tehran City Municipality), and Dr. Lili Bahar (Dentist in Tehran).
He died in Tehran in 1957 and was buried in the Ebn-e Babveih graveyard, close to the graves of Dr. Hossein Fatemi, the executed Foreign Minister of Dr. Mossadegh, and of the martyrs of the 30 Tir 1331 riot (21 July 1952) against the Shah and the Prime Minister, Ahmad Ghavam Ghavomolsaltaneh.
References
"Shenasnameh" (1990), (which means "Identity Card") A biography of Bahar's Political life and his poems collected by his 3rd Son Jalil Bahar and Majid Tafreshi was printed in Tehran
Fisher, William Bayne (1991), The Cambridge History of Iran. Cambridge University Press, .
Lang, David Marshall (1962), A Modern History of Georgia. London: Weidenfeld and Nicolson.
G. Bournoutian's biography of Prince Alexander of Georgia in Encyclopaedia Iranica.
E. Jassim's biography of Bahar in Encyclopaedia Iranica.
External links
khorasannews.com
iranica.com
Category:Members of the National Consultative Assembly
Category:Iranian publishers (people)
Category:Iranian writers
Category:1889 births
Category:1957 deaths |
Tuned filters have been employed for a number of years to decode scrambled or protected television signals. U.S. Pat. No. 5,168,251 discloses a notch filter, for example, that includes two separate electrically interconnected filter sections mounted on a common circuit board. Connections to the filter sections provided on the circuit board are made via a collet assembly and a terminal that are soldered to the circuit board. The two filter sections are magnetically isolated through an isolation area defined by an isolation shield. The common circuit board is placed within a filter housing having one open end and an integral connector located at the other end. An end cap is then attached to the open end of the filter housing with a press fit. The filter housing with attached end cap is then located within an outer sleeve by sliding the filter housing into an open end of the outer sleeve. A press-fit is commonly used as the securing mechanism to retain the filter housing within the outer sleeve.
It is important to seal the filter structure to prevent moisture and other contaminants from entering the filter. One particularly difficult area to properly seal is the interface between the collet and the filter housing. Conventional manufacturing techniques attempt to utilize relatively large amounts of sealant to cover over the entire back portion of the collet. This method, however, is somewhat difficult to work with and can result in inconsistencies in the quality of the manufactured filters. It would therefore be desirable to provide a collet assembly that could be quickly and easily sealed during the manufacturing process. |
Remote communication from a mobile terminal: an adjunct for a computerized intensive care unit order management system.
To develop and implement a fully mobile computer terminal that interfaces with our computerized intensive care unit (ICU) local area network and order management system. This system can provide access to the entire network and to order review and entry and during ICU bedside rounds. Descriptive report. Surgical ICU in Department of Veterans Affairs Medical Center. SYSTEM CONFIGURATION: A parallel local area network was configured for the remote mobile computer system. A proprietary remote transmission system (Altair II, Motorola) was used. This high-throughput system minimizes interference and errors by using licensed, nonshared, radiofrequency spectra. The resulting mobile system is an economical and time-efficient adjunct to an established ICU computerized network and order management system. Clinical working bedside rounds are now routinely conducted with the mobile terminal, providing immediate access to full network resources. |
Q:
.bat file to generate a backup and restore in PostgreSQL
I am creating .bat files to back up and restore a PostgreSQL database; I am using Windows 10 and PostgreSQL 9.4.
I run the following command to perform the backup:
set PGUSER=postgres
set PGPASSWORD=postgres123
"C:/Program Files/PostgreSQL/9.4/bin\\"pg_dump.exe --host localhost --port 5432 --format custom --blobs --verbose --file "D:\\bkp.sql" "dbsibcom"
It works perfectly.
And to perform the restore I have the following command:
set PGUSER=postgres
set PGPASSWORD=postgres123
"C:/Program Files/PostgreSQL/9.4/bin\\pg_restore.exe -i -h localhost -p 5432 -c -d "testrestore" -v " D:\\bkp.sql"
I create the .bat file and try to restore; it just opens and closes the DOS window quickly and does not work.
Is there something wrong, or is there another way to do the restore?
Do I have to grant some permission for PostgreSQL to access my backup file, or something like that?
Edit
I also tried these commands to perform the restore, and it does not work either:
set PGUSER=postgres
set PGPASSWORD=postgres123
C:/Program Files/PostgreSQL/9.4/bin\pg_restore.exe --host localhost --port 5432 --username "postgres" --dbname "testrestore" --verbose "D:\bkp.sql"
A:
Considering:
The database name as: minha_database
The backup file as: C:\bkp\backup_database.dump
PostgreSQL 9.1 installed at C:\Program Files (x86)\PostgreSQL
To do the backup:
C:\Progra~2\PostgreSQL\9.1\bin\pg_dump -h servidor -p 5432 -U postgres --inserts -c -f C:\bkp\backup_database.dump minha_database
To do the restore:
C:\Progra~2\PostgreSQL\9.1\bin\psql -U postgres -d minha_database -f C:\bkp\backup_database.dump
You can keep the set PGPASSWORD=postgres123 command so it does not prompt for the password and the execution can be automated.
You can put a pause command at the end of the .bat to see what error occurs.
In your environment, the path to the bin\ folder will be: C:\Progra~1\PostgreSQL\9.4\bin\. Use that path in the commands.
The dump file will be generated in SQL format, in plain text, without any protection/encryption.
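One caveat for the question as asked: the dump there was created with --format custom, which psql cannot read, so a restore .bat for that file would need pg_restore. A minimal sketch under that assumption (paths, database name, and credentials are the ones from the question; pause only keeps the window open so any error stays visible):
@echo off
set PGUSER=postgres
set PGPASSWORD=postgres123

rem Custom-format dumps (--format custom) are restored with pg_restore, not psql.
rem --clean drops existing objects first; the target database must already exist.
"C:\Progra~1\PostgreSQL\9.4\bin\pg_restore.exe" --host localhost --port 5432 --dbname testrestore --clean --verbose "D:\bkp.sql"

rem Keep the window open so any error message can be read.
pause
If the dump is instead produced as plain SQL (as in the commands above), swap pg_restore for psql -f.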
|
SARLEY: Walleye tournaments return
By STEVE SARLEY
When the World Walleye Association was established in 1999, there were high hopes. Its popularity quickly grew, but soon began to fade. Last year it was on its death bed.
Now, Joe Baron and Theresa Meade from northern Illinois have purchased the assets of the WWA and are restarting the tournament series with two circuits, one on Illinois’ Fox Chain O’ Lakes and another on Wisconsin’s Lake Winneconne system.
Baron is the former president of the AIM walleye circuit, and Meade is a respected and successful walleye tournament angler who had fished the WWA since its inception.
Baron told me, “We think that the WWA can be a success and grow in the future. Our entry fee is an affordable $180 per two-angler team. The events are one-day contests, Sundays only. You don’t have to take a week off of your job to fish the WWA tournaments. Best of all, these contests are perfect for local fishermen to compete close to home on waters that they are familiar with. We will stage a championship event in early 2014, too.”
Scott Duncan of Antioch, a touring pro and past AIM winner, is planning on fishing in the WWA. So is walleye legend Mike Gofron. He told me, “I learned how to fish for walleyes on the Chain. I’ve traveled so much over the past many years that I haven’t fished the Chain in a long time. That waterway has changed so much, this will be like I am the new guy on the Chain in these tournaments. I’ll have to get out there and try to figure it out again.”
The Fox Chain qualifying event dates are April 7, April 21, June 9, Sept. 15 and Oct. 6. Aug. 11 will feature a special Youth Day event.
The Winneconne system qualifying event dates are April 14, May 19, Aug. 4, Sept. 29 and Oct. 20. Sept. 8 will be Youth Day.
Winneconne also will be the site for a special “Woman’s Weekend” featuring a tournament and seminars.
In addition to prize money at each individual qualifying event, the team anglers in both divisions will compete for Cabela’s NTC spots, team of the year awards and 30 spots available for the World Walleye Association Championship in early 2014.
Follow the WWA on Facebook or visit the website at worldwalleye.com for additional information as it becomes available.
AIM event returns
Last summer, I was fortunate enough to fish as a co-angler at the Angler’s Insight Marketing Green Bay tournament out of Oconto. I wrote a couple of columns about the great learning experience I had. AIM has cut down to one event this year, but the one event is at Oconto again July 18 to 20.
Tournament entry fees are $1,200 for each Pro/Partner and $300 for each co-angler. Visit AIM at www.aimfishing.com for more information.
Petros in Crystal Lake
Hall of Fame angler Spence Petros will be at Dave’s Bait, Tackle and Taxidermy in Crystal Lake from 10 a.m. to 1 p.m. Saturday as part of the store’s 24th anniversary sale Friday through Sunday.
• Northwest Herald outdoors columnist Steve Sarley’s radio show, “The Outdoors Experience,” airs live at 5 a.m. Sundays on AM-560. Sarley also runs a website for outdoors enthusiasts, OExperience.com. He can be reached by email at sarfishing@yahoo.com. |
2. Register at WinilaCity at www.winilacity.com (so you would have a chance to win the Mac Book!)
3. Post a status message -
Win PancakeHouse GC from @ruthilicious! Like @WinilaCityPhilippines and register at www.winilacity.com for a chance to win Mac Book!
Make sure that the privacy setting is set to everyone so I can see your entry on my wall and verify your entry.
Also, make sure that you tag WinilaCity Philippines in the post.
4. Come back to this blog post and comment below:
Name:
Facebook Name:
Easy right?
Contest is up until April 18, 2011. I am extending the contest until Sunday, April 24th as I still see comments and entries coming! Like, share and post away!
Make sure that you are tagging correctly and it is viewable to EVERYONE. Don't forget to comment on the comment box below. For those who did it incorrectly, you still have a chance to correct your entries! |
Since opening in 2011, it has sent more than 15 unicorns out into the world, including Uber and Spotify. Today, it continues to attract tech entrepreneurs and high growth startups to its campus, but there’s a twist: it’s attracting big corporations, too. In fact, only 50 percent of RocketSpace’s business revolves around providing space and services for startups. The other 50 percent is all corporate innovation consulting, helping companies like AT&T, RBS, and JetBlue to ward off disruption.
It’s all part of founder Duncan Logan’s vision for his growing company: to be a tech ecosystem, where the office space they provide is merely a platform for the “curated community” and other services they offer to members.
“We spend a lot of time building relationships outside of our four walls,” said Logan, adding that their knack for getting startups funded has less to do with the space and a whole lot more to do with the relationships that RocketSpace has built with VCs and other industry players.
“We’re a relationship broker,” he said. “Corporates come to us because we have such great startups, but now the startups come here and you have immediate access to the C-Suite of big corporations.”
RocketSpace isn’t — forgive us — rocket science, but we think Logan is on to something. He’s offering an iteration of the coworking model that adds value for his company, its members, and for the property owner from whom he leases his spaces. And it’s not just because it’s San Francisco, and it’s not just because it’s tech (the valuations of the RocketSpace alumni network might be, but the success of the model isn’t). The key thing is that RocketSpace is focused on one industry and consistently nurtures the ecosystem that they’ve built around it.
But you don’t have to take it from us. We talked to Logan recently to find out more about his plans for RocketSpace, why he thinks the traditional coworking model is wrong, and what he’s going to do about it.
Image courtesy of RocketSpace.
Start by giving us a little bit of background on RocketSpace.
We started in January 2011. The whole concept was based around two things: the explosion of tech, [and that] RocketSpace is really a glorified coworking space. We call ourselves a tech campus because we only take tech companies. There are two conditions [for joining RocketSpace]: One, you must be a tech company. Two, you must have raised at least one round of finance.
So you’re not an incubator?
We’re more of a — once you’ve started the business, you’ve raised the money — we help accelerate. So I wouldn’t call us coworking. We’re a more specialized animal than that. Currently we have around 175 companies on campus. The maximum company size is around 75-80 people. [But] we’re totally focused on the quality of the company, not so much the volume.
That ties into the fact that only 50 percent of our business is around the startups and the space. The other 50 percent is corporate innovation consulting. And the reason the corporates are so interested in us is that in five years, we’ve had 16 unicorns, including Uber, Spotify, Leap Motion, Kabam, and Supercell.
Collectively the companies in RocketSpace have raised over $10 billion. It’s a pretty unique environment for high-powered startups.
So you also bring in large corporate companies looking to tap into startup talent?
Yes. These are well-known companies from banks, to retailers, to energy companies. Typically these corporations would have fended off disruption with their in-house R&D labs, but they can’t keep up. [For example], Airbnb is really a technology company, it’s not in the hospitality business. But by God, it’s disrupting the hospitality industry.
Image courtesy of RocketSpace.
Where does RocketSpace fit in on the office space as a service spectrum?
The space thing’s really interesting. When we started, we did a revenue share deal and, actually, ironically it’s one of the best deals we’ve ever done. It was good for us and good for the landlord. It made more sense: it’s more like a hotel model than a lease arbitrage model, which is what the coworking industry is set on.
There’s a spectrum of office space as a service and coworking is one aspect of it. There’s people storage, then WeWork, then something more specialized, like a RocketSpace, to finally something totally niche like Y Combinator.
I think a large chunk of the market — 10, 15, even 20 percent of the real estate industry — will move to some sort of office space as a service market. We haven’t even seen the tip of the iceberg. The news is that I think the model is wrong. The lease arbitrage model looks brilliant if there’s a ton of demand, but when you go through a cycle…
That’s the big risk with WeWork, right?
They’ve probably got deep enough pockets and investors that they’ll work through. But there are a lot of other people who are far thinner capital-wise who I think it will become increasingly hard for.
It’s sort of like Napster to iTunes. Napster proved that there’s a big market, but it wasn’t the right model.
Tell us more about your first building, and the revenue share model.
We started in 15,000 square feet and ended up in 45,000 [they expanded to occupy the full building]. Then we moved into two buildings in the Financial District. Both of those buildings were not temporary, but, my big thing with RocketSpace is that I’m not in any rush to roll out the wrong model. I think coworking has a pretty strong last mover advantage. If you’re a transatlantic airline and you see people changing from seats to beds, [there’s an advantage in] the ability to watch the other airlines, and then improve it: to be the last airline to have beds.
The coworking market is like the hotel market. It’s just not the case that because San Francisco’s got 20 hotels, we can’t build a hotel. Today RocketSpace occupies 55,000 square feet of space. We do have ambitions to grow out our real estate, but for us real estate is just a platform. Other coworking spaces say they want 500,000 square feet and X many members. That only represents a small chunk of our revenue and our ambitions. We’re moving into a different model in San Francisco, and we’re looking at this model in other locations, as well. We’ve done the math around the model and we now feel very confident [that it works].
How does it work? How is it different from the lease arbitrage model?
I think the closest cousin to coworking is the hotel industry. We’ll end up with a Marriott, a Hilton, a Four Seasons, and a Motel 6 version of coworking. Regus is now a house of brands. You can compare that to a Marriott house of brands. I think the angle of trying to be all things — a Four Seasons wanting to offer something to a traveling family and a high-powered executive — you do that, you struggle.
Image courtesy of RocketSpace.
Are you still doing a traditional lease on your space today?
Today we are, but probably won’t be moving forward. One of the things we did early on was hire a deep technical financial analyst — look at where the money flows, look at where the risk is, and really look at it from an investor standpoint. Today that’s why I think there’s a mismatch. Signing up on the lease arbitrage model, well, they think there’s risk, but there’s the deposit, etc. The deposit doesn’t really cover their risk.
You mentioned the “science” of coworking earlier — tell us more.
I think it’s driven by a focus on the ecosystem rather than a focus on filling desks.
I often say the difference between a coworking space and a commercial office is the same as the difference between a private jet and a commercial jet, where every square inch of fuselage has a revenue offer. There’s a way you build out the space differently. WeWork sees this, but others don’t.
For RocketSpace, we think of ourselves as an ecosystem, and the building’s the nucleus of the ecosystem. We spend a lot of time building relationships outside of our four walls. We could have an ecosystem manager — they’d spend most of the time outside the building. Whereas if you look at a traditional coworking space, they’ll say we need a community manager, looking inward, making sure everyone within the building is doing well. Driving the ecosystem is a broader way of making sure [everyone is doing well].
This is where it plays to our focus. We’re not trying to be everything to everyone. Our ability to get people funded because we have a symbiotic relationship with VCs; we’re a relationship broker between parties.
Corporates come to us because we have such great startups, but now the startups come here and you have immediate access to the C-Suite of big corporations. Again: relationship broker.
Then there’s the general interaction between [members]. Probably the biggest reason to go to Stanford is the people you meet there. Quality begets quality and that’s something we play into a lot.
You said the space is at the core of the ecosystem. How is it working? What types of space do you have? Is it evolving?
One of the beauties is that we’re a B2B business but eat and live with our customers day-to-day. Very few B2Bs have that opportunity. We’re very aware of change and demand. When we started, RocketSpace was a stepping stone to companies getting their own office. Typically, the companies were small, they were all startups. Over the course of the five years we’ve been in business, we’ve seen the size of the company get a lot bigger. They expect to be here for two to three years. They’re 30 person companies. A company might say they’ll have three key hubs in the U.S. — New York, Chicago, and LA — but then we’ll have a few spoke locations and we’re just going to use an office as a service provider. That’s what we would now call our “labs” product. For most companies under 10 people, they can live in that environment. Early stage startups are kind of like cults insofar as there is a leader with a vision and everyone works tirelessly to achieve that vision. But once you get over 10 people, they move from a cult to start having to think about culture. Culture comes in when everyone doesn’t have a close relationship with a CEO and that became tricky with an open floor plate, that’s when people start wanting private space. |
Introduction
============
Silver sulfide, a well-known type of direct and narrow band gap semiconductor,^[@cit1]^ has attracted considerable attention due to its good stability, low toxicity^[@cit2]^ and extensive potential applications in photovoltaic cells, photoconductors,^[@cit3]^ infrared detectors^[@cit4]^ and near-infrared imaging.^[@cit5]^ Besides a band gap of 0.9--1.1 eV for bulk α-Ag~2~S,^[@cit6]^ diminishing the size of Ag~2~S to the nanometer scale provides an efficient way to finely tune the band gap of this material based on the quantum confinement effect, thus resulting in many intriguing size-specific optical and optoelectronic properties.^[@cit7]^ In this regard, different synthetic methods^[@cit6]^ *e.g.* the microemulsion approach^[@cit8]^ and the hot injection method^[@cit9]^ have been comprehensively explored during the past two decades with the aim of achieving uniformly sized Ag~2~S nanocrystals. However, the use of exotic ligand- or surfactant-stabilized silver and sulfide ions or their precursors and the requirement for elevated temperature and high pressure in most cases made these synthetic protocols arduous. Obviously, if bulk Ag~2~S solid could be directly transformed into its nano-sized prototype, this would be a concise and ideal synthetic strategy. However, to the best of our knowledge, no reported method for the synthesis of silver chalcogenide nanomaterials involves such a direct transformation.
Recently, the synthesis of silver chalcogenide nanocluster compounds in crystalline form has been reported in the literature.^[@cit10],[@cit11]^ During the reaction between various silver thiolates (SR) and silylated sulfide sources in the presence of coordinative phosphane ligands, nanometer-sized Ag/S/SR silver clusters can be generated at room temperature, but amorphous Ag~2~S is obtained on increasing the reaction temperature. Such biased reaction pathways suggest a possible intermediate role of silver sulfide clusters in the synthesis of silver sulfide nano-objects. Inspired by this understanding, we envision that successful transformation of bulk Ag~2~S to silver sulfide nanomaterials could be initiated with the synthesis of polynuclear silver sulfide clusters directly from bulk Ag~2~S. A combination of top-down (from bulk Ag~2~S to silver sulfide clusters) and bottom-up (from silver sulfide clusters to nanomaterials) approaches may facilitate the bulk-to-nano transformation of Ag~2~S. However, bulk Ag~2~S is known for its very poor solubility (*K* ~sp~ = 8 × 10^--51^ at 25 °C), which makes it a formidable challenge to dissolve bulk Ag~2~S solid and complete the bulk-to-nano transformation by means of cluster intermediates. Very recently, we found that a new class of macrocycles, azacalix\\[*n*\\]pyridines (**Py\\[*n*\\]**), exhibited a positive allosteric effect upon binding with metal ions,^[@cit12]^ which significantly enhanced their affinity to a multimetallic aggregate and led to the formation of silver acetylide clusters by using slightly soluble polymeric \\[RC≡CAg\\]~*n*~ as starting materials.^[@cit13]^ We have thus conceived a synthetic strategy to implement the bulk-to-nano transformation of silver sulfide, as illustrated in [Scheme 1](#sch1){ref-type="fig"}. A particular polypyridine macrocyclic ligand **Py\\[*n*\\]** will be used to initially facilitate the formation of macrocycle-protected silver sulfide clusters based on its positive allosteric effect. Upon the interruption of coordination bonds between polypyridine ligands and silver atoms *via* protonation, the encapsulated silver sulfide clusters may mutually coalesce to finally produce silver sulfide nanoparticles, which can be stabilized by additional surfactants.
(Scheme 1){#sch1}
Results and discussion
======================
Considering the high efficiency of **Py\\[6\\]** (composed of six 1,3-pyridine rings bridged by six N--CH~3~ moieties)^[@cit14]^ in our previous synthesis of a silver ethynediide cluster-encapsulated supramolecular capsule,^[@cit13b]^ we firstly attempted to utilize this macrocyclic ligand to dissolve Ag~2~S solid. When Ag~2~S solid was added to a methanol solution of AgCF~3~SO~3~ (0.2 M), no obvious color change of the solution was observed. However, addition of **Py\\[6\\]** into the Ag~2~S--AgCF~3~SO~3~ mixture led to the appearance of a yellow color very quickly and a gradual decrease in the amount of Ag~2~S solid, suggesting the dissolution of Ag~2~S. It should be noted that silver triflate is essential for dissolving Ag~2~S solid, since mixing only Ag~2~S with **Py\\[6\\]** does not cause any color change. We next carried out ^1^H-NMR analysis of the reaction mixture of Ag~2~S--AgCF~3~SO~3~--**Py\\[6\\]**. In the ^1^H-NMR spectrum, there were three triplet peaks at 8.05 (Ha), 7.87 (Hb) and 7.60 (Hc) ppm corresponding to the pyridyl γ-protons of **Py\\[6\\]** ([Fig. 1a](#fig1){ref-type="fig"}). A typical downfield shift of these peaks relative to neat **Py\\[6\\]** (7.46 ppm for pyridyl γ-protons)^[@cit14]^ suggested the occurrence of coordination between **Py\\[6\\]** and silver ions. Furthermore, diffusion ordered spectroscopy (DOSY) exhibited two diffusion bands ([Fig. 1b](#fig1){ref-type="fig"}), implying the existence of two dominant assembled species in the reaction mixture. As shown in [Fig. 1b](#fig1){ref-type="fig"}, the species corresponding to the signal Hb (denoted as **A**) has a larger diffusion coefficient than the other species (denoted as **B**) accounting for the signals Ha and Hc. The diameter ratio of **A** to **B** in the classical spherical model was deduced as 0.80 based on their respective diffusion coefficients and the Stokes--Einstein equation, which agrees quite well with the ratio (0.78) of the measured distances between the two most separated points of the functional units in the crystal structures of **1** and **2** *vide infra*.
![(a) Partial ^1^H NMR spectrum (400 MHz, CDCl~3~ : methanol-d~4~ (v/v) = 1 : 1, 25.0 °C) and (b) DOSY spectrum of the reaction mixture of AgCF~3~SO~3~, Ag~2~S and **Py\\[6\\]**.](c4sc01884b-f1){#fig1}
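For reference, the size ratio quoted above follows from the standard Stokes--Einstein relation (a textbook result, not specific to this system): for spherical species diffusing in the same solvent at the same temperature,

$$D = \frac{k_{\mathrm{B}}T}{6\pi\eta r}, \qquad\text{so}\qquad \frac{d_{\mathbf{A}}}{d_{\mathbf{B}}} = \frac{r_{\mathbf{A}}}{r_{\mathbf{B}}} = \frac{D_{\mathbf{B}}}{D_{\mathbf{A}}},$$

*i.e.* the hydrodynamic diameter ratio is simply the inverse ratio of the measured diffusion coefficients, which is how the value of 0.80 for **A** relative to **B** is obtained from the DOSY data.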
The NMR titration experiment of **Py\\[6\\]** with silver triflate clarified that the species **A** was actually derived from the assembly of silver triflate and **Py\\[6\\]**, due to identical proton NMR spectra (see Fig. S1 in the ESI[‡](#fn2){ref-type="fn"}). Single crystals of the species **A** (denoted as crystalline complex **1**) for X-ray crystallographic analysis were deposited from the CH~3~OH--CH~2~Cl~2~ mixed solution of **Py\\[6\\]** and AgCF~3~SO~3~. As shown in [Fig. 2a](#fig2){ref-type="fig"}, the ratio of silver triflate to **Py\\[6\\]** in complex **1** was determined as 3 : 1, giving the formula of **1** as {Ag~3~(**Py\\[6\\]**)(CF~3~SO~3~)~3~(H~2~O)~0.5~}. The central silver atom Ag1 in **1** adopted a linear coordination geometry to bind with the two opposite pyridines of the **Py\\[6\\]**, thus causing the formation of a cage-like structure. This unimolecular folding fashion is similar to our previously reported scenarios for two larger macrocycles **Py\\[8\\]** and **Py\\[9\\]**.^[@cit12]^
![(a) Crystal structure of complex {Ag~3~(**Py\\[6\\]**)(CF~3~SO~3~)~3~(H~2~O)~0.5~} (**1**). Selected bond distances (Å): Ag1--N5 2.154(3); Ag1--N11 2.168(3); Ag2--N9 2.281(4); Ag3--N3 2.284(3). (b) Partial crystal structure of \\[Ag~5~S(**Py\\[6\\]**)\\](CF~3~SO~3~)~3~·CH~3~OH (**2**). Ag5 and Ag6 each have an occupancy ratio of 0.5. The three triflate groups on the top side of the \\[Ag~5~--S\\] cluster are omitted for clarity. Silver--aromatic π interactions are shown by dashed lines. Ag--C distances (Å): Ag1--C11 2.987; Ag1--C19 2.948; Ag2--C7 2.731; Ag2--C35 2.957; Ag3--C23 2.836; Ag3--C31 3.090. (c) Side view of complex \\[Ag~5~S(**Py\\[6\\]**)~2~\\](CF~3~SO~3~)~3~ (**3**) with the central silver sulfide cluster represented by a polyhedron. (d) Crystal structure of \\[Ag~12~S~2~(**Py\\[6\\]**)~2~\\](CF~3~SO~3~)~8~·H~2~O·CH~3~OH (**4**). Two silver atoms each have an occupancy ratio of 0.5. Triflate groups and solvent molecules are omitted for clarity. Color scheme for atoms: Ag, purple; C, gray; H, white; N, blue; S, yellow; F, cyan.](c4sc01884b-f2){#fig2}
The structural analysis of species **B** was complicated. High-resolution mass spectroscopy (HR-MS) of the Ag~2~S--AgCF~3~SO~3~--**Py\\[6\\]** yellow reaction mixture revealed two isotopically well-resolved peaks at *m*/*z* = 1258.9341 and 554.9862 corresponding to the \\[(CF~3~SO~3~)~2~Ag~3~(**Py\\[6\\]**)\\]^+^ and \\[(CF~3~SO~3~)Ag~3~(**Py\\[6\\]**)\\]^2+^ species (Fig. S2[‡](#fn2){ref-type="fn"}), confirming the existence of species **A**. In addition, several peaks corresponding to the species composed of a **Py\\[6\\]** macrocycle and a polynuclear silver sulfide cluster plus some silver triflate groups were found by HR-MS (Fig. S2[‡](#fn2){ref-type="fn"}). For example, a strong peak at *m*/*z* = 1762.5754 can be ascribed to the species \\[Ag~6~S(**Py\\[6\\]**)(CF~3~SO~3~)~3~\\]^+^ and the peak at *m*/*z* = 934.7313 is isotopically in good agreement with the species \\[Ag~7~S(**Py\\[6\\]**)(CF~3~SO~3~)~3~\\]^2+^. We then conducted a crystallization process by adding diethyl ether to the reaction mixture, and a crystalline compound with the formula of \\[Ag~5~S(**Py\\[6\\]**)\\](CF~3~SO~3~)~3~·(CH~3~OH) (**2**) was finally obtained. The crystal structure of **2** ([Fig. 2b](#fig2){ref-type="fig"}) comprises a central sulfur anion that is enclosed by six silver atoms, two of which (Ag5 and Ag6) each have an occupancy ratio of 0.5. This silver sulfide cluster is properly described as a \\[Ag~5~--S\\] aggregate, which is coordinated by a **Py\\[6\\]** macrocycle on one side and is further bonded to three triflate anions on another side. Argentophilic interactions^[@cit15]^ and silver--aromatic π interactions both play a significant role in the stabilization of such a \\[Ag~5~--S\\] cluster situated inside a **Py\\[6\\]** macrocycle. Interestingly, a similar crystallization process but with a longer crystallization time than that for complex **2** resulted in a new crystalline complex **3**, which has a formula of \\[Ag~5~S(**Py\\[6\\]**)~2~\\](CF~3~SO~3~)~3~ based on its crystal structure analysis. As shown in [Fig. 2c](#fig2){ref-type="fig"}, complex **3** also comprises a \\[Ag~5~--S\\] aggregate with *C* ~2~-axis symmetry, which is protected by two face-to-face **Py\\[6\\]** ligands to give a cluster-embedded supramolecular capsule. To the best of our knowledge, this is the first example of a discrete silver sulfide cluster with an inner penta-coordinated sulfide.^[@cit16]^ This \\[Ag~5~--S\\] structural motif is also consistent with the basic structural unit of bulk Ag~2~S.^[@cit17]^ But in contrast, the Ag--S bond lengths in **3** (2.369(3)--2.432(4) Å) are ∼0.2 Å shorter than the values for bulk Ag~2~S^[@cit17]^ and other silver sulfide clusters.^[@cit11],[@cit16],[@cit18]^ Another crystalline complex \\[Ag~12~S~2~(**Py\\[6\\]**)~2~\\](CF~3~SO~3~)~8~·H~2~O·CH~3~OH (**4**) was serendipitously acquired upon reducing the amount of **Py\\[6\\]** employed in the above synthetic procedure for complex **2**. As shown in [Fig. 2d](#fig2){ref-type="fig"}, the dumbbell-shaped \\[Ag~12~S~2~\\] silver sulfide cluster aggregate in **4** can be described as two single sulfide-centered cage-like silver clusters fused by sharing a middle silver atom. In addition, the upper and lower sides of this dumbbell-shaped silver cluster are enclosed by two **Py\\[6\\]** macrocycles similarly to **3**. 
The total seven silver atoms in the asymmetric unit of **4** share a refined site occupancy of six, since silver atom Ag6 is located at a position of special symmetry while Ag7 is disordered, and both have a refined site occupancy ratio of less than one (see ESI for details[‡](#fn2){ref-type="fn"}). Despite the structural difference of the silver sulfide clusters between complexes **2--4** both in nuclearity number and cluster configuration, the **Py\\[6\\]** macrocycles in **2--4** all adopt a similar quasi-*C* ~3v~ bowl-shaped conformation. Moreover, we substantiated that most silver sulfide cluster species in the reaction mixture of Ag~2~S, AgCF~3~SO~3~ and **Py\\[6\\]** were stabilized by one or two **Py\\[6\\]** macrocyclic ligands. In addition, based on the HR-MS and ^1^H-NMR data and the DOSY result that reflected a size ratio (0.80) comparable to the value of 0.78 in the crystal structures of **1** and **2**, we hypothesized that complex **2**, which is composed of a **Py\\[6\\]** macrocycle and a polynuclear silver sulfide cluster, is probably the second dominant species (species **B**) in the reaction mixture of Ag~2~S, AgCF~3~SO~3~ and **Py\\[6\\]**.
The successful isolation of discrete (in **2** and **3**) and joint (in **4**) silver sulfide clusters by varying the amount of the protective **Py\\[6\\]** proves the viability of synthesizing Ag--S binary nanoparticles through the coalescence and fusion of silver sulfide clusters. In view of the fact that **Py\\[6\\]** can be easily protonated by a strong acid due to the good Lewis basicity of pyridine, we added CF~3~COOH into the filtrate of the reaction mixture of Ag~2~S, AgCF~3~SO~3~ and **Py\\[6\\]** to interrupt the coordination interactions between the central silver sulfide cluster and the surrounding **Py\\[6\\]** macrocycles. The dismantlement of **Py\\[6\\]** led to a clear yellow solution. This solution sample retained its yellow color for an hour, but further standing resulted in a black precipitate. The formation and stepwise growth of metal sulfide nanoparticles was affirmed by transmission electron microscopy (TEM) photographs prepared at different intervals ([Fig. 3a--b](#fig3){ref-type="fig"}). We subsequently employed oleic amine^[@cit9]^ as a surfactant to stabilize the obtained nanoparticles. The resulting yellow solution can retain its solution homogeneity for several days. TEM images of this solution sample (denoted as **D-NP**) substantiated the formation of metal nanoparticles with an average diameter of 4 ± 0.4 nm ([Fig. 3c](#fig3){ref-type="fig"}). Fourier transform IR spectroscopy analysis of a solid sample of **D-NP** prepared by centrifugation clearly showed the absence of **Py\\[6\\]** and the presence of oleic amine molecules in **D-NP** (Fig. S3[‡](#fn2){ref-type="fn"}). This observation verified our above assumption that **Py\\[6\\]**-protected silver sulfide clusters could indeed undergo a deprotection process to act as nuclei and activated monomers for the fabrication of silver sulfide nanoparticles. Notably, we found that the acidified **Py\\[6\\]** ligands can be recycled after neutralization and extraction and then employed for subsequent bulk-to-nano transformation of silver sulfide.
(Fig. 3){#fig3}
On the other hand, high-resolution TEM (HR-TEM) of **D-NP** showed very ambiguous lattice fringes, in contrast to the clear pattern in previously reported Ag~2~S nanoclusters,^[@cit19]^ suggesting a poorly crystalline form of **D-NP**. The low crystallinity of **D-NP** was further substantiated by selected area electron diffraction (SAED) (Fig. S4[‡](#fn2){ref-type="fn"}), which exhibited two weak diffraction rings that could be indexed to the (--103) and (232) facets of monoclinic Ag~2~S (JCPDS card 14-0072). Energy dispersive X-ray spectroscopy (EDX) measurements of **D-NP** indicated the presence of elements Ag and S and further gave a Ag/S atomic ratio of 3.5 (Fig. S5[‡](#fn2){ref-type="fn"}). We also conducted XPS analysis of **D-NP**. As reflected in the XPS experiment (Fig. S6[‡](#fn2){ref-type="fn"}), we confirmed the +1 oxidation state of the silver atoms in **D-NP** based on their Ag 3d^5/2^ and Ag 3d^3/2^ binding energy peaks at 374.5 and 368.5 eV, respectively. In addition, the Ag/S molar ratio was determined to be 3.7 based on the XPS data, which is comparable to the energy dispersive X-ray spectroscopy (EDX) result of 3.5. This Ag/S elemental ratio in **D-NP** is larger than the values of around 2.0 in bulk Ag~2~S^[@cit17]^ and previously reported silver sulfide nanoclusters.^[@cit1],[@cit3],[@cit19]^ We hypothesized that such a high Ag/S ratio in **D-NP** may arise from the coalescence and fusion of the silver-rich Ag--S clusters in **2--4** by forming inter-cluster interactions (*e.g.*, argentophilicity) and sharing silver atoms similarly to the above-mentioned scenarios in **4**. In addition, the absorption spectrum of the cyclohexane solution of **D-NP** ([Fig. 3d](#fig3){ref-type="fig"}) exhibited a monotonic decrease in the whole recorded range, which is similar to the spectra of a number of reported silver sulfide nanocluster samples.^[@cit3c],[@cit5b]^ The band gap energy could be fitted by using the Bardeen or Tauc equations (Fig. S7[‡](#fn2){ref-type="fn"}). The direct transition of **D-NP** was thus deduced to be 4.0 eV, largely blue-shifted relative to the band gap of bulk α-Ag~2~S.^[@cit6]^ It is noteworthy that modifying the composition ratio of different constituents in nanomaterials in order to perform band gap adjustment has been frequently applied in ternary and quaternary alloyed semiconductor nanomaterials, but rarely in binary systems.^[@cit20]^ The method reported herein represents a viable means to tune the band gap of binary nanomaterials independent of their size.
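For reference, the Tauc analysis mentioned above takes its standard form for a direct allowed transition,

$$(\alpha h\nu)^{2} = B\,(h\nu - E_{\mathrm{g}}),$$

where α is the absorption coefficient and *B* a proportionality constant; extrapolating the linear region of (αhν)^2^ *versus* hν to zero absorption gives the optical gap *E* ~g~ (Fig. S7[‡](#fn2){ref-type="fn"}).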
In order to clarify the relationship between the energy gap and the Ag/S ratio in silver sulfide clusters, we carried out a theoretical HOMO--LUMO gap calculation of silver sulfide clusters with different Ag/S ratios *via* a hybrid density functional theory (DFT) method (see calculation details in the ESI[‡](#fn2){ref-type="fn"}). It is highly challenging to construct a structural model and optimize the structure of a 4 nm Ag--S cluster. We thus carried out the calculation on four small Ag--S clusters (Ag~8~S~4~, Ag~10~S~5~, Ag~11~S~3~(OH)~5~ and Ag~12~S~3~(OH)~6~) with different Ag : S ratios ([Fig. 4](#fig4){ref-type="fig"}). The initial structures of the four clusters were built up according to the reported structural motifs of α-Ag~2~S^[@cit17]^ and the crystal structures of silver sulfide clusters.^[@cit10],[@cit11]^ For the sake of simplifying the structural optimization and energy gap computation, hydroxyl groups were used in substitution of the coordinated peripheral CF~3~SO~3~ ^--^ and CF~3~CO~2~ ^--^ anions in the calculation for silver sulfide clusters with an Ag/S ratio larger than two. As reflected in the calculated results (Table S1[‡](#fn2){ref-type="fn"}), the two clusters Ag~8~S~4~ and Ag~10~S~5~ with an Ag/S ratio of two have HOMO--LUMO energy gaps of 2.45 and 1.86 eV, respectively. In contrast, the other two size-comparable clusters Ag~11~S~3~(OH)~5~ and Ag~12~S~3~(OH)~6~ with an Ag/S ratio larger than 2 have corresponding energy gaps of 3.11 and 3.09 eV. These calculated results agree well with the trend in our above experiment. Compared with the two clusters Ag~8~S~4~ and Ag~10~S~5~, Ag~11~S~3~(OH)~5~ and Ag~12~S~3~(OH)~6~ with a higher Ag : S elemental ratio exhibited more localized HOMO--LUMO orbitals ([Fig. 4](#fig4){ref-type="fig"} and S8[‡](#fn2){ref-type="fn"}). The less dispersed orbitals in these two Ag--S clusters ultimately led to the band gap enlargement.
(Fig. 4){#fig4}
Conclusions
===========
In summary, we have demonstrated a viable means of synthesizing silver sulfide nanomaterials directly from the bulk form with the assistance of coordinative macrocyclic ligands. This method could be successfully applied in the fabrication of other binary silver nanomaterials such as silver halides, acetylides *etc.*, based on our recent investigation. Considering that the size of macrocyclic ligands can dominate the nuclearity number of the obtained metal cluster aggregates, this approach can also be employed to achieve nanomaterials with different properties dependent on different nucleation centers. Following this study, we foresee the synthesis of numerous new nanomaterials with novel properties based on a synthetic revisiting inspired by the combination of bottom-up and top-down methods reported herein.
Financial support by the MOST (973 program, 2013CB834501 and 2011CB932501) and NNSFC (91127006, 21132005, 21121004) is gratefully acknowledged. This work is also supported by MOE (NCET-12-0296), Tsinghua University Initiative Scientific Research Program (2011Z02155) and Beijing Higher Education Young Elite Teacher Project (YETP0130). We are grateful to Profs. Mei-Xiang Wang and De-Xian Wang for their helpful discussion.
[^1]: †This work is dedicated to Professor Thomas C. W. Mak in celebration of his 78th birthday.
[^2]: ‡Electronic supplementary information (ESI) available: Synthetic procedures and crystal structure determination details. Analytical data, spectra, and images. X-ray crystallographic data for **1--4** in CIF format. CCDC [1006732](1006732), [1006826](1006826), [1006827](1006827) and [1006830](1006830). For ESI and crystallographic data in CIF or other electronic format see DOI: [10.1039/c4sc01884b](10.1039/c4sc01884b)
|
If you’re buying a house soon, you may be mulling over the idea of getting an adjustable-rate mortgage. Or you were, until you heard about the Federal Reserve’s recent decision to raise interest rates a quarter point. That likely put a chill on many homeowners’ desires to have an adjustable-rate mortgage, also known as an ARM.
If you currently have an ARM, you might be in full-blown-panic mode, wondering if your interest rate is going to climb soon.
“My voicemail and email has been inundated by my clients, friends and partners all asking the same question, ‘What should I do about my ARM mortgage and when?'” says Drew Grandi, a loan originator with Wintrust Mortgage in Massachusetts.
What should you do? It really depends. An ARM can be a terrific strategy for paying a mortgage, or a terrible one. Before you get one, or get rid of one, you need to think about how you want to proceed.
What Is an ARM?
It’s a home loan with a fixed interest rate, usually for five years — but after that, it can adjust every year. (That’s why you’ll often hear ARMs referred to as a 5/1 ARM, although you could have a fixed interest rate for a different period, like a 7/1 ARM or 10/1 ARM.)
After those five or more years are up, the interest rate can go up or down for the duration of your mortgage.
Because the interest rate could go up, it can be risky to have an adjustable rate. Nobody wants an ARM to cost them an arm and a leg.
So why get an ARM if your monthly mortgage payment can turn on you like that? Because the fixed rate for those five years or so is lower than a traditional fixed mortgage rate. It hasn’t been all that much lower in recent years, of course, since all mortgage rates have been low. Still, even a percentage point can reduce a mortgage payment enough to save a homeowner thousands of dollars in the long run.
How High Can an ARM Go?
While your monthly mortgage payment can adjust every year to a higher and higher rate, there is a limit to how much financial pain you’ll endure.
“There are protective caps, so the loan cannot adjust higher than the designated annual cap or lifetime overall rate cap,” says Staci Titsworth, regional manager of PNC Mortgage in Pittsburgh. This is looked upon as insurance against risk.
“Most ARMs are capped so that your interest rate will not exceed more than 5 percent above your original rate,” Grandi says.
That doesn’t sound so bad, but it can add up. Grandi offers an example of the homeowner who has a 5/1 ARM at 3 percent on a $300,000 mortgage. That would mean you’re paying $1,264.81 a month for the first five years, he says. If interest rates shot up, the most you would pay is 8 percent on that $300,000, which would mean a max monthly payment of $2,201.29, or about $936 more than your original payment.
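For anyone who wants to check those figures, they follow from the standard fixed-payment amortization formula (a back-of-the-envelope sketch; like the example above, it applies the capped 8 percent rate to the original $300,000 over a full 30-year term rather than to the remaining balance after five years):

$$M = P\,\frac{r(1+r)^{n}}{(1+r)^{n}-1}$$

With P = $300,000 and n = 360 monthly payments, a monthly rate of r = 0.03/12 gives a payment of about $1,264.81, and r = 0.08/12 gives about $2,201.29.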
If you are thinking about an ARM, Titsworth suggests having the loan officer run a few examples of payments, including the worst-case-scenario payment. It may be eye-opening.
What if You Have an ARM Now?
Don’t panic, Grandi says. “Everyone currently in an ARM should not necessarily be hounding their mortgage expert to refinance into a fixed-rate mortgage,” he says.
In fact, if you have a low-rate ARM now and you refinance into a 30-year fixed-rate mortgage, you’d likely pay around 4 percent and your monthly payment would jump a little. With that previous $300,000 ARM example, Grandi says, the homeowner’s payment would go up less than $200 a month.
That may well be worth it to have the comfort of knowing you have a fixed mortgage payment. But if you’re planning to move in the next couple of years, you’re probably better off keeping the ARM. That’s because one of the biggest factors in whether you should get an ARM is how long you plan to live in your house. Generally, if you’re going to live in your home for a short time before selling it, an ARM is considered a financially shrewd move.
“I’m a big believer in ARM loans and have one now,” Titsworth says. “Adjustable rate mortgages are a good option for consumers that have a shorter-term need, and also those that are comfortable with a little risk,” she adds.
Who Shouldn’t Get an ARM?
Do what you want, but if you’d like some general rules of thumb, there are three types of homeowners who should likely avoid an ARM.
— First-time homebuyers. Ali Vafai, president of The Money Source, a national correspondent lender and mortgage loan servicer on New York’s Long Island, says first-time homebuyers or those with little down payment should not choose ARM loans. Since rates are near historic lows today, he says it’s very likely rates will be higher in five years and payments would increase after the fixed period. Even if you’re not planning to stay very long, maybe you’ll discover you hate moving and realize you don’t want to go anywhere.
— People on a tight budget. So you scraped up your down payment, barely, and you figure you can afford to live in a house if you pare back your budget a bit. It sure doesn’t sound like you would do well if, in five years, your monthly mortgage payment shot up a couple hundred dollars a month.
— Natural-born worriers. As has been duly noted, ARMs are a risk. Before you get an ARM, ask yourself some risk-related questions, Grandi suggests.
For instance, when you’ve been living in your home for two years, will you suddenly have sleepless nights because you aren’t sure what your mortgage payment will be in three years?
“Do you expect continued doom and gloom for the United States’ economy with unemployment increasing and inflation staying low?” Grandi asks.
In other words, if you’re a worrier, the ARM is probably not for you.
Titsworth agrees. She loves the ARM, though, and points out what isn’t often emphasized: When your fixed rate ends and it adjusts, your monthly payment doesn’t necessarily have to go higher. “It’s possible the rate could drop,” she says.
Still, all in all, “ARM loans are typically not the product of choice for someone that believes they will be in their home long term and wants [the] peace of mind of knowing what their payment will be,” Titsworth says. “The long-term fixed rates come with less risk and therefore a higher rate.” |
Preparation of adriamycin gelatin microsphere-loaded decellularized periosteum that is cytotoxic to human osteosarcoma cells.
The purpose of this study was to develop a novel approach to treating osteosarcoma using a multipurpose scaffold for local drug delivery. Slow-releasing microspheres were designed to deliver the chemotherapy drug adriamycin (ADM), and a decellularized (D) periosteum scaffold (known to promote bone regeneration) was used to carry these microspheres. D-periosteum was obtained by physical and chemical decellularization. Histological results showed that the cellular components were effectively removed. The D-periosteum showed excellent cytocompatibility and the ability to promote adhesion and growth of fibroblasts. Two kinds of slow-releasing microspheres, adriamycin gelatin microspheres (ADM-GMS) and adriamycin poly(dl-lactide-co-glycolide) gelatin microspheres (ADM-PLGA-GMS), were prepared and anchored to the D-periosteum, yielding two types of drug-releasing regenerative scaffolds. The effectiveness of these two scaffolds in killing human osteosarcoma cells was tested by evaluating the viability of cancer cells cultured with the scaffolds over time. In summary, a gelatin/decellularized periosteum-based biologic scaffold material was designed for local delivery of chemotherapy drugs for osteosarcoma, with results showing that the scaffolds sustain release of the drug and suppress growth of the cancer cells in vitro. |
ORLANDO, Fla. — Merida, the sweet, independent princess from “Brave,” will officially join the Disney Princess Royal Court on Saturday.
Translation from Disney-speak: Merida is about to get her glam on.
Off-the-shoulder gown. Eye-liner. Lipstick. Wild red curls tamed into voluminous sexy locks. A coy expression enhanced by her new, fuller lips.
This is the Kardashian-ization of the Disney Princess.
It can be subtle (Rapunzel) or it can look like cosmetic surgery. Cinderella now looks strangely like Taylor Swift, while poor Belle — I can’t decide if she looks more like Kim, Kourtney or Khloe.
Bottom line: When it comes to the cartoon marketing images Disney uses to sell products — everything from toys to clothes to makeup — the princesses rarely resemble the characters we know from the movies.
As a mom, I wanted to know two things: Why is this happening and what can I do about it?
I turned to Peggy Orenstein, the writer and cult hero among moms who can’t stand that everything in the girls’ aisle at Target is either pink or princess.
“It’s sad,” Orenstein said of Merida’s makeover. “I don’t know why they had to do that to her.”
Actually she does know. At least, she has a theory.
Disney is a master at capturing preschoolers. The 5-and-under set is like Play-Doh in the hands of the mouse marketers.
They also want to hold on to those girls at 8, 15 and beyond. So the diva quality gets amped up.
Disney just launched a new makeup line. Last year high-end shoe designer Christian Louboutin unveiled a Cinderella glass slipper with crystals, what looks like a 6-inch heel (no wonder those shoes came off when she ran from the ball) and Louboutin’s signature red sole. And there’s Disney’s line of princess-inspired wedding gowns.
The princess thing is no longer a little-girl phase.
“I’m waiting for the Snow White coffin to come out,” said Orenstein, whose book “Cinderella Ate My Daughter” examines what the princess culture does to the littlest girls.
She’s kidding. But I wouldn’t put it past them.
Disney took sweet Merida and made her into a medieval siren. Merida is the only one of Disney’s 11 princesses who doesn’t end up with Prince Charming in the end. She bucks tradition by refusing to let her parents marry her off.
That doesn’t mean she isn’t feminine. Or that she isn’t pretty. You can be pretty while holding a bow and arrow. It worked for Katniss Everdeen.
But she didn’t fit the princess template. In the movie Merida actually looked like the teenager that she’s supposed to be, and she didn’t wear makeup. Instead of falling into a prince’s arms she begins to cherish her relationship with her mother.
Is it any wonder moms loved this movie? Now, though, Merida has gone sultry.
Disney apparently plans to use the cartoon marketing image on various products.
That matters because those are the images we buy and bring into our homes. And the made-over image sends the message that Merida is better when she’s glammed up.
What’s a disenchanted mom to do?
Orenstein likes to say “fight fun with fun.”
For example, if you buy your daughter a Cinderella costume, chances are she’ll use it to pretend she’s Cinderella. But if you buy her a piece of silk, she can use it as a princess dress or any number of other make-believe games. She’ll use her imagination more and parrot Disney plot lines less.
Or, as Merida might say in her Scottish brogue, give your daughter a chance to “change her fate.”
Intuitive Surgical paid $30.4 million in cash on March 5 for a more than three-decade-old building about two blocks from its current headquarters, which are on Kifer Road in Sunnyvale, according to Santa Clara County property records.
House Oversight and Government Reform Committee Chairman Elijah Cummings, D-Md., said in a letter sent Thursday to White House Counsel Pat Cipollone that the administration has failed to produce documents tied to Kushner and other officials despite requests from the committee since 2017. |
The Big Buck Hunter Summer Training Tour party challenged players across the city to make their favorite bar/restaurant the top hunting lodge. Stats' customers responded, playing more than 2,300 rounds (about 75 players per day) during July.
|
Now that the Twins and Marlins both finally have new ballparks, take a look back at what it took to get them.
While looking toward the future with our comprehensive slate of current content, we'd also like to recognize our rich past by drawing upon our extensive (and mostly free) online archive of work dating back to 1997. In an effort to highlight the best of what's gone before, we'll be bringing you a weekly blast from BP's past, introducing or re-introducing you to some of the most informative and entertaining authors who have passed through our virtual halls. If you have fond recollections of a BP piece that you'd like to nominate for re-exposure to a wider audience, send us your suggestion.
Stumping for a new stadium in Minneapolis and Miami used to be an annual rite of spring, but this year both the Twins and Marlins will be playing in flashy new facilities. That outcome wasn't so certain when Neil wrote the following article, which originally ran on May 4, 2005.
Is Mets owner Fred Wilpon really prepared to cut off his nose to spite the Madoff trustee's face?
Was it really only six months ago that Mets fans were hailing the arrival of Sandy Alderson as putting an end to one of the grimmest eras in a team history full of grimmage? Finally, the Omar Minaya epoch was at an end, and with it the days of throwing money at Oliver Perezes and Luis Castillos; from now on, the Mets could spend their cash reserves wisely, and leverage their big media market and their core of young(ish) talent to bring October baseball back to Flushing.
That plan essentially went out the window on the February day when Irving Picard, the trustee for the former clients of Ponzi schemer Bernie Madoff, announced that he was suing Mets owners Fred Wilpon and Saul Katz for $1 billion, on the grounds that they knew—or should have known—that his investment empire was built on fraud. As I wrote at the time, this shouldn't have had much impact on the Mets' finances—the team was still in decent financial shape, after all (even after a big dip in value, still the fifth-most valuable franchise in baseball, according to Forbes, with net profits over the last five years of more than $100 million)—and however the suit is resolved, it shouldn't hamstring the team's finances: Either the Wilpons would successfully fight off Picard's suit, in which case the threat was moot, or they'd lose, in which case they'd inevitably have to sell the team to pay the fine, and the question of whether or not to re-sign Jose Reyes would be a question for Mark Cuban, or a Dolan to be named later.
Does inviting more also-ran teams to playoff ball REALLY provoke higher player spending?
Economic cause-and-effect is a funny thing. Last week, Matt Swartz laid out the reasons why the proposed addition of an extra wild-card team in each league could end up enriching the players at the expense of the owners. It's a long argument and worth reading, but the nut of it comes down to: More wild cards equal more teams in the playoff hunt, teams in the playoff hunt are more likely to bid up player salaries, and so shoehorning two more teams into October, even for a single game, is likely to drive salaries skyward. As he wrote: "In the late 1980s and early 1990s, players earned only about 30 percent of league revenues, but from the mid-1990s through the present day they have taken in roughly 50 percent, and sometimes more." The apparent tipping point: 1995, the first year of the expanded playoffs.
Matt's article caught my eye for a couple of reasons. First off, as should be clear by now, I find it endlessly fascinating how tweaks to playoff systems can result in unexpected consequences. Second, I'm a bit of an apostate from the church of rational economic actors. I, too, once argued that teams only spend what players are likely to be worth in terms of new revenues—if A-Rod is getting $30 million a year, it's because somebody thinks he's likely to generate more than $30 million in fannies in seats, jerseys on torsos, and beers in guts. Since then, though, I've since seen too many GMs spending up to arbitrary "budgets" and then stopping—as if the goal is to come home from the Winter Meetings having spent all the money their moms gave them without going over—to really feel confident that there's anything rational about it.
Are April's record-low attendance marks a sign that the ticket bubble has burst?
The young baseball season is already shaping up to be lots of things—the Year of the Great Red Sox Collapse, maybe, or the Year of the Exploding Appendices—but one theme that might actually survive small-sample goofiness to have some legs is the Year the Fans Went Away. MLB attendance has been gradually sliding ever since its peak in 2007, but the early signs this year have been pretty alarming:
For years, sports economists treated the Forbes numbers as kind of a business-side equivalent to fielding stats: probably not all that accurate, but worth looking at because, hey, they're all we've got. All of that changed, though, after last summer's Leakgate, in which internal MLB documents leaked to Deadspin revealed the financial details for several MLB teams—and the income numbers matched the Forbes figures almost exactly. All those team execs who'd been complaining that the Forbes figures didn't reflect their actual losses—like the Florida Marlins' David Samson, who griped in 2007 that, "They look at revenue sharing numbers and the team's payroll and take the difference and see profit without looking at our expenses"—were, it turned out, blowing smoke.
Will MLB.tv ever make your home team's games available for web viewing?
Living in the future has its advantages. Back when I was a kid, in the late Pleistocene, catching a ballgame remotely meant either watching your local teams on TV or, if you were away from your living room, listening on the radio; maybe if you were very lucky and it was late at night and the ionosphere was aligned just right, you might be able to just barely tune in something that might possibly be Ernie Harwell on an out-of-town broadcast. Today, anyone with $99.99 burning a hole in their credit card ($119.99 if you want DVR-style gewgaws like fast-forward and rewind) can sign up for MLB.tv and watch any game, whether spring training, regular season, or postseason, on their computer, iPad, smartphone, or PlayStation 3—I'm sure that right this moment someone somewhere at MLB Advanced Media is working on an app that will stream hi-def baseball video live to the dashboard display of your flying car, just as soon as those are invented.
Any game, that is, unless it's one involving your local team. In that case, you're still stuck with 20th-century technology, and either tethered to your TV or forced to stick with audio. Any attempt to do otherwise will result in that dreaded message familiar to MLB.tv users: "We're sorry. Due to your current location you are blacked out of watching the game you have selected...."
Amid the horrifying images coming out of northeast Japan today, the repercussions of the Sendai quake are starting to be felt in the baseball world. Some of the reports so far that have been compiled by BP's writers:
Mix one Hank Steinbrenner comment, the Mets' money woes, and the A's and Rays' stadium situations, and suddenly it's 2001 all over again.
This time, it seems, it started with Ken Rosenthal. Two days after Hank Steinbrenner let fly with an attack on baseball's revenue-sharing plan that concluded, "if you don’t want to worry about teams in minor markets, don’t put teams in minor markets, or don’t leave teams in minor markets if they’re truly minor," Rosenthal penned a Fox Sports Exclusive that significantly upped the ante: "Don't be surprised if the “C” word—contraction—returns to the baseball lexicon soon," he wrote, noting that he'd been "hearing rumblings" that "certain big-market teams" wanted to whack the Rays and A's. In one scenario, wrote Rosenthal, Rays owner Stuart Sternberg would end up buying the Mets from the troubled Wilpons, while A's owner Lew Wolff did the same with the McCourt-wracked Dodgers, before watching their old teams go poof.
From there, it was off to the races, as every sportswriter with a slow news day grabbed Rosenthal's unsourced speculation and ran with it. In the St. Petersburg Times, John Romano wrote a column headlined "Threat to contract Tampa Bay Rays may be gaining credibility," in which he concluded that while the Rays probably wouldn't disappear overnight, "whether you want to acknowledge it or not, Tampa Bay is now on the clock"—one that he insisted could strike midnight in 2017, when Tropicana Field is paid off. CBS Sports' Ray Ratto fired back that contraction was not just a terrible idea, but a sign of America's cultural decline. (So far as I can understand it, this has something to do with bar fights and the CalTech basketball team.) The New York Daily News' Bill Madden, citing "one high-level baseball source," wrote that both A's owner Lew Wolff and Rays owner Stuart Sternberg "told Selig they are not prepared to continue operating under the present circumstances. Translation: 'If we can't get new stadiums, buy us out.'"
Could Bud Selig's plan to cram in more playoff teams have a silver lining?
Somewhere among the piles of spiral-bound notebooks stacked in my closet lies a short-lived diary titled "The Last Pennant Race." It recounts the day-by-day events of the last two months of the 1993 Yankees season, of which pretty much all I can remember is, first, that the Yankees managed to tie the eventual champion Blue Jays for first place roughly three dozen times, but never managed to take the lead on their own, and second, that in one late-season game, Don Mattingly, presaging the Jeffrey Maier incident by three years, got credit for a key home run despite it being caught by a fan leaning so far into the field of play that he could have shaken hands with the second baseman.
I chose the diary's title not because I was pessimistic about the Yankees' future—after ten years of Andy Hawkins and Torey Lovullo, I could see as well as anyone that players like Bernie Williams and Paul O'Neill were headed for bigger things—but because I knew that the term "pennant race" would never again have the same meaning. That's because it had already been announced that 1993 was the final season under the old four-division system; henceforth, the leagues were to be split in six, and wild cards would be born. (Thanks to the player strike that would wipe out the 1994 postseason, they were not actually baptized until the following season.) |
-780*s + 4396. Is 267 a factor of a(-6)?
False
Let h = 10 - -2. Let f be (208/(-6))/(h/(-18)). Let p = -8 + f. Does 11 divide p?
True
Let m(n) = 760*n - 674. Does 46 divide m(7)?
True
Let a be 10 - 1 - (-2)/((-6)/(-9)). Suppose -a*b - 22*b = -4896. Is 8 a factor of b?
True
Let w(v) = -v**3 + 2*v**2 - 309*v - 3267. Is 2 a factor of w(-11)?
False
Let a(g) = 19*g + 8. Let k = -78 + 75. Let r be a(k). Let l = 63 + r. Does 4 divide l?
False
Let x = 8414 + 32315. Does 33 divide x?
False
Suppose 2*b = 2*q + 4486, -5*b - 23*q + 11165 = -18*q. Suppose -14*n = -2788 - b. Is n a multiple of 8?
False
Suppose -772186 = -162*u + 5319014. Is u a multiple of 188?
True
Let b(d) = -65*d - 119. Let u be b(-7). Suppose 6*i - 9*i + u = 0. Does 3 divide i?
False
Suppose -5*c + 38579 = 4*x, 0*c = 25*x + 4*c - 241146. Does 182 divide x?
True
Suppose 77*b = 540545 + 628546. Does 30 divide b?
False
Let j(i) = -i**3 - 18*i**2 - 13*i + 60. Let m be j(-17). Is (26/m)/(-5*7/1540) a multiple of 3?
False
Let b = 4427 + 449. Is 2 a factor of b?
True
Let y be (23/(-10) - (-4)/8)*-5. Suppose k = -5*l + 382, -7*k = -y*k - l + 791. Suppose n - 3*o = 3*n - 161, k = 5*n + 2*o. Is 5 a factor of n?
False
Suppose -23*m + 3531 = -12*m. Suppose -28*p = -k - 24*p + m, 5*k = 3*p + 1639. Is 89 a factor of k?
False
Let w = 8640 - 4514. Is 67 a factor of w?
False
Let p(o) = -o**3 - 8*o**2 - 8*o - 2. Let q be p(-7). Suppose -q*j + 21 = -12*j. Does 12 divide -4 + (j - 146*-1)?
False
Suppose 0 = -4*o - v + 925, o - 3*v - 225 = -2*v. Suppose 3*t - o - 22 = 0. Is 4 a factor of t?
True
Suppose 0 = -4*s - 70 + 82. Suppose -s*u - 4 = -5*u. Suppose -u*i = i - 39. Is i a multiple of 13?
True
Let u(t) = -22130*t**3 + 33*t**2 + 33*t. Is 39 a factor of u(-1)?
False
Let a = -81 + 270. Let j = 206 - a. Is 4 a factor of j?
False
Suppose 3*n - 3*t + 0*t - 3 = 0, n + 4*t = -4. Suppose 3*r + 1 - 4 = n. Does 8 divide r/(-5) + (-618)/(-15)?
False
Let p(f) be the first derivative of -25*f**3/3 + f**2 + f - 27. Let g be p(-2). Let t = g - -188. Is 5 a factor of t?
True
Is (-16 - -766)/(2 - 1) a multiple of 75?
True
Let w be 146/(-1)*((-3)/(-6) + -1). Let m = w - -363. Suppose -1723 + m = -13*q. Does 36 divide q?
False
Let u(d) = -18*d + 2 + 22*d - 11*d. Is u(-7) a multiple of 15?
False
Does 113 divide (-204)/170 - (-82991)/5?
False
Suppose 2*y = -4*v + 10486, -4*v - 420*y = -425*y - 10493. Is 43 a factor of v?
False
Is 38 a factor of ((-8957)/(-65) - -5)/(870/(-175) + 5)?
False
Suppose -k = -6*j - 9212, 3*k + k + 2*j - 36874 = 0. Suppose -2*s - 20*s + k = 0. Does 37 divide s?
False
Let j(g) = 2*g**2 + 20*g + 1044. Is 9 a factor of j(-63)?
True
Is 39 a factor of ((-1 - 1)/(800/(-3100)))/((-2)/(-1888))?
False
Suppose 195*b = 207*b - 504. Suppose 2*x - 30 = -10. Let o = b + x. Is o a multiple of 26?
True
Suppose 0 = 5*j - 5, 8640*t + 161611 = 8643*t - 2*j. Is t a multiple of 65?
False
Let g = 313 + -318. Is 18 a factor of (3563/(-14) - g)/(1/(-2))?
False
Let v = 9 - 7. Suppose 5*d + 2 = v*x - 0, 0 = -4*x + 4. Suppose -n - 2 + 20 = d. Is n a multiple of 3?
True
Let w = 47 + -31. Let h = 15 - w. Is (4 - h) + -3 + (-148)/(-1) a multiple of 30?
True
Let k = 418 + -426. Let t(z) = -11*z + 66. Does 11 divide t(k)?
True
Let y = 810 - -5648. Is y a multiple of 57?
False
Let v = 413 - 108. Let l = v + 73. Does 21 divide l?
True
Let r(p) = -p**3 + 3*p**2 + 17*p - 13. Let h be r(4). Suppose -8820 = 11*f - h*f. Is f a multiple of 8?
False
Suppose 0 = 3*p - 7*p + 84. Let q = 25 - p. Suppose 5*k + 36 = 2*w - 29, 20 = -q*k. Does 7 divide w?
False
Suppose 87*c - 17*c - 430 = 1180. Is c even?
False
Let v = -47 - -36. Let r be 0 + 2/v + 144/66. Suppose -5*a = -2*o - 321, -r*o - 2 = -6. Does 13 divide a?
True
Let f = 428 - 405. Does 5 divide ((-766)/4)/(((-322)/28)/f)?
False
Suppose 3*i + 7257 = 23543 + 49858. Does 23 divide i?
False
Suppose -4616 = 2*m - 2*f - 38622, 5*m - 85015 = -4*f. Is m a multiple of 74?
False
Let t(j) = -18*j - 13. Let y be (-6)/(-2*(-20)/8 - 4). Let d be t(y). Is 19 a factor of d*(-4)/40*-4?
True
Let t(i) = -i**3 + 20*i**2 - 30*i - 17. Suppose 118 - 27 = 7*w. Does 50 divide t(w)?
False
Let s be (-6)/16 - 513/(-152). Suppose o + s + 3 = 0. Let x(t) = -12*t. Is 24 a factor of x(o)?
True
Let i(a) = 3*a**2 + 2*a - 12. Let b be i(-14). Suppose 11*y - 178 = b. Is 6 a factor of y?
True
Let z = 212 + -470. Let u = 370 + z. Does 7 divide u?
True
Let v(h) = -h**3 + 9*h**2 - 11*h + 7. Suppose -3*p + 32 = 2*y, -3*p + 2*p - 12 = -5*y. Let a be v(p). Is 13 a factor of (-2)/(-17) - 2531/a?
False
Suppose 23*s + 3*a - 4618 = 22*s, -5*s - 4*a = -23024. Is 92 a factor of s?
True
Let f = 8732 - 4838. Is f a multiple of 66?
True
Let o = -753 - -2126. Let f = o + -784. Is f a multiple of 66?
False
Let a = -38 - -39. Let u be a/3 - (-20)/12. Suppose 0 = -2*w + 3*w - u*f - 3, -5*f = w - 24. Does 5 divide w?
False
Suppose 3*f + 10 = -3*m - 2*f, 5*m - 30 = f. Suppose m*s + 15 = 15. Suppose s*g - 2*g + 596 = 0. Is 40 a factor of g?
False
Suppose 35*f - 6*f = -17*f + 461656. Is 26 a factor of f?
True
Let f(d) = -23*d**3 + d**2 - d - 1. Let q be f(-1). Suppose q = -4*b - v - 0*v, 2*b + 12 = -4*v. Is b + 2/1 - (-23 + 3) a multiple of 8?
True
Let n(w) = 3*w**2 + 5*w + 2. Suppose u - 3*p + 14 = 0, 0 = 3*p + 2*p - 20. Let k be n(u). Suppose -49 = -h - 5*m, 3*h - k*m = h + 98. Is 6 a factor of h?
False
Let h = 9 + -9. Suppose h = -2*l - 3*l + 45. Suppose -l*b + 120 = -4*b. Is b a multiple of 12?
True
Let b = 16120 + -10982. Does 14 divide b?
True
Let u = -2921 - -3277. Is u a multiple of 89?
True
Let j(n) = 2*n**2 - 23*n - 110. Let f = 282 + -298. Does 70 divide j(f)?
True
Suppose 2*i = 3*i + 8. Let y be (i/6)/(2/(-24)). Suppose -y*b + 12*b = -96. Is 12 a factor of b?
True
Is 72 a factor of (312/(-40)*1)/(1 + (-482)/480)?
True
Let l(i) = i**2 - 9*i + 9. Let m be l(8). Let v(j) = 161*j**3 + 2*j**2 - 1. Let d be v(m). Let g = d - 110. Is g a multiple of 4?
True
Let o be (5 + -6)*1564/(8/(-2)). Let q = o - 259. Is 14 a factor of q?
False
Does 74 divide -1 + 2 - (2 + -7964)/3?
False
Let l(r) be the first derivative of 7*r**3/3 + 5*r**2/2 + 12*r - 67. Let v = 8 - 11. Is l(v) a multiple of 9?
False
Let u(z) = -2*z**3 - 45*z**2 - 500*z - 14. Is 137 a factor of u(-16)?
True
Suppose 148 + 90 = 7*x. Suppose z = -z - 38. Let p = x - z. Does 13 divide p?
False
Suppose -9*x + 6*x + 1800 = 0. Suppose 0 = 22*q - 14*q - x. Is 25 a factor of q?
True
Let y = 424 - 353. Let h(t) = t**3 - 4*t**2 + 4*t + 1. Let f be h(3). Suppose 0 = f*l - 2*g - 519 - y, 2*l + 5*g = 325. Is l a multiple of 15?
True
Let u(i) = i + 11. Let z be u(-11). Let w be -54 + (-2)/(-4)*z. Let d = w + 83. Is d a multiple of 19?
False
Suppose 4*t = 3*d - 11, 0 = -5*t - 3*d - 0*d - 34. Let f(b) = -b**3 - 3*b**2 - 2*b + 3. Is 7 a factor of f(t)?
True
Suppose q - 3*q = -4. Let c = -1760 - -2612. Is q/(8/c)*1 a multiple of 14?
False
Let f(y) be the third derivative of -26/3*y**3 - 2/3*y**4 - 23*y**2 + 0*y + 0. Is 14 a factor of f(-12)?
True
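The derivative question above can be checked the same way (a worked derivation added here for illustration):

\[
\frac{d^3}{dy^3}\!\left(-\tfrac{2}{3}y^4 - \tfrac{26}{3}y^3 - 23y^2\right) = -16y - 52,
\qquad f(-12) = -16(-12) - 52 = 140 = 14 \cdot 10,
\]

so 14 is indeed a factor of f(-12), consistent with the stated answer.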
Let u = 14956 - 11177. Is 22 a factor of u?
False
Let n be (4/8)/(6/(-216)). Suppose 5*i + 1062 = -i. Is ((-12)/n)/((-2)/i) a multiple of 10?
False
Let o = -208 - -262. Is (o/(-36))/((-1)/24) a multiple of 3?
True
Suppose 164*r = 170*r. Suppose r = -4*k + 2*k - 10, 4*t - 1685 = k. Is t a multiple of 19?
False
Let u(b) = b**3 - 10*b**2 - 6*b - 16. Let d(c) be the first derivative of c**4/2 - 7*c**3 - 6*c**2 - 33*c - 12. Let w(y) = 2*d(y) - 5*u(y). Does 18 divide w(7)?
False
Let d be 6290*7*3/42. Let u = d + -1906. Is u a multiple of 21?
True
Let r = 3246 - -6257. Is 85 a factor of r?
False
Let p = 3901 - 1181. Does 44 divide p?
False
Suppose -1693 - 1817 = -5*r. Suppose l - 5*l = 8, -r = -2*o - 3*l. Is 49 a factor of o?
False
Suppose 2*y + 1077 = -h + 4291, 2*h + 8*y - 6432 = 0. Does 22 divide h?
True
Suppose 23 = 12*r - 37. Suppose -r*m - 42 = -547. Does 8 divide m?
False
Let o(v) = v**2 + 2*v + 1. Let i be o(3). Let p = 134 - i. Does 64 divide p?
False
Let x = 401 + 1261. Does 3 divide x?
True
Let m(g) = -g**2 - 28*g + 30. Let u = 86 - 103. Does 36 divide m(u)?
False
Suppose -260*o = -265*o + 2*w + 15154, -2*o + 6054 = 3*w. Is o a multiple of 4?
False
Is 9 a factor of 2/4*1944*(12 + -1 + -10)?
True
Let o = 203 + -200. Suppose x + 4*w - 204 = 0, -889 + 86 =
WHAT IT IS
Accenture Research Life Sciences Cloud is a cloud-based informatics platform designed to help scientific research-intensive organizations in the life sciences industry improve productivity, efficiency and innovation in the early stages of drug development.
HOW IT WORKS
The Accenture Research Life Sciences Cloud enables life sciences researchers and informatics professionals to quickly aggregate, access and analyze research data from multiple applications. The data are accessible from a single interface, with integrated workflow, reporting and analytics capabilities. Incorporating a modern user interface and a secure, multi-tenant environment, the platform enables easier collaboration across the R&D enterprise, including with external partners.
WHO WE ARE
Brad Michel is a Managing Director in Accenture’s global Accelerated R&D Services group. He is responsible for driving Accenture’s strategy, offerings and services across the Life Sciences R&D functions, from research through late stage development, and into the early stages of commercialization, launch and patient services. Prior to this role, Brad had responsibility for some of Accenture’s largest R&D client accounts.
Working in Life Sciences his entire career, Brad has a broad background in business advisory and management consulting, IT strategy and solution delivery, and outsourcing operations across Pharmaceutical R&D. His focus is on helping clients drive transformation through innovation, operational efficiency, and cost savings.
He has a Bachelor’s degree in Computer Engineering from Villanova University, with minors in Business and Computer Science. He lives in suburban Philadelphia with his wife and three sons.
Joe Donahue is a Managing Director with Accenture’s Accelerated Research and Development Services business, where he leads Accenture’s global life sciences research practice. Mr. Donahue has more than twenty-five years of executive-level entrepreneurial and Board experience in global life sciences and technology companies and private equity firms. Prior to joining Accenture, his roles included Senior Vice President at BioReference Laboratories / GeneDx, the third largest diagnostics testing laboratory in the world, and Senior Vice President Life Sciences with Thomson Reuters Intellectual Property & Science, as well as leadership or Board positions at several early-stage life sciences research, informatics and analytics companies. He has degrees in Medicinal Chemistry and Computer Science from Villanova University in Villanova, PA.
Jens Hoefkens is a Principal Director in Accenture’s global Accelerated R&D Services group. He is responsible for the strategy of the Accenture Research Life Sciences Cloud (RLSC) and directs the platform’s product management group. In addition, he manages partner relationships for the Accenture Research Life Sciences Cloud.
Working in Life Sciences his entire career, Jens has a broad background in pharmaceutical research, translational medicine, and pre-clinical development. He has worked with world-leading pharma, biotech, and agriculture companies and academic research centers, and has been responsible for a wide-ranging portfolio of successful enterprise informatics solutions, including Genedata Expressionist, PerkinElmer Signals, and TIBCO Spotfire.
Jens holds a dual major Ph.D. in Mathematics and Physics from Michigan State University. He lives in the Boston area with his wife and two children.
Anthony Sokolnicki is a Sr. Principal in Accenture’s global Accelerated R&D Services group. He is responsible for driving technology and platform strategy and offerings across life sciences. Anthony is currently the lead architect for the Accenture Research Life Sciences Cloud platform, responsible for driving the technology strategy and vision for the capability.
Anthony has worked with leading-edge technologies throughout his career. Previously he was part of Accenture’s Technology Labs, where he focused on technology innovation applied to industries ranging from financial services to resource management. Anthony has a Bachelor’s degree in Mechanical Engineering from the University of Washington and an MBA from the University of Oklahoma with an emphasis in MIS and Finance.
Mike Stapleton is responsible for the life sciences business service strategy at Accenture. He drives the execution of a five-year strategy and business plan, working across line-of-business, platform and domain leaders to develop and execute the strategies necessary to grow the life sciences business, including M&A, alliances, transformative deal shapes, and industry adjacencies.
Prior to joining Accenture, Mike held the position of VP & CIO, R&D IT at Merck (MSD), where he had global responsibility for all IT and informatics in support of Merck Research Laboratories (MRL), from discovery through clinical, regulatory, safety and RWE. At Merck he most recently led a broad-ranging, five-year strategy and talent-planning initiative for MRL IT and drove cross-industry thought leadership on pre-competitive business models. For over twenty years before Merck, Mike led scientific businesses serving the pharmaceutical, biotech and research industries, as GM Informatics & VP Growth and Innovation at PerkinElmer, as VP Informatics, Marketing and eBusiness at Life Technologies, and as EVP & COO at Accelrys. Mike has performed scientific research in industry at British Petroleum and in academia at Cornell University; he has a Ph.D. in Computational Chemistry and is a Fellow of the Royal Society of Chemistry.
<?php

/*
 * This file is part of the Symfony package.
 *
 * (c) Fabien Potencier <fabien@symfony.com>
 *
 * For the full copyright and license information, please view the LICENSE
 * file that was distributed with this source code.
 */

namespace Symfony\Component\Translation\Tests\Loader;

use PHPUnit\Framework\TestCase;

/**
 * Base class for locale-dependent tests: any test extending it is skipped
 * automatically when the intl PHP extension is not available.
 */
abstract class LocalizedTestCase extends TestCase
{
    protected function setUp()
    {
        if (!\extension_loaded('intl')) {
            $this->markTestSkipped('Extension intl is required.');
        }
    }
}
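For illustration only, here is a hypothetical subclass showing how a locale-dependent test might build on this base class; the class name and test body are invented for this sketch and are not part of the Symfony source.

<?php

namespace Symfony\Component\Translation\Tests\Loader;

// Hypothetical example, not part of the Symfony package: it inherits the
// intl guard from LocalizedTestCase, so it is skipped when intl is missing.
class ExampleIntlDependentTest extends LocalizedTestCase
{
    protected function setUp()
    {
        // Run the intl check first; markTestSkipped() stops the test here
        // if the extension is not loaded.
        parent::setUp();
    }

    public function testLocaleAwarePrimaryLanguage()
    {
        // \Locale is provided by the intl extension, so this only executes
        // once the guard above has passed.
        $this->assertSame('en', \Locale::getPrimaryLanguage('en_US'));
    }
}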